RT/ AI system for high precision recognition of hand gestures

Paradigm
Published in Paradigm
26 min read · Aug 20, 2020

Robotics biweekly vol.11, 6th August — 20th August

TL;DR

  • Scientists have developed an Artificial Intelligence (AI) system that recognises hand gestures by combining skin-like electronics with computer vision.
  • Researchers used artificial intelligence and genetic analyses to examine the structure of the inner surface of the heart using 25,000 MRI scans. They found that the complex network of muscle fibers lining the inside of the heart, called trabeculae, allows blood to flow more efficiently and can influence the risk of heart failure. The study answers very old questions in basic human physiology and leads to new directions for understanding heart diseases.
  • Scientists have developed a 1cm by 1cm wireless artificial aquatic polyp, which can remove contaminants from water. Apart from cleaning, this soft robot could also be used in medical diagnostic devices by aiding in picking up and transporting specific cells for analysis.
  • Although true ‘cyborgs’ are science fiction, researchers are moving toward integrating electronics with the body. Such devices could monitor tumors or replace damaged tissues. But connecting electronics directly to human tissues in the body is a huge challenge. Today, a team is reporting new coatings for components that could help them more easily fit into this environment.
  • People rarely use just one sense to understand the world, but robots usually only rely on vision and, increasingly, touch. Researchers find that robot perception could improve markedly by adding another sense: hearing.
  • Researchers have designed an algorithm that allows an autonomous ground vehicle to improve its existing navigation systems by watching a human drive.
  • Graphene buckles when cooled while attached to a flat surface, resulting in pucker patterns that could benefit the search for novel quantum materials and superconductors, according to new research.
  • Engineers have developed a flexible, portable measurement system to support design and repeatable laboratory testing of fifth-generation (5G) wireless communications devices with unprecedented accuracy across a wide range of signal frequencies and scenarios.
  • NASA JPL is developing autonomous capabilities that could allow future Mars rovers to go farther, faster and do more science. By training machine learning models on the Maverick2 supercomputer, the team developed and optimized models for Drive-By Science and Energy-Optimal Autonomous Navigation.
  • The Multi-robot Systems Group at FEE-CTU in Prague is working on an autonomous drone that detects fires and then shoots an extinguisher capsule at them.
  • The experiment with HEAP (Hydraulic Excavator for Autonomous Purposes) demonstrates the latest research in on-site and mobile digital fabrication with found materials. The embankment prototype in natural granular material was achieved using state of the art design and construction processes in mapping, modelling, planning and control. The entire process of building the embankment was fully autonomous. An operator was only present in the cabin for safety purposes.
  • The Simulation, Systems Optimization and Robotics Group (SIM) of Technische Universität Darmstadt’s Department of Computer Science conducts research on cooperating autonomous mobile robots, biologically inspired robots and numerical optimization and control methods.
  • Check out upcoming robotics events (mostly virtual) below. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025. It is predicted that this market will hit the 100 billion U.S. dollar mark in 2020.

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista

Latest Research

Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors

by Ming Wang, Zheng Yan, Ting Wang, Pingqiang Cai, Siyu Gao, Yi Zeng, Changjin Wan, Hong Wang, Liang Pan, Jiancan Yu, Shaowu Pan, Ke He, Jie Lu, Xiaodong Chen in Nature Electronics

Scientists have developed an Artificial Intelligence (AI) system that recognises hand gestures by combining skin-like electronics with computer vision.

The recognition of human hand gestures by AI systems has been a valuable development over the last decade and has been adopted in high-precision surgical robots, health monitoring equipment and in gaming systems.

AI gesture recognition systems that were initially visual-only have been improved upon by integrating inputs from wearable sensors, an approach known as ‘data fusion’. The wearable sensors recreate the skin’s sensing ability, known as ‘somatosensation’.

However, gesture recognition precision is still hampered by the low quality of data arriving from wearable sensors, typically due to their bulkiness and poor contact with the user, and the effects of visually blocked objects and poor lighting. Further challenges arise from the integration of visual and sensory data as they represent mismatched datasets that must be processed separately and then merged at the end, which is inefficient and leads to slower response times.

To tackle these challenges, the NTU team created a ‘bioinspired’ data fusion system that uses skin-like stretchable strain sensors made from single-walled carbon nanotubes, and an AI approach that resembles the way skin sensing and vision are handled together in the brain.

The NTU scientists developed their bio-inspired AI system by combining three neural network approaches in one system: they used a ‘convolutional neural network’, which is a machine learning method for early visual processing, a multilayer neural network for early somatosensory information processing, and a ‘sparse neural network’ to ‘fuse’ the visual and somatosensory information together.
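
For readers who want a concrete picture, here is a minimal PyTorch-style sketch of such a three-branch architecture: a small convolutional network for images, a multilayer perceptron for strain-sensor readings, and a fusion layer whose activations are nudged toward sparsity. The layer sizes, input shapes and sparsity penalty are illustrative assumptions, not the architecture published in Nature Electronics.

```python
# Illustrative three-branch visual/somatosensory fusion network.
# Layer sizes, input shapes and the L1 sparsity term are assumptions,
# not the model from the Nature Electronics paper.
import torch
import torch.nn as nn

class VisuoSomatosensoryFusion(nn.Module):
    def __init__(self, n_strain_channels=5, n_gestures=10):
        super().__init__()
        # Early visual processing: small convolutional network.
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
        )
        # Early somatosensory processing: multilayer perceptron over
        # the stretchable strain-sensor readings.
        self.somatosensory = nn.Sequential(
            nn.Linear(n_strain_channels, 32), nn.ReLU(),
            nn.Linear(32, 64), nn.ReLU(),
        )
        # Fusion layer; an L1 penalty on its activations (in the loss below)
        # is one simple way to encourage sparse fused representations.
        self.fusion = nn.Linear(64 + 64, 128)
        self.classifier = nn.Linear(128, n_gestures)

    def forward(self, image, strain):
        v = self.visual(image)          # (batch, 64) visual code
        s = self.somatosensory(strain)  # (batch, 64) somatosensory code
        fused = torch.relu(self.fusion(torch.cat([v, s], dim=1)))
        return self.classifier(fused), fused

model = VisuoSomatosensoryFusion()
logits, fused = model(torch.randn(2, 3, 64, 64), torch.randn(2, 5))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 3])) \
       + 1e-3 * fused.abs().mean()      # sparsity-encouraging term
```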

The result is a system that can recognise human gestures more accurately and efficiently than existing methods.

Lead author of the study, Professor Chen Xiaodong, from the School of Materials Science and Engineering at NTU, said, “Our data fusion architecture has its own unique bioinspired features which include a human-made system resembling the somatosensory-visual fusion hierarchy in the brain. We believe such features make our architecture unique to existing approaches.”

“Compared to rigid wearable sensors that do not form an intimate enough contact with the user for accurate data collection, our innovation uses stretchable strain sensors that comfortably attach to the human skin. This allows for high-quality signal acquisition, which is vital to high-precision recognition tasks,” added Prof Chen, who is also Director of the Innovative Centre for Flexible Devices (iFLEX) at NTU.

The team comprising scientists from NTU Singapore and the University of Technology Sydney (UTS) published their findings in the scientific journal Nature Electronics in June.

High recognition accuracy even in poor environmental conditions

To capture reliable sensory data from hand gestures, the research team fabricated a transparent, stretchable strain sensor that adheres to the skin but cannot be seen in camera images.

As a proof of concept, the team tested their bio-inspired AI system using a robot controlled through hand gestures and guided it through a maze.

Results showed that hand gesture recognition powered by the bio-inspired AI system was able to guide the robot through the maze with zero errors, compared to six recognition errors made by a visual-based recognition system.

High accuracy was also maintained when the new AI system was tested under poor conditions including noise and unfavourable lighting. The AI system worked effectively in the dark, achieving a recognition accuracy of over 96.7 per cent.

First author of the study, Dr Wang Ming from the School of Materials Science & Engineering at NTU Singapore, said, “The secret behind the high accuracy in our architecture lies in the fact that the visual and somatosensory information can interact and complement each other at an early stage before carrying out complex interpretation. As a result, the system can rationally collect coherent information with less redundant data and less perceptual ambiguity, resulting in better accuracy.”

Providing an independent view, Professor Markus Antonietti, Director of Max Planck Institute of Colloids and Interfaces in Germany said, “The findings from this paper bring us another step forward to a smarter and more machine-supported world. Much like the invention of the smartphone which has revolutionised society, this work gives us hope that we could one day physically control all of our surrounding world with great reliability and precision through a gesture.”

“There are simply endless applications for such technology in the marketplace to support this future. For example, from a remote robot control over smart workplaces to exoskeletons for the elderly.”

An artificial aquatic polyp that wirelessly attracts, grasps, and releases objects

by Marina Pilz da Cunha, Harkamaljot S. Kandail, Jaap M. J. den Toonder, Albert P. H. J. Schenning in Proceedings of the National Academy of Sciences

Scientists have developed a 1cm by 1cm wireless artificial aquatic polyp, which can remove contaminants from water. Apart from cleaning, this soft robot could also be used in medical diagnostic devices by aiding in picking up and transporting specific cells for analysis.

Design of artificial aquatic polyp. (A) Photograph of a marine polyp, reproduced with permission from photographer Robin Jeffries. (B) The artificial polyp inspired by the design of marine polyps. The device is composed of two LCN films with planar alignment that operate as the device’s grasping “arms.” The LCNs are connected to a flexible PDMS/iron oxide pillar with a drop of UV-curable glue. The LCN is a highly cross-linked network containing azobenzene diacrylate mesogens, A1. (C, i) Upon rotation of a magnet underneath the polyp, the structure undergoes a bending and rotational motion which when submerged in a fluid causes an effective flow. (C, ii) Upon UV light irradiation the polyp is made to close and blue light reversibly opens the structure.

The polyp was developed by scientists from WMG at the University of Warwick together with a team led by the Eindhoven University of Technology in the Netherlands.

In the paper, ‘An artificial aquatic polyp that wirelessly attracts, grasps, and releases objects’, researchers demonstrate how their artificial aquatic polyp moves under the influence of a magnetic field, while the tentacles are triggered by light. A rotating magnetic field under the device drives a rotating motion of the artificial polyp’s stem. This motion results in the generation of an attractive flow which can guide suspended targets, such as oil droplets, towards the artificial polyp.

Once the targets are within reach, UV light can be used to activate the polyp’s tentacles, composed of photo-active liquid crystal polymers, which then bend towards the light enclosing the passing target in the polyp’s grasp. Target release is then possible through illumination with blue light.
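
Read as a control sequence, the cycle above is: rotate the magnet to generate the attracting flow, switch on UV light to close the tentacles around the target, then switch to blue light to release it. The sketch below is a hypothetical pseudo-driver for that loop; the hardware handles (magnet, uv_led, blue_led, target_in_reach) are assumed placeholders, not a real API.

```python
# Hypothetical attract/grasp/release sequence for the artificial polyp.
# All hardware interfaces here are assumed placeholders.
import time

def attract_grasp_release(magnet, uv_led, blue_led, target_in_reach,
                          stir_rpm=300, hold_s=5.0):
    magnet.rotate(rpm=stir_rpm)      # rotating field -> stem motion -> attractive flow
    while not target_in_reach():     # wait until a target drifts within the tentacles
        time.sleep(0.1)
    magnet.stop()
    uv_led.on()                      # UV light bends the LCN tentacles closed (grasp)
    time.sleep(hold_s)
    uv_led.off()
    blue_led.on()                    # blue light reversibly reopens the tentacles (release)
    time.sleep(hold_s)
    blue_led.off()
```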

Dr Harkamaljot Kandail, from WMG, University of Warwick, was responsible for creating state-of-the-art 3D simulations of the artificial aquatic polyps. The simulations are important to help understand and elucidate how the stem and tentacles generate the flow fields that attract particles in the water.

The simulations were then used to optimise the shape of the tentacles so that the floating particles could be grabbed quickly and efficiently.

Dr Harkamaljot Kandail, from WMG, University of Warwick comments:

“Corals are such a valuable ecosystem in our oceans, I hope that the artificial aquatic polyps can be further developed to collect contaminant particles in real applications. The next stage for us to overcome before being able to do this is to successfully scale up the technology from laboratory to pilot scale. To do so we need to design an array of polyps which work harmoniously together where one polyp can capture the particle and pass it along for removal.”

Marina Pilz Da Cunha, from the Eindhoven University of Technology, Netherlands adds:

“The artificial aquatic polyp serves as a proof of concept to demonstrate the potential of actuator assemblies and serves as an inspiration for future devices. It exemplifies how motion of different stimuli-responsive polymers can be harnessed to perform wirelessly controlled tasks in an aquatic environment.”

‘Cyborg’ technology could enable new diagnostics, merger of humans and AI

Although true ‘cyborgs’ are science fiction, researchers are moving toward integrating electronics with the body. Such devices could monitor tumors or replace damaged tissues. But connecting electronics directly to human tissues in the body is a huge challenge. Today, a team is reporting new coatings for components that could help them more easily fit into this environment.

The researchers will present their results today at the American Chemical Society (ACS) Fall 2020 Virtual Meeting & Expo.

“We got the idea for this project because we were trying to interface rigid, inorganic microelectrodes with the brain, but brains are made out of organic, salty, live materials,” says David Martin, Ph.D., who led the study. “It wasn’t working well, so we thought there must be a better way.”

Traditional microelectronic materials, such as silicon, gold, stainless steel and iridium, cause scarring when implanted. For applications in muscle or brain tissue, electrical signals need to flow for them to operate properly, but scars interrupt this activity. The researchers reasoned that a coating could help.

“We started looking at organic electronic materials like conjugated polymers that were being used in non-biological devices,” says Martin, who is at the University of Delaware. “We found a chemically stable example that was sold commercially as an antistatic coating for electronic displays.” After testing, the researchers found that the polymer had the properties necessary for interfacing hardware and human tissue.

“These conjugated polymers are electrically active, but they are also ionically active,” Martin says. “Counter ions give them the charge they need so when they are in operation, both electrons and ions are moving around.” The polymer, known as poly(3,4-ethylenedioxythiophene) or PEDOT, dramatically improved the performance of medical implants by lowering their impedance two to three orders of magnitude, thus increasing signal quality and battery lifetime in patients.

Martin has since determined how to specialize the polymer, putting different functional groups on PEDOT. Adding a carboxylic acid, aldehyde or maleimide substituent to the ethylenedioxythiophene (EDOT) monomer gives the researchers the versatility to create polymers with a variety of functions.

“The maleimide is particularly powerful because we can do click chemistry substitutions to make functionalized polymers and biopolymers,” Martin says. Mixing unsubstituted monomer with the maleimide-substituted version results in a material with many locations where the team can attach peptides, antibodies or DNA. “Name your favorite biomolecule, and you can in principle make a PEDOT film that has whatever biofunctional group you might be interested in,” he says.

Most recently, Martin’s group created a PEDOT film with an antibody for vascular endothelial growth factor (VEGF) attached. VEGF stimulates blood vessel growth after injury, and tumors hijack this protein to increase their blood supply. The polymer that the team developed could act as a sensor to detect overexpression of VEGF and thus early stages of disease, among other potential applications.

Other functionalized polymers have neurotransmitters on them, and these films could help sense or treat brain or nervous system disorders. So far, the team has made a polymer with dopamine, which plays a role in addictive behaviors, as well as dopamine-functionalized variants of the EDOT monomer. Martin says these biological-synthetic hybrid materials might someday be useful in merging artificial intelligence with the human brain.

Ultimately, Martin says, his dream is to be able to tailor how these materials deposit on a surface and then to put them in tissue in a living organism. “The ability to do the polymerization in a controlled way inside a living organism would be fascinating.”

Genetic and functional insights into the fractal structure of the heart

by Hannah V. Meyer, Timothy J. W. Dawes, Marta Serrani, Wenjia Bai, Paweł Tokarczuk, Jiashen Cai, Antonio De Marvao, Albert Henry, R. Thomas Lumbers, et al. in Nature

Researchers used artificial intelligence and genetic analyses to examine the structure of the inner surface of the heart using 25,000 MRI scans. They found that the complex network of muscle fibers lining the inside of the heart, called trabeculae, allows blood to flow more efficiently and can influence the risk of heart failure. The study answers very old questions in basic human physiology and leads to new directions for understanding heart diseases.

In humans, the heart is the first functional organ to develop and starts beating spontaneously only four weeks after conception. Early in development, the heart grows an intricate network of muscle fibers — called trabeculae — that form geometric patterns on the heart’s inner surface. These are thought to help oxygenate the developing heart, but their function in adults has remained an unsolved puzzle since the 16th century.

“Our work significantly advanced our understanding of the importance of myocardial trabeculae,” explains Hannah Meyer, a Cold Spring Harbor Laboratory Fellow. “Perhaps even more importantly, we also showed the value of a truly multidisciplinary team of researchers. Only the combination of genetics, clinical research, and bioengineering led us to discover the unexpected role of myocardial trabeculae in the function of the adult heart.”

To understand the roles and development of trabeculae, an international team of researchers used artificial intelligence to analyse 25,000 magnetic resonance imaging (MRI) scans of the heart, along with associated heart morphology and genetic data. The study reveals how trabeculae work and develop, and how their shape can influence heart disease. UK Biobank has made the study data openly available.

Leonardo da Vinci was the first to sketch trabeculae and their snowflake-like fractal patterns in the 16th century. He speculated that they warm the blood as it flows through the heart, but their true importance has not been recognized until now.
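
Fractal patterns of this kind are usually quantified by a fractal dimension. The sketch below shows a standard box-counting estimate on a binary 2D image; it illustrates the general idea only and is not the study’s actual image-analysis pipeline.

```python
# Box-counting estimate of the fractal dimension of a binary 2D pattern
# (e.g., a segmented trabecular outline). Illustrative only; not the
# study's pipeline. The test pattern is a filled square (dimension ~2).
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]       # tile exactly into s x s boxes
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # Slope of log N(s) versus log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

mask = np.zeros((256, 256), dtype=bool)
mask[64:192, 64:192] = True
print(box_counting_dimension(mask))   # close to 2 for a filled square
```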

“Our findings answer very old questions in basic human biology. As large-scale genetic analyses and artificial intelligence progress, we’re rebooting our understanding of physiology to an unprecedented scale,” says Ewan Birney, deputy director general of EMBL.

The research suggests that the rough surface of the heart ventricles allows blood to flow more efficiently during each heartbeat, just like the dimples on a golf ball reduce air resistance and help the ball travel further.

The study also highlights six regions in human DNA that affect how the fractal patterns in these muscle fibers develop. Intriguingly, the researchers found that two of these regions also regulate branching of nerve cells, suggesting a similar mechanism may be at work in the developing brain.

The researchers discovered that the shape of trabeculae affects the performance of the heart, suggesting a potential link to heart disease. To confirm this, they analyzed genetic data from 50,000 patients and found that different fractal patterns in these muscle fibers affected the risk of developing heart failure. Nearly five million Americans suffer from congestive heart failure.

Further research on trabeculae may help scientists better understand how common heart diseases develop and explore new approaches to treatment.

“Leonardo da Vinci sketched these intricate muscles inside the heart 500 years ago, and it’s only now that we’re beginning to understand how important they are to human health. This work offers an exciting new direction for research into heart failure,” says Declan O’Regan, clinical scientist and consultant radiologist at the MRC London Institute of Medical Sciences. This project included collaborators at Cold Spring Harbor Laboratory, EMBL’s European Bioinformatics Institute (EMBL-EBI), the MRC London Institute of Medical Sciences, Heidelberg University, and the Politecnico di Milano.

Sounds of action: Using ears, not just eyes, improves robot perception: Carnegie Mellon builds dataset capturing interaction of sound, action, vision

People rarely use just one sense to understand the world, but robots usually only rely on vision and, increasingly, touch. Researchers find that robot perception could improve markedly by adding another sense: hearing.

In what they say is the first large-scale study of the interactions between sound and robotic action, researchers at CMU’s Robotics Institute found that sounds could help a robot differentiate between objects, such as a metal screwdriver and a metal wrench. Hearing also could help robots determine what type of action caused a sound and help them use sounds to predict the physical properties of new objects.

“A lot of preliminary work in other fields indicated that sound could be useful, but it wasn’t clear how useful it would be in robotics,” said Lerrel Pinto, who recently earned his Ph.D. in robotics at CMU and will join the faculty of New York University this fall. He and his colleagues found the performance rate was quite high, with robots that used sound successfully classifying objects 76 percent of the time.

The results were so encouraging, he added, that it might prove useful to equip future robots with instrumented canes, enabling them to tap on objects they want to identify.

The researchers presented their findings last month during the virtual Robotics Science and Systems conference. Other team members included Abhinav Gupta, associate professor of robotics, and Dhiraj Gandhi, a former master’s student who is now a research scientist at Facebook Artificial Intelligence Research’s Pittsburgh lab.

To perform their study, the researchers created a large dataset, simultaneously recording video and audio of 60 common objects — such as toy blocks, hand tools, shoes, apples and tennis balls — as they slid or rolled around a tray and crashed into its sides. They have since released this dataset, cataloging 15,000 interactions, for use by other researchers.

The team captured these interactions using an experimental apparatus they called Tilt-Bot — a square tray attached to the arm of a Sawyer robot. It was an efficient way to build a large dataset; they could place an object in the tray and let Sawyer spend a few hours moving the tray in random directions with varying levels of tilt as cameras and microphones recorded each action.
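
As a rough illustration of the general recipe (not the CMU team’s model), the sketch below turns labelled audio clips into averaged log-spectrogram features and trains a linear classifier on them; the clips and labels are placeholder data.

```python
# Sketch of sound-based object classification: averaged log-spectrogram
# features fed to a linear classifier. Not the CMU Tilt-Bot model; the
# clips and labels below are random placeholders for real recordings.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

def clip_features(audio, fs=16000):
    # Log-magnitude spectrogram, averaged over time -> fixed-length vector.
    _, _, sxx = spectrogram(audio, fs=fs, nperseg=512)
    return np.log(sxx + 1e-10).mean(axis=1)

rng = np.random.default_rng(0)
clips = [rng.standard_normal(16000) for _ in range(40)]   # placeholder audio
labels = rng.integers(0, 4, size=40)                      # placeholder object classes

X = np.stack([clip_features(c) for c in clips])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```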

They also collected some data beyond the tray, using Sawyer to push objects on a surface.

Though the size of this dataset is unprecedented, other researchers have also studied how intelligent agents can glean information from sound. For instance, Oliver Kroemer, assistant professor of robotics, led research into using sound to estimate the amount of granular materials, such as rice or pasta, by shaking a container, or estimating the flow of those materials from a scoop.

Pinto said the usefulness of sound for robots was therefore not surprising, though he and the others were surprised at just how useful it proved to be. They found, for instance, that a robot could use what it learned about the sound of one set of objects to make predictions about the physical properties of previously unseen objects.

“I think what was really exciting was that when it failed, it would fail on things you expect it to fail on,” he said. For instance, a robot couldn’t use sound to tell the difference between a red block and a green block. “But if it was a different object, such as a block versus a cup, it could figure that out.”

SCOTI: Science Captioning of Terrain Images for data prioritization and local image search

by Dicong Qiu, Brandon Rothrock, Tanvir Islam, Annie K. Didier, Vivian Z. Sun, Chris A. Mattmann, Masahiro Ono in Planetary and Space Science

NASA JPL is developing autonomous capabilities that could allow future Mars rovers to go farther, faster and do more science. By training machine learning models on the Maverick2 supercomputer, the team developed and optimized models for Drive-By Science and Energy-Optimal Autonomous Navigation.

Four generations of rovers have traversed the red planet gathering scientific data, sending back evocative photographs, and surviving incredibly harsh conditions — all using on-board computers less powerful than an iPhone 1. The latest rover, Perseverance, was launched on July 30, 2020, and engineers are already dreaming of a future generation of rovers.

While a major achievement, these missions have only scratched the surface (literally and figuratively) of the planet and its geology, geography, and atmosphere.

“The surface area of Mars is approximately the same as the total area of the land on Earth,” said Masahiro (Hiro) Ono, group lead of the Robotic Surface Mobility Group at the NASA Jet Propulsion Laboratory (JPL) — which has led all the Mars rover missions — and one of the researchers who developed the software that allows the current rover to operate.

“Imagine, you’re an alien and you know almost nothing about Earth, and you land on seven or eight points on Earth and drive a few hundred kilometers. Does that alien species know enough about Earth?” Ono asked. “No. If we want to represent the huge diversity of Mars we’ll need more measurements on the ground, and the key is substantially extended distance, hopefully covering thousands of miles.”

Travelling across Mars’ diverse, treacherous terrain with limited computing power and a restricted energy diet — only as much sun as the rover can capture and convert to power in a single Martian day, or sol — is a huge challenge.

The first rover, Sojourner, covered 330 feet over 91 sols; the second, Spirit, travelled 4.8 miles in about five years; the third, Opportunity, travelled 28 miles over 15 years; and Curiosity has travelled more than 12 miles since it landed in 2012.

“Our team is working on Mars robot autonomy to make future rovers more intelligent, to enhance safety, to improve productivity, and in particular to drive faster and farther,” Ono said.
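
Energy-optimal navigation of this kind is commonly framed as graph search over a terrain map in which each cell carries an energy cost. The sketch below is a generic energy-weighted A* planner over such a map, included only to illustrate the idea; it is not JPL’s planner, and the cost map is a toy assumption.

```python
# Generic energy-weighted A* over a grid cost map (illustration only).
import heapq
import numpy as np

def energy_astar(energy_map, start, goal):
    rows, cols = energy_map.shape
    # Manhattan distance is admissible here because every cell costs >= 1.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0.0}
    came_from = {}
    frontier = [(h(start), start)]
    visited = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:                      # reconstruct the path
            path = [node]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1], g[goal]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                ng = g[node] + energy_map[nr, nc]
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(frontier, (ng + h((nr, nc)), (nr, nc)))
    return None, float("inf")

emap = np.ones((20, 20))
emap[5:15, 8] = 10.0                          # a high-cost ridge to route around
path, cost = energy_astar(emap, (0, 0), (19, 19))
print(len(path), "cells, total energy", cost)
```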

APPLD: Adaptive Planner Parameter Learning From Demonstration

by Xuesu Xiao, Bo Liu, Garrett Warnell, Jonathan Fink, Peter Stone in IEEE Robotics and Automation Letters

Researchers have designed an algorithm that allows an autonomous ground vehicle to improve its existing navigation systems by watching a human drive.

At the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Texas at Austin, researchers designed an algorithm that allows an autonomous ground vehicle to improve its existing navigation systems by watching a human drive. The team tested its approach — called adaptive planner parameter learning from demonstration, or APPLD — on one of the Army’s experimental autonomous ground vehicles.

“Using approaches like APPLD, current Soldiers in existing training facilities will be able to contribute to improvements in autonomous systems simply by operating their vehicles as normal,” said Army researcher Dr. Garrett Warnell. “Techniques like these will be an important contribution to the Army’s plans to design and field next-generation combat vehicles that are equipped to navigate autonomously in off-road deployment environments.”

The researchers fused machine learning from demonstration algorithms and more classical autonomous navigation systems. Rather than replacing a classical system altogether, APPLD learns how to tune the existing system to behave more like the human demonstration. This paradigm allows for the deployed system to retain all the benefits of classical navigation systems — such as optimality, explainability and safety — while also allowing the system to be flexible and adaptable to new environments, Warnell said.
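
As a rough illustration of that idea (not APPLD’s actual algorithm, which also segments the demonstration into contexts and uses more sophisticated black-box optimizers), the sketch below searches over an existing planner’s parameters so that its commands best match a recorded human demonstration. The planner interface, parameter ranges and demonstration data are all assumptions.

```python
# Sketch of tuning an existing planner's parameters to imitate a human
# demonstration with random search. The planner, parameter bounds and
# demonstration below are illustrative assumptions, not APPLD itself.
import numpy as np

def imitation_loss(planner, params, demo_states, demo_cmds):
    # Mean squared difference between planner commands and human commands.
    pred = np.array([planner(s, params) for s in demo_states])
    return float(np.mean((pred - demo_cmds) ** 2))

def tune_parameters(planner, demo_states, demo_cmds, bounds, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(iters):
        candidate = {k: rng.uniform(*bounds[k]) for k in bounds}
        loss = imitation_loss(planner, candidate, demo_states, demo_cmds)
        if loss < best_loss:
            best_params, best_loss = candidate, loss
    return best_params, best_loss

# Toy stand-in planner: commanded speed scales with obstacle clearance, capped at max_vel.
def toy_planner(clearance, params):
    return min(params["max_vel"], params["gain"] * clearance)

demo_states = np.linspace(0.2, 3.0, 50)            # assumed clearances seen in the demo
demo_cmds = np.minimum(0.5, 0.4 * demo_states)     # assumed human speed commands
params, loss = tune_parameters(toy_planner, demo_states, demo_cmds,
                               bounds={"max_vel": (0.1, 2.0), "gain": (0.05, 1.0)})
print(params, loss)
```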

“A single demonstration of human driving, provided using an everyday Xbox wireless controller, allowed APPLD to learn how to tune the vehicle’s existing autonomous navigation system differently depending on the particular local environment,” Warnell said. “For example, when in a tight corridor, the human driver slowed down and drove carefully. After observing this behavior, the autonomous system learned to also reduce its maximum speed and increase its computation budget in similar environments. This ultimately allowed the vehicle to successfully navigate autonomously in other tight corridors where it had previously failed.”

This research is part of the Army’s Open Campus initiative, through which Army scientists in Texas collaborate with academic partners at UT Austin.

“APPLD is yet another example of a growing stream of research results that has been facilitated by the unique collaboration arrangement between UT Austin and the Army Research Lab,” said Dr. Peter Stone, professor and chair of the Robotics Consortium at UT Austin. “By having Dr. Warnell embedded at UT Austin full-time, we are able to quickly identify and tackle research problems that are both cutting-edge scientific advances and also immediately relevant to the Army.”

The team’s experiments showed that, after training, the APPLD system was able to navigate the test environments more quickly and with fewer failures than with the classical system. Additionally, the trained APPLD system often navigated the environment faster than the human who trained it. The team’s work, “APPLD: Adaptive Planner Parameter Learning From Demonstration,” was published in the peer-reviewed journal IEEE Robotics and Automation Letters.

“From a machine learning perspective, APPLD contrasts with so called end-to-end learning systems that attempt to learn the entire navigation system from scratch,” Stone said. “These approaches tend to require a lot of data and may lead to behaviors that are neither safe nor robust. APPLD leverages the parts of the control system that have been carefully engineered, while focusing its machine learning effort on the parameter tuning process, which is often done based on a single person’s intuition.”

APPLD represents a new paradigm in which people without expert-level knowledge in robotics can help train and improve autonomous vehicle navigation in a variety of environments. Rather than small teams of engineers trying to manually tune navigation systems in a small number of test environments, a virtually unlimited number of users would be able to provide the system the data it needs to tune itself to an unlimited number of environments.

“Current autonomous navigation systems typically must be re-tuned by hand for each new deployment environment,” said Army researcher Dr. Jonathan Fink. “This process is extremely difficult — it must be done by someone with extensive training in robotics, and it requires a lot of trial and error until the right systems settings can be found. In contrast, APPLD tunes the system automatically by watching a human drive the system — something that anyone can do if they have experience with a video game controller. During deployment, APPLD also allows the system to re-tune itself in real-time as the environment changes.”

The Army’s focus on modernizing the Next Generation Combat Vehicle includes designing both optionally manned fighting vehicles and robotic combat vehicles that can navigate autonomously in off-road deployment environments. While Soldiers can navigate these environments driving current combat vehicles, the environments remain too challenging for state-of-the-art autonomous navigation systems. APPLD and similar approaches provide a new potential way for the Army to improve existing autonomous navigation capabilities.

Evidence of flat bands and correlated states in buckled graphene superlattices

by Jinhai Mao, Slaviša P. Milovanović, Miša Anđelković, Xinyuan Lai, Yang Cao, Kenji Watanabe, Takashi Taniguchi, Lucian Covaci, Francois M. Peeters, Andre K. Geim, Yuhang Jiang, Eva Y. Andrei in Nature

Graphene buckles when cooled while attached to a flat surface, resulting in pucker patterns that could benefit the search for novel quantum materials and superconductors, according to new research.

Quantum materials host strongly interacting electrons with special properties, such as entangled trajectories, that could provide building blocks for super-fast quantum computers. They also can become superconductors that could slash energy consumption by making power transmission and electronic devices more efficient.

“The buckling we discovered in graphene mimics the effect of colossally large magnetic fields that are unattainable with today’s magnet technologies, leading to dramatic changes in the material’s electronic properties,” said lead author Eva Y. Andrei, Board of Governors professor in the Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick. “Buckling of stiff thin films like graphene laminated on flexible materials is gaining ground as a platform for stretchable electronics with many important applications, including eye-like digital cameras, energy harvesting, skin sensors, health monitoring devices like tiny robots and intelligent surgical gloves. Our discovery opens the way to the development of devices for controlling nano-robots that may one day play a role in biological diagnostics and tissue repair.”

The scientists studied buckled graphene crystals whose properties change radically when they’re cooled, creating essentially new materials with electrons that slow down, become aware of each other and interact strongly, enabling the emergence of fascinating phenomena such as superconductivity and magnetism, according to Andrei.

Using high-tech imaging and computer simulations, the scientists showed that graphene placed on a flat surface made of niobium diselenide buckles when cooled to 4 degrees above absolute zero. To the electrons in graphene, the mountain-and-valley landscape created by the buckling appears as gigantic magnetic fields. These pseudo-magnetic fields are an electronic illusion, but they act as real magnetic fields, according to Andrei.
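
For readers familiar with the strain-engineering literature, the “gigantic magnetic field” picture comes from the fact that a non-uniform strain field enters graphene’s Dirac Hamiltonian as an effective vector potential. One common convention (prefactors and sign conventions vary between references, so treat this as a sketch) is:

\begin{aligned}
\mathbf{A}_{\mathrm{ps}} &= \frac{\hbar \beta}{2 e a}
  \begin{pmatrix} u_{xx} - u_{yy} \\ -2\,u_{xy} \end{pmatrix},
  \qquad \beta = -\frac{\partial \ln t}{\partial \ln a} \approx 2\text{--}3, \\
B_{\mathrm{ps}} &= \partial_x A_{\mathrm{ps},y} - \partial_y A_{\mathrm{ps},x},
\end{aligned}

where u_ij is the strain tensor, a the carbon-carbon distance and t the nearest-neighbour hopping amplitude; the resulting pseudo-field points in opposite directions in the two valleys, which is why it mimics, rather than equals, a real magnetic field.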

“Our research demonstrates that buckling in 2D materials can dramatically alter their electronic properties,” she said.

The next steps include developing ways to engineer buckled 2D materials with novel electronic and mechanical properties that could be beneficial in nano-robotics and quantum computing, according to Andrei.

The first author is Jinhai Mao, formerly a research associate in the Department of Physics and Astronomy and now a researcher at the University of Chinese Academy of Sciences. Rutgers co-authors include doctoral student Xinyuan Lai and a former post-doctoral associate, Yuhang Jiang, who is now a researcher at the University of Chinese Academy of Sciences. Slaviša Milovanović, who led the theory effort, is a graduate student working with professors Lucian Covaci and Francois Peeters at the Universiteit Antwerpen. Scientists at the University of Manchester and the National Institute for Materials Science in Tsukuba, Japan, contributed to the study.

NIST’s SAMURAI measures 5G communications channels precisely

Engineers have developed a flexible, portable measurement system to support design and repeatable laboratory testing of fifth-generation (5G) wireless communications devices with unprecedented accuracy across a wide range of signal frequencies and scenarios.

The system is called SAMURAI, short for Synthetic Aperture Measurements of Uncertainty in Angle of Incidence. The system is the first to offer 5G wireless measurements with accuracy that can be traced to fundamental physical standards — a key feature because even tiny errors can produce misleading results. SAMURAI is also small enough to be transported to field tests.

Mobile devices such as cellphones, consumer Wi-Fi devices and public-safety radios now mostly operate at electromagnetic frequencies below 3 gigahertz (GHz) with antennas that radiate equally in all directions. Experts predict 5G technologies could boost data rates a thousandfold by using higher, “millimeter-wave” frequencies above 24 GHz and highly directional, actively changing antenna patterns. Such active antenna arrays help to overcome losses of these higher-frequency signals during transmission. 5G systems also send signals over multiple paths simultaneously — so-called spatial channels — to increase speed and overcome interference.

Many instruments can measure some aspects of directional 5G device and channel performance. But most focus on collecting quick snapshots over a limited frequency range to provide a general overview of a channel, whereas SAMURAI provides a detailed portrait. In addition, many instruments are so physically large that they can distort millimeter-wave signal transmissions and reception.

Described at a conference on Aug. 7, SAMURAI is expected to help resolve many unanswered questions surrounding 5G’s use of active antennas, such as what happens when high data rates are transmitted across multiple channels at once. The system will help improve theory, hardware and analysis techniques to provide accurate channel models and efficient networks.

“SAMURAI provides a cost-effective way to study many millimeter-wave measurement issues, so the technique will be accessible to academic labs as well as instrumentation metrology labs,” NIST electronics engineer Kate Remley said. “Because of its traceability to standards, users can have confidence in the measurements. The technique will allow better antenna design and performance verification, and support network design.”

SAMURAI measures signals across a wide frequency range, currently up to 50 GHz, extending to 75 GHz in the coming year. The system got its name because it measures received signals at many points over a grid or virtual “synthetic aperture.” This allows reconstruction of incoming energy in three dimensions — including the angles of the arriving signals — which is affected by many factors, such as how the signal’s electric field reflects off of objects in the transmission path.
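
The synthetic-aperture idea can be illustrated in a few lines of numpy: sample a single receive antenna at many grid positions, then beamform those samples to map incoming power versus angle. The geometry, frequency and conventional delay-and-sum beamformer below are generic assumptions, not NIST’s processing chain.

```python
# Synthetic-aperture angle-of-arrival sketch: narrowband samples collected on
# a planar grid of antenna positions are combined with a delay-and-sum
# beamformer. Geometry, frequency and processing are generic illustrations.
import numpy as np

c = 3e8
f = 26.5e9                                   # an example millimeter-wave frequency
k = 2 * np.pi * f / c
lam = c / f

# Synthetic aperture: 11 x 11 grid of receive positions, half-wavelength spacing.
xs, ys = np.meshgrid(np.arange(11) * lam / 2, np.arange(11) * lam / 2)
pos = np.column_stack([xs.ravel(), ys.ravel()])   # (121, 2) antenna positions

def steering(az_deg, el_deg):
    az, el = np.radians(az_deg), np.radians(el_deg)
    u = np.cos(el) * np.cos(az)                   # direction cosines of the
    v = np.cos(el) * np.sin(az)                   # incoming plane wave
    return np.exp(1j * k * (pos[:, 0] * u + pos[:, 1] * v))

# Simulated measurement: one plane wave from azimuth 20 deg, elevation 10 deg, plus noise.
rng = np.random.default_rng(0)
samples = steering(20, 10) + 0.1 * (rng.standard_normal(121) + 1j * rng.standard_normal(121))

# Delay-and-sum power versus candidate azimuth at the true elevation.
azimuths = np.arange(-60, 61)
power = [np.abs(np.vdot(steering(a, 10), samples)) ** 2 / len(samples) for a in azimuths]
print("estimated azimuth:", azimuths[int(np.argmax(power))], "deg")
```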

SAMURAI can be applied to a variety of tasks from verifying the performance of wireless devices with active antennas to measuring reflective channels in environments where metallic objects scatter signals. NIST researchers are currently using SAMURAI to develop methods for testing industrial Internet of Things devices at millimeter-wave frequencies.

The basic components are two antennas to transmit and receive signals, instrumentation with precise timing synchronization to generate radio transmissions and analyze reception, and a six-axis robotic arm that positions the receive antenna to the grid points that form the synthetic aperture. The robot ensures accurate and repeatable antenna positions and traces out a variety of reception patterns in 3D space, such as cylindrical and hemispherical shapes. A variety of small metallic objects such as flat plates and cylinders can be placed in the test setup to represent buildings and other real-world impediments to signal transmission. To improve positional accuracy, a system of 10 cameras is also used to track the antennas and measure the locations of objects in the channel that scatter signals.

The system is typically attached to an optical table measuring 5 feet by 14 feet (1.5 meters by 4.3 meters). But the equipment is portable enough to be used in mobile field tests and moved to other laboratory settings. Wireless communications research requires a mix of lab tests — which are well controlled to help isolate specific effects and verify system performance — and field tests, which capture the range of realistic conditions.

Measurements can require hours to complete, so all aspects of the (stationary) channel are recorded for later analysis. These values include environmental factors such as temperature and humidity, location of scattering objects, and drift in accuracy of the measurement system.

The NIST team developed SAMURAI with collaborators from the Colorado School of Mines in Golden, Colorado. Researchers have verified the basic operation and are now incorporating uncertainty due to unwanted reflections from the robotic arm, position error and antenna patterns into the measurements.

Videos

The Multi-robot Systems Group at FEE-CTU in Prague is working on an autonomous drone that detects fires and then shoots an extinguisher capsule at them.

The experiment with HEAP (Hydraulic Excavator for Autonomous Purposes) demonstrates the latest research in on-site and mobile digital fabrication with found materials. The embankment prototype in natural granular material was achieved using state of the art design and construction processes in mapping, modelling, planning and control. The entire process of building the embankment was fully autonomous. An operator was only present in the cabin for safety purposes.

The Simulation, Systems Optimization and Robotics Group (SIM) of Technische Universität Darmstadt’s Department of Computer Science conducts research on cooperating autonomous mobile robots, biologically inspired robots and numerical optimization and control methods.

MOFLIN is an AI Pet created from a totally new concept. It possesses emotional capabilities that evolve like living animals. With its warm soft fur, cute sounds, and adorable movement, you’d want to love it forever.

This video is only robotics-adjacent, but it has applications for robotic insects. With a high-speed tracking system, we can now follow insects as they jump and fly, and watch how clumsy (but effective) they are at it.

Suzumori Endo Lab, Tokyo Tech has developed self-excited pneumatic actuators that can be integrally molded by a 3D printer. These actuators use an “automatic flow path switching mechanism” the lab has devised.

Upcoming events

CLAWAR 2020 — August 24–26, 2020 — [Virtual Conference]

ICUAS 2020 — September 1–4, 2020 — Athens, Greece

ICRES 2020 — September 28–29, 2020 — Taipei, Taiwan

IROS 2020 — October 25–29, 2020 — Las Vegas, Nevada

ICSR 2020 — November 14–16, 2020 — Golden, Colorado

Subscribe to Paradigm!

Medium. Twitter. Telegram. Reddit.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
