RT/ Tiny robotic crab is smallest-ever remote-controlled walking robot

Paradigm · May 31, 2022 · 27 min read

Robotics biweekly vol. 51, 17th May – 31st May

TL;DR

  • Engineers have developed the smallest-ever remote-controlled walking robot — and it comes in the form of a tiny, adorable peekytoe crab. Just a half-millimeter wide, the tiny crabs can bend, twist, crawl, walk, turn and even jump. Although the research is exploratory at this point, the researchers believe their technology might bring the field closer to realizing micro-sized robots that can perform practical tasks inside tightly confined spaces.
  • New research demonstrates that organic crystals, a new class of smart engineering materials, can serve as efficient and sustainable energy conversion materials for advanced technologies such as robotics and electronics.
  • Mastering control over the dynamic interplay among optical, chemical and mechanical behavior in single-material, liquid crystalline elastomers, results in microposts that combine bending, twisting and turning into complex dances. The advancement could contribute toward further development of soft robotics and other devices.
  • Researchers have developed a trajectory-planning system for autonomous vehicles that enables them to travel from a starting point to a target location safely, even when there are many different uncertainties in the environment, such as unknown variations in the shapes, sizes, and locations of obstacles.
  • Researchers have developed soft robots that are capable of navigating complex environments, such as mazes, without input from humans or computer software.
  • Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.
  • Engineers have developed a low cost, low power technology to help robots accurately map their way indoors, even in poor lighting and without recognizable landmarks or features. The technology uses WiFi signals, instead of light, to help the robot ‘see’ where it’s going.
  • Researchers have proposed a novel system inspired by the neuromodulation of the brain, referred to as a ‘stashing system,’ that requires less energy consumption. Computer scientists have now developed a technology that can efficiently handle mathematical operations for artificial intelligence by imitating the continuous changes in the topology of the neural network according to the situation.
  • Recently developed ‘smart skin’ is very similar to human skin. It senses pressure, humidity and temperature simultaneously and produces electronic signals. More sensitive robots or more intelligent prostheses are thus conceivable.
  • Veterinarians and researchers have developed a technique to predict leptospirosis in dogs through artificial intelligence. Leptospirosis is a life-threatening bacterial disease dogs can get from drinking contaminated water.
  • Check out robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
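
The compounding arithmetic behind that projection is easy to reproduce. A minimal sketch, taking the roughly 210-billion-dollar 2025 endpoint and the 26 percent CAGR from the paragraph above and back-deriving the intermediate years (the 2018 base is computed here, not quoted from Statista):

```python
# Implied market sizes under a constant 26% CAGR.
# Only the ~$210B 2025 endpoint and the 26% rate come from the article;
# the 2018 base and intermediate years are back-calculated.

CAGR = 0.26
END_VALUE_BN = 210        # just under 210 billion USD by 2025
YEARS = 2025 - 2018       # 7 compounding periods

base_2018 = END_VALUE_BN / (1 + CAGR) ** YEARS   # ~41.7 bn USD
for year in range(2018, 2026):
    value = base_2018 * (1 + CAGR) ** (year - 2018)
    print(f"{year}: ~{value:.0f} bn USD")
```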

Latest News & Research

Submillimeter-scale multimaterial terrestrial robots

by Mengdi Han, Xiaogang Guo, Xuexian Chen, Cunman Liang, et al in Science Robotics

Northwestern University engineers have developed the smallest-ever remote-controlled walking robot — and it comes in the form of a tiny, adorable peekytoe crab.

Just a half-millimeter wide, the tiny crabs can bend, twist, crawl, walk, turn and even jump. The researchers also developed millimeter-sized robots resembling inchworms, crickets and beetles. Although the research is exploratory at this point, the researchers believe their technology might bring the field closer to realizing micro-sized robots that can perform practical tasks inside tightly confined spaces.

Smaller than a flea, tiny robotic crab sits next to the eye of a sewing needle.

“Robotics is an exciting field of research, and the development of microscale robots is a fun topic for academic exploration,” said John A. Rogers, who led the experimental work. “You might imagine micro-robots as agents to repair or assemble small structures or machines in industry or as surgical assistants to clear clogged arteries, to stop internal bleeding or to eliminate cancerous tumors — all in minimally invasive procedures.”

“Our technology enables a variety of controlled motion modalities and can walk with an average speed of half its body length per second,” added Yonggang Huang, who led the theoretical work. “This is very challenging to achieve at such small scales for terrestrial robots.”

A pioneer in bioelectronics, Rogers is the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery at Northwestern’s McCormick School of Engineering and Feinberg School of Medicine and the director of the Querrey Simpson Institute for Bioelectronics (QSIB). Huang is the Jan and Marcia Achenbach Professor of Mechanical Engineering and Civil and Environmental Engineering at McCormick and a key member of QSIB.

Smaller than a flea, the crab is not powered by complex hardware, hydraulics or electricity. Instead, its power lies in the elastic resilience of its body. To construct the robot, the researchers used a shape-memory alloy material that transforms to its “remembered” shape when heated. In this case, the researchers used a scanned laser beam to rapidly heat the robot at different targeted locations across its body. A thin coating of glass elastically returns the corresponding part of the structure to its deformed shape upon cooling.

Schematic illustration of the fabrication procedures for the 3D crab. (A) Si wafer with Cr and SMA. (B) Pattern the SMA. (C) Spin coat and define the PI pattern. (D) Deposit and pattern SiO2. (E) Undercut Cr and transfer the 2D precursors to a PDMS stamp. (F) Transfer the 2D precursors to a water-soluble tape. (G) Transfer the 2D precursors to a prestretched elastomer. (H) Transfer the PDMS block. (I) Compressive buckling. (J) Conformally deposit the SiO2. (K) Dissolve the silicone.

As the robot changes from one phase to another — deformed to remembered shape and back again — it creates locomotion. Not only does the laser remotely control the robot to activate it, the laser scanning direction also determines the robot’s walking direction. Scanning from left to right, for example, causes the robot to move from right to left.

“Because these structures are so tiny, the rate of cooling is very fast,” Rogers explained. “In fact, reducing the sizes of these robots allows them to run faster.”
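
Rogers’ point about cooling follows from lumped-capacitance heat transfer: a body’s thermal time constant scales with its characteristic length, so shrinking the robot shortens each heat-cool actuation cycle and raises the achievable gait rate. A rough, illustrative sketch with assumed NiTi-like properties (not values from the paper):

```python
# Toy estimate of why smaller SMA robots can cycle (and thus run) faster.
# All numbers are illustrative assumptions; the point is the scaling:
# thermal time constant tau ~ characteristic size L.

def thermal_time_constant(L, rho=6450.0, c=500.0, h=500.0):
    """tau = rho*c*V / (h*A) for a cube of side L metres (seconds).
    rho, c: assumed NiTi density and heat capacity; h: assumed
    effective heat-transfer coefficient to the surroundings."""
    volume, area = L**3, 6 * L**2
    return rho * c * volume / (h * area)

for L_mm in (4.0, 2.0, 1.0, 0.5):
    tau = thermal_time_constant(L_mm * 1e-3)
    print(f"L = {L_mm} mm -> tau ~ {tau:.2f} s, "
          f"max actuation rate ~ {1/tau:.1f} cycles/s")
```

Halving the size halves tau in this model, which is the qualitative trend Rogers describes.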

2D layout of the crab structure.

To manufacture such a tiny critter, Rogers and Huang turned to a technique they introduced eight years ago — a pop-up assembly method inspired by a child’s pop-up book. First, the team fabricated precursors to the walking crab structures in flat, planar geometries. Then, they bonded these precursors onto a slightly stretched rubber substrate. When the stretched substrate is relaxed, a controlled buckling process occurs that causes the crab to “pop up” into precisely defined three-dimensional forms.

Optical images of various robots constructed in SMA and PI. Scale bars, 500 μm.

With this manufacturing method, the Northwestern team could develop robots of various shapes and sizes. So why a peekytoe crab? We can thank Rogers’ and Huang’s students for that.

“With these assembly techniques and materials concepts, we can build walking robots with almost any sizes or 3D shapes,” Rogers said. “But the students felt inspired and amused by the sideways crawling motions of tiny crabs. It was a creative whim.”

Exceptionally high work density of a ferroelectric dynamic organic crystal around room temperature

by Durga Prasad Karothu, Rodrigo Ferreira, Ghada Dushaq, Ejaz Ahmed, Luca Catalano, Jad Mahmoud Halabi, Zainab Alhaddad, Ibrahim Tahir, Liang Li, Sharmarke Mohamed, Mahmoud Rasras, Panče Naumov in Nature Communications

New research by a team of researchers at the NYU Abu Dhabi (NYUAD) Smart Materials Lab demonstrates that organic crystals, a new class of smart engineering materials, can serve as efficient and sustainable energy conversion materials for advanced technologies such as robotics and electronics.

While organic crystals were previously thought to be fragile, the NYUAD researchers have discovered that some organic crystals are mechanically very robust. They developed a material that establishes a new world record for its ability to switch between different shapes by expansion or contraction over half of its length, without losing its perfectly-ordered structure.

Crystal habit and structure, and phase transition between forms I and II of GN.

In the study the team, led by NYUAD Professor of Chemistry Panče Naumov, presents the process of observing how the organic crystalline material reacted to different temperatures. The researchers found that the organic crystals were able to reversibly change shape in a similar manner to plastics and rubber. Specifically, this material could expand and contract over half of its length (51 percent) repeatedly, over thousands of cycles, without any deterioration. It was also able to both expand and contract at room temperature, as opposed to other materials that require a higher temperature to transform, creating higher energy costs for operation.

Unlike traditional materials that are silicon- or silica-based, and inevitably stiff, heavy and brittle, the materials that will be used for future electronics will be soft and organic in nature. These advanced technologies require materials that are lightweight, resilient to damage, efficient in performance, and also have added qualities such as mechanical flexibility and ability to operate sustainably, with minimal consumption of energy. The results of this study have demonstrated, for the first time, that certain organic crystalline materials meet the needs of these technologies, and can be used in applications such as soft robotics, artificial muscles, organic optics, and organic electronics (electronics created solely from organic materials).

“This latest discovery from the Smart Materials Lab at NYUAD builds on a series of our previous discoveries about the untapped potential of this new class of materials, which includes adaptive crystals, self-healing crystals, and organic crystalline materials with shape memory,” said Naumov. “Our work has shown that organic crystals can not only meet the needs of the emerging technologies, but in some cases can also surpass the levels of efficiency and sustainability of other, more common materials.”

Self-regulated non-reciprocal motions in single-material microstructures

by Shucong Li, Michael M. Lerch, James T. Waters, Bolei Deng, Reese S. Martens, Yuxing Yao, Do Yoon Kim, Katia Bertoldi, Alison Grinthal, Anna C. Balazs, Joanna Aizenberg in Nature

When humans twist and turn it is the result of complex internal functions: the body’s nervous system signals our intentions; the musculoskeletal system supports the motion; and the digestive system generates the energy to power the move. The body seamlessly integrates these activities without our even being aware that coordinated, dynamic processes are taking place. Reproducing similar, integrated functioning in a single synthetic material has proven difficult; few one-component materials naturally encompass the spatial and temporal coordination needed to mimic the spontaneity and dexterity of biological behavior.

However, through a combination of experiments and modeling, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and University of Pittsburgh Swanson School of Engineering created a single-material, self-regulating system that controllably twists and bends to undergo biomimetic motion.

The senior author is Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science and Professor of Chemistry & Chemical Biology at SEAS. Inspired by experiments performed in the Aizenberg lab, contributing authors at the University of Pittsburgh, Anna Balazs and James Waters, developed the theoretical and computational models to design liquid crystal elastomers (LCEs) that imitate the seamless coupling of dynamic processes observed in living systems.

“Our movements occur spontaneously because the human body contains several interconnected structures, and the performance of each structure is highly coordinated in space and time, allowing one event to instigate the behavior in another part of the body,” explained Balazs, Distinguished Professor of Chemical Engineering and the John A. Swanson Chair of Engineering. “For example, the firing of neurons in the spine triggers a signal that causes a particular muscle to contract; the muscle expands when the neurons have stopped firing, allowing the body to return to its relaxed shape. If we could replicate this level of interlocking, multi-functionality in a synthetic material, we could ultimately devise effective self-regulating, autonomously operating devices.”

The V-shape is magnetically programmed with a uniform orientation of the nematic director, along the horizontal axis in the movie. However, the different orientations of the arms of the “V” relative to this axis cause them to twist in opposite directions in response to ultraviolet light.

The LCE material used in this collaborative Harvard-Pitt study was composed of long polymer chains with rod-like groups (mesogens) attached via side branches; photo-responsive crosslinkers were used to make the LCE responsive to UV light. The material was molded into micron-scale posts anchored to an underlying surface. The Harvard team then demonstrated an extremely diverse set of complex motions that the microstructures can display when exposed to light.

“The coupling among microscopic units — the polymers, side chains, mesogens and crosslinkers — within this material could remind you of the interlocking of different components within a human body,” said Balazs, “suggesting that with the right trigger, the LCE might display rich spatiotemporal behavior.”

To devise the most effective triggers, Waters formulated a model that describes the simultaneous optical, chemical and mechanical phenomena occurring over the range of length and time scales that characterize the LCE. The simulations also provided an effective means of uncovering and visualizing the complex interactions within this responsive opto-chemo-mechanical system.

“Our model can accurately predict the spatial and temporal evolution of the posts and reveal how different behaviors can be triggered by varying the materials’ properties and features of the imposed light,” Waters said, further noting “The model serves as a particularly useful predictive tool when the complexity of the system is increased by, for example, introducing multiple interacting posts, which can be arranged in an essentially infinite number of ways.”

According to Balazs, these combined modeling and experimental studies pave the way for creating the next generation of light-responsive, soft machines or robots that begin to exhibit life-like autonomy. “Light is a particularly useful stimulus for activating these materials since the light source can be easily moved to instigate motion in different parts of the post or collection of posts,” she said.

In future studies, Waters and Balazs will investigate how arrays of posts and posts with different geometries behave under the influence of multiple or more localized beams of light. Preliminary results indicate that in the presence of multiple light beams, the LCE posts can mimic the movement and flexibility of fingers, suggesting new routes for designing soft robotic hands that can be manipulated with light.

“The vast design space for individual and collective motions is potentially transformative for soft robotics, micro-walkers, sensors, and robust information encryption systems,” said Aizenberg.

Non-Gaussian Risk Bounded Trajectory Optimization for Stochastic Nonlinear Systems in Uncertain Environments

by Weiqiao Han, Ashkan Jasour, Brian Williams in arXiv

An autonomous spacecraft exploring the far-flung regions of the universe descends through the atmosphere of a remote exoplanet. The vehicle, and the researchers who programmed it, don’t know much about this environment.

With so much uncertainty, how can the spacecraft plot a trajectory that will keep it from being squashed by some randomly moving obstacle or blown off course by sudden, gale-force winds? MIT researchers have developed a technique that could help this spacecraft land safely. Their approach can enable an autonomous vehicle to plot a provably safe trajectory in highly uncertain situations where there are multiple uncertainties regarding environmental conditions and objects the vehicle could collide with.

The technique could help a vehicle find a safe course around obstacles that move in random ways and change their shape over time. It plots a safe trajectory to a targeted region even when the vehicle’s starting point is not precisely known and when it is unclear exactly how the vehicle will move due to environmental disturbances like wind, ocean currents, or rough terrain.

This is the first technique to address the problem of trajectory planning with many simultaneous uncertainties and complex safety constraints, says co-lead author Weiqiao Han, a graduate student in the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“Future robotic space missions need risk-aware autonomy to explore remote and extreme worlds for which only highly uncertain prior knowledge exists. In order to achieve this, trajectory-planning algorithms need to reason about uncertainties and deal with complex uncertain models and safety constraints,” adds co-lead author Ashkan Jasour, a former CSAIL research scientist who now works on robotics systems at the NASA Jet Propulsion Laboratory. Joining Han and Jasour on the paper is senior author Brian Williams, professor of aeronautics and astronautics and a member of CSAIL.

Example IV-B: Time steps t = 0.3, 0.4, 0.7, and 1.0.

Because this trajectory planning problem is so complex, other methods for finding a safe path forward make assumptions about the vehicle, obstacles, and environment. These methods are too simplistic to apply in most real-world settings, and therefore they cannot guarantee their trajectories are safe in the presence of complex uncertain safety constraints, Jasour says.

“This uncertainty might come from the randomness of nature or even from the inaccuracy in the perception system of the autonomous vehicle,” Han adds.

Instead of guessing the exact environmental conditions and locations of obstacles, the algorithm they developed reasons about the probability of observing different environmental conditions and obstacles at different locations. It would make these computations using a map or images of the environment from the robot’s perception system.

Using this approach, their algorithms formulate trajectory planning as a probabilistic optimization problem. This is a mathematical programming framework that allows the robot to achieve planning objectives, such as maximizing velocity or minimizing fuel consumption, while considering safety constraints, such as avoiding obstacles. The probabilistic algorithms they developed reason about risk, which is the probability of not achieving those safety constraints and planning objectives, Jasour says.

But because the problem involves different uncertain models and constraints, from the location and shape of each obstacle to the starting location and behavior of the robot, this probabilistic optimization is too complex to solve with standard methods. The researchers used higher-order statistics of the probability distributions of the uncertainties to convert that probabilistic optimization into a simpler, deterministic optimization problem that can be solved efficiently with existing off-the-shelf solvers.

“Our challenge was how to reduce the size of the optimization and consider more practical constraints to make it work. Going from good theory to good application took a lot of effort,” Jasour says.

The optimization solver generates a risk-bounded trajectory, which means that if the robot follows the path, the probability it will collide with any obstacle is not greater than a certain threshold, like 1 percent. From this, they obtain a sequence of control inputs that can steer the vehicle safely to its target region.
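
The flavor of such a moment-based reformulation can be shown with the simplest case: bounding collision risk using only the mean and variance of the clearance to an obstacle, via Cantelli’s one-sided inequality. This is a deliberately simplified stand-in (the paper works with higher-order moments and more general non-Gaussian machinery), but it shows how a probabilistic constraint turns into a deterministic one:

```python
# Turn "P(collision) <= risk" into a deterministic margin constraint
# using only the first two moments of the clearance distribution.
# Cantelli's inequality holds for ANY distribution, Gaussian or not:
#   P(clearance <= 0) <= sigma^2 / (sigma^2 + mu^2)   for mu > 0,
# so requiring mu >= sigma * sqrt((1 - risk) / risk) bounds the risk.
import math

def required_mean_clearance(sigma, risk):
    return sigma * math.sqrt((1 - risk) / risk)

sigma = 0.2   # assumed std dev of clearance to an obstacle (metres)
risk = 0.01   # allowed collision probability, e.g. the 1% above
print(f"mean clearance must be >= {required_mean_clearance(sigma, risk):.2f} m")
# -> ~1.99 m. A planner enforces this deterministic inequality at each
# time step, which standard off-the-shelf solvers can handle.
```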

Example IV-C: Time steps t = 0.7, 0.8, 0.9, and 1.0.

They evaluated the technique using several simulated navigation scenarios. In one, they modeled an underwater vehicle charting a course from some uncertain position, around a number of strangely shaped obstacles, to a goal region. It was able to safely reach the goal at least 99 percent of the time. They also used it to map a safe trajectory for an aerial vehicle that avoided several 3D flying objects that have uncertain sizes and positions and could move over time, while in the presence of strong winds that affected its motion. Using their system, the aircraft reached its goal region with high probability.

Depending on the complexity of the environment, the algorithms took between a few seconds and a few minutes to develop a safe trajectory.

The researchers are now working on more efficient processes that would reduce the runtime significantly, which could allow them to get closer to real-time planning scenarios, Jasour says. Han is also developing feedback controllers to apply to the system, which would help the vehicle stick closer to its planned trajectory even if it deviates at times from the optimal course. He is also working on a hardware implementation that would enable the researchers to demonstrate their technique in a real robot.

Twisting for soft intelligent autonomous robot in unstructured environments

by Yao Zhao, Yinding Chi, Yaoye Hong, Yanbin Li, Shu Yang, Jie Yin in Proceedings of the National Academy of Sciences

Researchers from North Carolina State University and the University of Pennsylvania have developed soft robots that are capable of navigating complex environments, such as mazes, without input from humans or computer software.

“These soft robots demonstrate a concept called ‘physical intelligence,’ meaning that structural design and smart materials are what allow the soft robot to navigate various situations, as opposed to computational intelligence,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State.

The soft robots are made of liquid crystal elastomers in the shape of a twisted ribbon, resembling translucent rotini. When you place the ribbon on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion in the ribbon. And the warmer the surface, the faster it rolls.

“This has been done before with smooth-sided rods, but that shape has a drawback — when it encounters an object, it simply spins in place,” says Yin. “The soft robot we’ve made in a twisted ribbon shape is capable of negotiating these obstacles with no human or computer intervention whatsoever.”

The ribbon robot does this in two ways. First, if one end of the ribbon encounters an object, the ribbon rotates slightly to get around the obstacle. Second, if the central part of the robot encounters an object, it “snaps.” The snap is a rapid release of stored deformation energy that causes the ribbon to jump slightly and reorient itself before landing. The ribbon may need to snap more than once before finding an orientation that allows it to negotiate the obstacle, but ultimately it always finds a clear path forward.

“In this sense, it’s much like the robotic vacuums that many people use in their homes,” Yin says. “Except the soft robot we’ve created draws energy from its environment and operates without any computer programming.”

“The two actions, rotating and snapping, that allow the robot to negotiate obstacles operate on a gradient,” says Yao Zhao, first author of the paper and a postdoctoral researcher at NC State. “The most powerful snap occurs if an object touches the center of the ribbon. But the ribbon will still snap if an object touches the ribbon away from the center, it’s just less powerful. And the further you are from the center, the less pronounced the snap, until you reach the last fifth of the ribbon’s length, which does not produce a snap at all.”

The researchers conducted multiple experiments demonstrating that the ribbon-like soft robot is capable of navigating a variety of maze-like environments. The researchers also demonstrated that the soft robots would work well in desert environments, showing they were capable of climbing and descending slopes of loose sand.

“This is interesting, and fun to look at, but more importantly it provides new insights into how we can design soft robots that are capable of harvesting heat energy from natural environments and autonomously negotiating complex, unstructured settings such as roads and harsh deserts,” Yin says.

Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse

by Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im, Keon Jae Lee in Nature Communications

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

Neuromorphic memory device consisting of bottom volatile and top nonvolatile memory layers emulating neuronal and synaptic properties, respectively.

Similar to commercial graphics cards, the artificial synaptic devices previously studied were often used to accelerate parallel computations, which shows clear differences from the operational mechanisms of the human brain. The research team implemented the synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.

Retraining operation in the neuromorphic device array. a) Schematic graph showing the retraining effect. b) Scanning electron microscope image of the neuromorphic device array. c) Training pattern “F” for the retraining test. d) Evolution of the memory state of the neuromorphic device array for the naive training and retraining scheme.

The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.

Professor Keon Jae Lee explained, “Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses.”
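
A purely behavioral toy model (not the device physics) can illustrate how a volatile neuron-like state and a nonvolatile synapse-like weight interact in one unit cell: the volatile state integrates input and leaks away, while each firing leaves a persistent trace that makes the same pattern easier to relearn. Every name and constant below is an illustrative assumption:

```python
# Toy unit cell: volatile "neuron" state + nonvolatile "synapse" weight.
# Illustrative dynamics only; not a model of the actual memristor stack.

class NeuromorphicCell:
    def __init__(self, threshold=1.0, leak=0.8, learn_rate=0.05):
        self.v = 0.0    # volatile state: decays every step (neuron-like)
        self.w = 0.5    # nonvolatile weight: persists (synapse-like)
        self.threshold, self.leak, self.learn_rate = threshold, leak, learn_rate

    def step(self, stimulus):
        self.v = self.leak * self.v + self.w * stimulus   # integrate + leak
        if self.v >= self.threshold:                      # neuron fires
            self.v = 0.0                                  # volatile reset
            self.w = min(1.0, self.w + self.learn_rate)   # potentiate synapse
            return True
        return False

cell = NeuromorphicCell()
spikes = [cell.step(0.6) for _ in range(20)]
print(f"fired {sum(spikes)} times, final weight w = {cell.w:.2f}")
# Because w persists and grows with use, a previously trained pattern
# re-crosses threshold sooner: a crude analogue of the retraining effect.
```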

P2SLAM: Bearing Based WiFi SLAM for Indoor Robots

by Aditya Arun, Roshan Ayyalasomayajula, William Hunter, Dinesh Bharadia in IEEE Robotics and Automation Letters

Engineers at the University of California San Diego have developed a low cost, low power technology to help robots accurately map their way indoors, even in poor lighting and without recognizable landmarks or features.

The technology consists of sensors that use WiFi signals to help the robot map where it’s going. It’s a new approach to indoor robot navigation. Most systems rely on optical light sensors such as cameras and LiDARs. In this case, the so-called “WiFi sensors” use radio frequency signals rather than light or visual cues to see, so they can work in conditions where cameras and LiDARs struggle — in low light, changing light, and repetitive environments such as long corridors and warehouses. And by using WiFi, the technology could offer an economical alternative to expensive and power-hungry LiDARs, the researchers noted. The work was carried out by a team of researchers from the Wireless Communication Sensing and Networking Group, led by UC San Diego electrical and computer engineering professor Dinesh Bharadia.

The robot’s movement over three timestamps is shown. Relative odometry is measured between two consecutive robot poses. At each timestamp, the robot pings an AP (orange arrow) and receives a pong reply (green arrow). For each ping and pong transmission, the AP-sided and robot-sided bearings of the signal are computed, respectively.

“We are surrounded by wireless signals almost everywhere we go. The beauty of this work is that we can use these everyday signals to do indoor localization and mapping with robots,” said Bharadia.

“Using WiFi, we have built a new kind of sensing modality that fills in the gaps left behind by today’s light-based sensors, and it can enable robots to navigate in scenarios where they currently cannot,” added Aditya Arun, who is an electrical and computer engineering Ph.D. student in Bharadia’s lab and the first author of the study.

The researchers built their prototype system using off-the-shelf hardware. The system consists of a robot that has been equipped with the WiFi sensors, which are built from commercially available WiFi transceivers. These devices transmit and receive wireless signals to and from WiFi access points in the environment. What makes these WiFi sensors special is that they use this constant back and forth communication with the WiFi access points to map the robot’s location and direction of movement.

“This two-way communication is already happening between mobile devices like your phone and WiFi access points all the time — it’s just not telling you where you are,” said Roshan Ayyalasomayajula, who is also an electrical and computer engineering Ph.D. student in Bharadia’s lab and a co-author on the study. “Our technology piggybacks on that communication to do localization and mapping in an unknown environment.”

P2SLAM’s Design Overview.

Here’s how it works. At the start, the WiFi sensors are unaware of the robot’s location and where any of the WiFi access points are in the environment. Figuring that out is like playing a game of Marco Polo — as the robot moves, the sensors call out to the access points and listen for their replies, using them as landmarks. The key here is that every incoming and outgoing wireless signal carries its own unique physical information — an angle of arrival and direct path length to (or from) an access point — that can be used to figure out where the robot and access points are in relation to each other. Algorithms developed by Bharadia’s team enable the WiFi sensors to extract this information and make these calculations. As the call and response continues, the sensors pick up more information and can accurately locate where the robot is going.
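
The geometric core of that idea, fixing a position from bearings to known landmarks, fits in a few lines. A minimal sketch follows; note that P2SLAM actually solves jointly for the robot and access-point poses over time, whereas here the AP positions are assumed known to keep the example small:

```python
# Least-squares triangulation from bearings to WiFi access points.
# Simplification of the real system: AP positions are assumed known.
import numpy as np

def locate(aps, bearings):
    """aps: (N,2) known AP positions; bearings: N angles (rad) from
    each AP toward the robot, in a shared global frame."""
    A, b = [], []
    for (ax, ay), theta in zip(aps, bearings):
        # Points p on the bearing ray satisfy n . p = n . ap,
        # where n is the unit normal to the ray direction.
        n = np.array([-np.sin(theta), np.cos(theta)])
        A.append(n)
        b.append(n @ np.array([ax, ay]))
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([4.0, 3.0])
bearings = np.arctan2(true_pos[1] - aps[:, 1], true_pos[0] - aps[:, 0])
bearings += np.random.default_rng(0).normal(0, 0.02, size=3)  # AoA noise
print(locate(aps, bearings))   # ~ [4.0, 3.0]
```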

The researchers tested their technology on a floor of an office building. They placed several access points around the space and equipped a robot with the WiFi sensors, as well as a camera and a LiDAR to perform measurements for comparison. The team controlled their robot to travel several times around the floor, turning corners, going down long and narrow corridors, and passing through both bright and dimly lit spaces.

In these tests, the accuracy of localization and mapping provided by the WiFi sensors was on par with that of the commercial camera and LiDAR sensors.

“We can use WiFi signals, which are essentially free, to do robust and reliable sensing in visually challenging environments,” said Arun. “WiFi sensing could potentially replace expensive LiDARs and complement other low cost sensors such as cameras in these scenarios.”

That’s what the team is now exploring. The researchers will be combining WiFi sensors (which provide accuracy and reliability) with cameras (which provide visual and contextual information about the environment) to develop a more complete, yet inexpensive, mapping technology.

Demonstration of Neuromodulation‐inspired Stashing System for Energy‐efficient Learning of Spiking Neural Network using a Self‐Rectifying Memristor Array

by Woon Hyung Cheong, Jae Bum Jeon, Jae Hyun In, Geunyoung Kim, Hanchan Song, Janho An, Juseong Park, Young Seok Kim, Cheol Seong Hwang, Kyung Min Kim in Advanced Functional Materials

Researchers have proposed a novel system inspired by the neuromodulation of the brain, referred to as a ‘stashing system,’ that requires less energy consumption. The research group led by Professor Kyung Min Kim from the Department of Materials Science and Engineering has developed a technology that can efficiently handle mathematical operations for artificial intelligence by imitating the continuous changes in the topology of the neural network according to the situation. The human brain changes its neural topology in real time, learning to store or recall memories as needed. The research group presented a new artificial intelligence learning method that directly implements these neural coordination circuit configurations.

Research on artificial intelligence is very active, and the development and release of AI-based electronic devices are accelerating, especially in the Fourth Industrial Revolution age. To implement artificial intelligence in electronic devices, customized hardware development should also be supported. However, most electronic devices for artificial intelligence require high power consumption and highly integrated memory arrays for large-scale tasks. It has been challenging to overcome these power-consumption and integration limitations, and efforts have been made to learn how the human brain solves such problems.

CTM array device and neuromorphic testing platform.

To prove the efficiency of the developed technology, the research group created artificial neural network hardware equipped with a self-rectifying synaptic array and an algorithm called a ‘stashing system’ that was developed to conduct artificial intelligence learning. As a result, the stashing system reduced energy consumption by 37% without any accuracy degradation. This result proves that emulating the neuromodulation of humans is possible.

Professor Kim said, “In this study, we implemented the learning method of the human brain with only a simple circuit composition and through this we were able to reduce the energy needed by nearly 40 percent.”

This neuromodulation-inspired stashing system that mimics the brain’s neural activity is compatible with existing electronic devices and commercialized semiconductor hardware. It is expected to be used in the design of next-generation semiconductor chips for artificial intelligence.

Smart Core‐Shell Nanostructures for Force, Humidity, and Temperature Multi‐Stimuli Responsiveness

by Taher Abu Ali, Philipp Schäffner, Maria Belegratis, Gerburg Schider, Barbara Stadlober, Anna Maria Coclite in Advanced Materials Technologies

The “smart skin” developed by Anna Maria Coclite is very similar to human skin. It senses pressure, humidity and temperature simultaneously and produces electronic signals. More sensitive robots or more intelligent prostheses are thus conceivable.

The skin is the largest sensory organ and at the same time the protective coat of the human being. It “feels” several sensory inputs at the same time and reports information about humidity, temperature and pressure to the brain. For Anna Maria Coclite, a material with such multisensory properties is “a kind of ‘holy grail’ in the technology of intelligent artificial materials. In particular, robotics and smart prosthetics would benefit from a better integrated, more precise sensing system similar to human skin.” The ERC grant winner and researcher at the Institute of Solid State Physics at TU Graz has succeeded in developing the three-in-one hybrid material “smart skin” for the next generation of artificial, electronic skin using a novel process.

a) Cross-sectional view of the core-shell nanorod sensing concept, where force is directly sensed by a ZnO piezoelectric shell. The hydrogel core, which swells, senses humidity and temperature changes, and a resultant stress is applied onto the ZnO piezoelectric shell. b) F-H-T responsive sensor fabrication routine (dimensions are not shown to scale). Starting with a PET substrate, a BE is deposited using e-beam evaporation. A PUA template layer is then applied and nanostructured using UV-NIL. The ZnO piezoelectric shell is deposited using PEALD. Next, the hydrogel core consisting of p(NVCL-co-DEGDVE) is deposited using iCVD and finally, two TE designs are deposited with e-beam evaporation (indicated as single electrode field and six electrode fields). c) Scanning electron microscopy (SEM) image of a patterned PUA template prior to filling with core-shell structures. d) Photograph of the complete sensor design with a 1 cm2 TE active area under bending. e) Colorized SEM image featuring three core-shell nanorod structures: a conformal ZnO shell (yellow) deposited on the nanopatterned PUA (dark blue) and the hydrogel core (navy blue) completely filling the nanoholes. f) Corresponding geometry model used for finite element method (FEM) simulations.

For almost six years, the team worked on the development of smart skin as part of Coclite’s ERC project Smart Core. With 2,000 individual sensors per square millimetre, the hybrid material is even more sensitive than a human fingertip. Each of these sensors consists of a unique combination of materials: a smart polymer in the form of a hydrogel inside and a shell of piezoelectric zinc oxide. Coclite explains: “The hydrogel can absorb water and thus expands upon changes in humidity and temperature. In doing so, it exerts pressure on the piezoelectric zinc oxide, which responds to this and all other mechanical stresses with an electrical signal.”

The result is a wafer-thin material that reacts simultaneously to force, moisture and temperature with extremely high spatial resolution and emits corresponding electronic signals. “The first artificial skin samples are six micrometres thin, or 0.006 millimetres. But it could be even thinner,” says Anna Maria Coclite. In comparison, the human epidermis is 0.03 to 2 millimetres thick. The human skin perceives things from a size of about one square millimetre. The smart skin has a resolution that is a thousand times smaller and can register objects that are too small for human skin (such as microorganisms).
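
A quick back-of-envelope check, using only the numbers quoted above:

```python
# Spatial-resolution arithmetic from the figures in the text.
sensors_per_mm2 = 2000                      # stated sensor density
area_per_sensor_mm2 = 1 / sensors_per_mm2   # 0.0005 mm^2 per sensor
pitch_um = (area_per_sensor_mm2 ** 0.5) * 1000
human_resolution_mm2 = 1.0                  # ~1 mm^2 for human skin

print(f"per-sensor footprint: {area_per_sensor_mm2} mm^2, "
      f"pitch ~{pitch_um:.0f} um, "
      f"{human_resolution_mm2 / area_per_sensor_mm2:.0f}x finer than skin")
# -> 0.0005 mm^2, ~22 um pitch, 2000x finer (on the order of the
#    "thousand times" the text describes).
```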

The individual sensor layers are very thin and at the same time equipped with sensor elements covering the entire surface. This was made possible by a process unique worldwide, in which the researchers combined three known methods from physical chemistry for the first time: chemical vapour deposition for the hydrogel material, atomic layer deposition for the zinc oxide, and nanoimprint lithography for the polymer template. The lithographic preparation of the polymer template was the responsibility of the research group “Hybrid electronics and structuring” headed by Barbara Stadlober. The group is part of Joanneum Research’s Materials Institute based in Weiz.

Several fields of application are now opening up for the skin-like hybrid material. In healthcare, for example, the sensor material could independently detect microorganisms and report them accordingly. Also conceivable are prostheses that give the wearer information about temperature or humidity, or robots that can perceive their environment more sensitively. On the path to application, smart skin scores with a decisive advantage: the sensory nanorods — the “smart core” of the material — are produced using a vapor-based manufacturing process. This process is already well established in production plants for integrated circuits, for example. The production of smart skin can thus be easily scaled and implemented in existing production lines.

Use of machine-learning algorithms to aid in the early detection of leptospirosis in dogs

by Krystle L. Reagan, Shaofeng Deng, Junda Sheng, Jamie Sebastian, Zhe Wang, Sara N. Huebner, Louise A. Wenke, Sarah R. Michalak, Thomas Strohmer, Jane E. Sykes in Journal of Veterinary Diagnostic Investigation

Leptospirosis, a disease that dogs can get from drinking water contaminated with Leptospira bacteria, can cause kidney failure, liver disease and severe bleeding into the lungs. Early detection of the disease is crucial and may mean the difference between life and death.

Veterinarians and researchers at the University of California, Davis, School of Veterinary Medicine have discovered a technique to predict leptospirosis in dogs through the use of artificial intelligence. After many months of testing various models, the team has developed one that outperformed traditional testing methods and provided accurate early detection of the disease. The groundbreaking discovery was published in Journal of Veterinary Diagnostic Investigation.

“Traditional testing for Leptospira lacks sensitivity early in the disease process,” said lead author Krystle Reagan, a board-certified internal medicine specialist and assistant professor focusing on infectious diseases. “Detection also can take more than two weeks because of the need to demonstrate a rise in the level of antibodies in a blood sample. Our AI model eliminates those two roadblocks to a swift and accurate diagnosis.”

The research involved historical data of patients at the UC Davis Veterinary Medical Teaching Hospital that had been tested for leptospirosis. Routinely collected blood work from these 413 dogs was used to train an AI prediction model. Over the next year, the hospital treated an additional 53 dogs with suspected leptospirosis. The model correctly identified all nine dogs that were positive for leptospirosis (100% sensitivity). The model also correctly identified approximately 90% of the 44 dogs that were ultimately leptospirosis negative. The goal is for the model to become an online resource where veterinarians can enter patient data and receive a timely prediction.
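
The reported metrics follow directly from those counts. A small sketch that reconstructs them with scikit-learn (the split of the 44 negative dogs into 40 correct and 4 incorrect is illustrative, inferred from the “approximately 90%” figure):

```python
# Reconstructing sensitivity/specificity from the prospective cohort.
from sklearn.metrics import confusion_matrix

# 53 dogs: 9 confirmed positive, 44 negative (~90% correctly classified).
y_true = [1] * 9 + [0] * 44
y_pred = [1] * 9 + [0] * 40 + [1] * 4   # illustrative 40/4 split

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.0%}")   # 9/9   -> 100%
print(f"specificity = {tn / (tn + fp):.0%}")   # 40/44 -> ~91%
```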

“AI-based, clinical decision making is going to be the future for many aspects of veterinary medicine,” said School of Veterinary Medicine Dean Mark Stetter. “I am thrilled to see UC Davis veterinarians and scientists leading that charge. We are committed to putting resources behind AI ventures and look forward to partnering with researchers, philanthropists, and industry to advance this science.”

Leptospirosis is a life-threatening zoonotic disease, meaning it can transfer from animals to humans. As the disease is also difficult to diagnose in people, Reagan hopes the technology behind this groundbreaking detection model has translational ability into human medicine.

“My hope is this technology will be able to recognize cases of leptospirosis in near real time, giving clinicians and owners important information about the disease process and prognosis,” said Reagan. “As we move forward, we hope to apply AI methods to improve our ability to quickly diagnose other types of infections.”

Videos

  • After four years of development, Flyability has announced the Elios 3, which you are more than welcome to smash into anything you like:
  • Skybrush is a drone-show management platform that’s now open source, and if drone shows aren’t your thing, it’s also good for coordinating multiple drones in any other way you want:
  • NASA’s InSight lander touched down in the Elysium Planitia region of Mars in November of 2018. During its time on the Red Planet, InSight has achieved all its primary science goals and continues to hunt for quakes on Mars:
  • Enjoy 8 minutes of fast-paced, extremely dramatic, absolutely mind-blowing robot football highlights:

Upcoming events

RSS 2022: 27 June – 1 July 2022, New York

ERF 2022: 28–30 June 2022, Rotterdam, The Netherlands

ROBOCUP 2022: 11–17 July 2022, Bangkok, Thailand

IEEE CASE 2022: 20–24 August 2022, Mexico City, Mexico

CLAWAR 2022: 12–14 September 2022, Açores, Portugal

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
