RT/ Artificial ‘neurotransistor’ created

Paradigm · Jul 23, 2020

Robotics biweekly vol.9, 9th July — 23rd July

TL;DR

  • While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers a blueprint for how information can be processed and stored efficiently: our own brain. Scientists have now successfully imitated the functioning of neurons using semiconductor materials.
  • Robotic gripper with soft sensitive fingers developed at MIT can handle cables with unprecedented dexterity.
  • A novel system developed by computer scientists and materials engineers combines an artificial brain system with human-like electronic skin and vision sensors to make robots smarter.
  • A robot travelling from point A to point B is more efficient if it understands that point A is the living room couch and point B is a refrigerator. That’s the common sense idea behind a ‘semantic’ navigation system.
  • Medicated chewing gum has been recognized as a new advanced drug delivery method but currently there is no gold standard for testing drug release from chewing gum in vitro. New research has shown a chewing robot with built-in humanoid jaws could provide opportunities for pharmaceutical companies to develop medicated chewing gum.
  • Researchers have developed a tiny plastic robot, made of responsive polymers, which moves under the influence of light and magnetism. In the future this ‘wireless aquatic polyp’ should be able to attract and capture contaminant particles from the surrounding liquid or pick up and transport cells for analysis in diagnostic devices.
  • Kitchen robots are a popular vision of the future, but if a robot of today tries to grasp a kitchen staple such as a clear measuring cup or a shiny knife, it likely won’t be able to. Transparent and reflective objects are the things of robot nightmares. Roboticists at Carnegie Mellon University, however, report success with a new technique they’ve developed for teaching robots to pick up these troublesome objects.
  • Researchers have developed a tiny camera that can ride aboard an insect or an insect-sized robot.
  • The new model helps robots understand their environment as humans do.
  • The July issue of Science Robotics is out, presenting artificial webs that take on the roles of both spiderweb and spider; a wireless vision system for tiny robots; socially assistive robots for the COVID-19 pandemic and beyond; and more.
  • Check out robotics upcoming events (mostly virtual) below. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025. It is predicted that this market will hit the 100 billion U.S. dollar mark in 2020.

Size of the global market for industrial and non-industrial robots between 2018 and 2025, in billion U.S. dollars. Source: Statista

Research articles

Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions

by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban, Gianaurelio Cuniberti in Nature Electronics

Activities in the field of artificial intelligence in particular, such as teaching robots to walk or precise automatic image recognition, demand ever more powerful yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers a blueprint for how information can be processed and stored quickly and efficiently: our own brain. For the first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have successfully imitated the functioning of brain neurons using semiconductor materials. They have published their results in the journal Nature Electronics.

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely — we need new approaches,” Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance, called a sol-gel, to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move within the pores. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor: “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
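
To make the learning effect concrete, here is a minimal toy model in Python of a single element whose effective threshold drops as excitation accumulates and then slowly relaxes back. The class name, constants, and update rule are illustrative assumptions, not the device physics reported in the Nature Electronics paper.

```python
# Toy model (illustrative only): an element whose effective threshold drops
# with repeated excitation and slowly relaxes back, loosely mimicking the
# ion-hysteresis storage effect described above. Names, constants, and the
# update rule are assumptions, not the published device physics.

class ToyNeurotransistor:
    def __init__(self, base_threshold=1.0, learn_rate=0.02, decay=0.01):
        self.base_threshold = base_threshold  # gate level needed to conduct, initially
        self.weight = 0.0                     # slowly relaxing "memory" of past excitation
        self.learn_rate = learn_rate
        self.decay = decay

    def step(self, gate_voltage):
        """Apply one excitation pulse; return True if the channel conducts."""
        # Excitation displaces ions in the porous film; because the displacement
        # relaxes slowly, repeated pulses accumulate (hysteresis) and lower the
        # effective threshold, i.e. the device "opens sooner".
        self.weight += self.learn_rate * gate_voltage * (1.0 - self.weight)
        conducts = gate_voltage >= self.base_threshold - self.weight
        self.weight *= (1.0 - self.decay)     # slow leak back toward the resting state
        return conducts


if __name__ == "__main__":
    device = ToyNeurotransistor()
    for i in range(10):
        # A pulse that is sub-threshold at first starts to switch the device
        # on after a few repetitions: the "connection" has been strengthened.
        print(i, device.step(0.95))
```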

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

Giving robots human-like perception of their physical environments

Wouldn’t we all appreciate a little help around the house, especially if that help came in the form of a smart, adaptable, uncomplaining robot? Sure, there are the one-trick Roombas of the appliance world. But MIT engineers are envisioning robots more like home helpers, able to follow high-level, Alexa-type commands, such as “Go to the kitchen and fetch me a coffee cup.”

To carry out such high-level tasks, researchers believe robots will have to be able to perceive their physical environment as humans do.

“In order to make any decision in the world, you need to have a mental model of the environment around you,” says Luca Carlone, assistant professor of aeronautics and astronautics at MIT. “This is something so effortless for humans. But for robots, it’s a painfully hard problem of transforming the pixel values they see through a camera into an understanding of the world.” Now Carlone and his students have developed a representation of spatial perception for robots that is modeled after the way humans perceive and navigate the world.

The new model, which they call 3D Dynamic Scene Graphs, enables a robot to quickly generate a 3D map of its surroundings that also includes objects and their semantic labels (a chair versus a table, for instance), as well as people, rooms, walls, and other structures that the robot is likely seeing in its environment.

The model also allows the robot to extract relevant information from the 3D map, to query the location of objects and rooms, or the movement of people in its path.

“This compressed representation of the environment is useful because it allows our robot to quickly make decisions and plan its path,” Carlone says. “This is not too far from what we do as humans. If you need to plan a path from your home to MIT, you don’t plan every single position you need to take. You just think at the level of streets and landmarks, which helps you plan your route faster.”

Beyond domestic helpers, Carlone says robots that adopt this new kind of mental model of the environment may also be suited for other high-level jobs, such as working side by side with people on a factory floor or exploring a disaster site for survivors.

He and his students, including lead author and MIT graduate student Antoni Rosinol, will present their findings this week at the Robotics: Science and Systems virtual conference.

A mapping mix

At the moment, robotic vision and navigation have advanced mainly along two routes: 3D mapping, which enables robots to reconstruct their environment in three dimensions as they explore in real time; and semantic segmentation, which helps a robot classify features in its environment as semantic objects, such as a car versus a bicycle, and which so far has mostly been done on 2D images.

Carlone and Rosinol’s new model of spatial perception is the first to generate a 3D map of the environment in real time while also labeling objects, people (which, unlike objects, are dynamic), and structures within that 3D map.

The key component of the team’s new model is Kimera, an open-source library that the team previously developed to simultaneously construct a 3D geometric model of an environment, while encoding the likelihood that an object is, say, a chair versus a desk.

“Like the mythical creature that is a mix of different animals, we wanted Kimera to be a mix of mapping and semantic understanding in 3D,” Carlone says.

Kimera works by taking in streams of images from a robot’s camera, as well as inertial measurements from onboard sensors, to estimate the trajectory of the robot or camera and to reconstruct the scene as a 3D mesh, all in real-time.

To generate a semantic 3D mesh, Kimera uses an existing neural network trained on millions of real-world images, to predict the label of each pixel, and then projects these labels in 3D using a technique known as ray-casting, commonly used in computer graphics for real-time rendering.
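
As a rough sketch of that projection step, the snippet below back-projects per-pixel labels into a labeled 3D point cloud with a pinhole camera model, assuming a depth image and camera intrinsics are available. It is a generic illustration of casting pixel rays into 3D, not Kimera's implementation.

```python
# Minimal sketch (not Kimera's code): cast each labeled pixel's ray into 3D
# using a pinhole model, producing a labeled point cloud that could then be
# fused into a semantic mesh.
import numpy as np

def labels_to_3d(labels, depth, fx, fy, cx, cy):
    """Back-project per-pixel semantic labels into camera-frame 3D points.

    labels: (H, W) integer class ids predicted by the 2D network
    depth:  (H, W) metric depth along the optical axis (0 = invalid)
    fx, fy, cx, cy: pinhole intrinsics
    Returns (N, 3) points and (N,) labels for the valid pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                     # scale each ray by its depth
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1), labels[valid]

if __name__ == "__main__":
    depth = np.full((4, 4), 2.0)                     # a flat surface 2 m away
    labels = np.zeros((4, 4), dtype=int)             # class 0, say "wall"
    pts, lbls = labels_to_3d(labels, depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
    print(pts.shape, lbls.shape)                     # (16, 3) (16,)
```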

The result is a map of a robot’s environment that resembles a dense, three-dimensional mesh, where each face is color-coded as part of the objects, structures, and people within the environment.

A layered scene

If a robot were to rely on this mesh alone to navigate through its environment, it would be a computationally expensive and time-consuming task. So the researchers built off Kimera, developing algorithms to construct 3D dynamic “scene graphs” from Kimera’s initial, highly dense, 3D semantic mesh.

Scene graphs are popular computer graphics models that manipulate and render complex scenes, and are typically used in video game engines to represent 3D environments.

In the case of the 3D dynamic scene graphs, the associated algorithms abstract, or break down, Kimera’s detailed 3D semantic mesh into distinct semantic layers, such that a robot can “see” a scene through a particular layer, or lens. The layers progress in hierarchy from objects and people, to open spaces and structures such as walls and ceilings, to rooms, corridors, and halls, and finally whole buildings.

Carlone says this layered representation avoids a robot having to make sense of billions of points and faces in the original 3D mesh.
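
The layered idea can be sketched as a small hierarchy of nodes with a query over it. The node types, fields, and search helper below are assumptions made for illustration; they are not the authors' data structures or API.

```python
# Illustrative sketch of a layered scene-graph structure in the spirit of
# 3D Dynamic Scene Graphs; node types and the query helper are assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    layer: str                  # "building", "room", "place", "object", "agent"
    label: str                  # semantic label, e.g. "kitchen" or "chair"
    children: list = field(default_factory=list)   # links to the layer below

    def add(self, child):
        self.children.append(child)
        return child

def find(root, layer, label):
    """Depth-first search for nodes of a given layer and semantic label."""
    hits, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.layer == layer and node.label == label:
            hits.append(node)
        stack.extend(node.children)
    return hits

# Build a tiny graph: building -> rooms -> objects and people
building = Node("b0", "building", "house")
kitchen = building.add(Node("r0", "room", "kitchen"))
living = building.add(Node("r1", "room", "living room"))
kitchen.add(Node("o0", "object", "refrigerator"))
living.add(Node("o1", "object", "couch"))
living.add(Node("a0", "agent", "person"))

# Query the compressed representation instead of billions of mesh faces:
print([n.node_id for n in find(building, "object", "refrigerator")])  # ['o0']
```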

Within the layer of objects and people, the researchers have also been able to develop algorithms that track the movement and the shape of humans in the environment in real time.

The team tested their new model in a photo-realistic simulator, developed in collaboration with MIT Lincoln Laboratory, that simulates a robot navigating through a dynamic office environment filled with people moving around.

“We are essentially enabling robots to have mental models similar to the ones humans use,” Carlone says. “This can impact many applications, including self-driving cars, search and rescue, collaborative manufacturing, and domestic robotics.

Another domain is virtual and augmented reality (AR). Imagine wearing AR goggles that run our algorithm: The goggles would be able to assist you with queries such as ‘Where did I leave my red mug?’ and ‘What is the closest exit?’

You can think about it as an Alexa which is aware of the environment around you and understands objects, humans, and their relations.”

Letting robots manipulate cables

Robotic gripper with soft sensitive fingers developed at MIT can handle cables with unprecedented dexterity.

For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.

Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and from the MIT Department of Mechanical Engineering pursued the task from a different angle, in a manner that more closely mimics us humans. The team’s new system uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.

One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing.

The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based “GelSight” sensors, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.

The team’s second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.
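
Below is a hedged sketch of those two parallel loops, assuming a simple proportional form for each; the estimator, gains, and units are placeholders standing in for the GelSight-based pipeline rather than the authors' controllers.

```python
# Sketch of the two parallel control loops described above: one regulates grip
# force for smooth sliding, the other re-centers the gripper so the cable stays
# between the fingers. Estimator, gains, and units are placeholder assumptions.

def estimate_cable_state(tactile_image):
    """Placeholder estimator: the real system reads the cable pose and friction
    force out of the GelSight imprint; here we return fixed fake values."""
    return 3.0, 0.4   # (offset from the finger centerline in mm, friction in N)

def grip_force_controller(friction, target_friction=0.3, kp=2.0, force=5.0):
    """Proportional adjustment of grip force so the cable keeps sliding smoothly."""
    return force + kp * (target_friction - friction)

def pose_controller(offset_mm, kp=0.05):
    """Proportional lateral correction (in meters) that keeps the cable centered."""
    return -kp * offset_mm / 1000.0

def follow_cable_step(tactile_image, current_force):
    """One iteration of cable following: sense, then update force and pose."""
    offset, friction = estimate_cable_state(tactile_image)
    new_force = grip_force_controller(friction, force=current_force)
    lateral_correction = pose_controller(offset)
    return new_force, lateral_correction   # sent to the gripper and the arm

print(follow_cable_step(tactile_image=None, current_force=5.0))
```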

When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.

As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.

“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.”

String me along

Cable following is challenging for two reasons. First, it requires controlling both the “grasp force” (to enable smooth sliding) and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).

Second, this information is hard to capture with conventional vision systems during continuous manipulation: it is usually occluded, expensive to interpret, and sometimes inaccurate, and it cannot be directly observed with vision sensors alone. Hence the team’s use of tactile sensors. The gripper’s joints are also flexible, protecting them from potential impact.

The algorithms also generalize to cables with different physical properties, such as material, stiffness, and diameter, and to cables moving at different speeds.

When different controllers were compared on the team’s gripper, their control policy retained the cable in hand over longer distances than three alternatives. The “open-loop” controller, for example, followed only 36 percent of the total length, easily lost the cable when it curved, and needed many regrasps to finish the task.

Looking ahead

The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance.

In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.

Yu She wrote the paper alongside MIT PhD students Shaoxiong Wang, Siyuan Dong, and Neha Sunil; Alberto Rodriguez, MIT associate professor of mechanical engineering; and Edward Adelson, the John and Dorothy Wilson Professor in the MIT Department of Brain and Cognitive Sciences.

Which way to the fridge? Common sense helps robots navigate

A robot travelling from point A to point B is more efficient if it understands that point A is the living room couch and point B is a refrigerator. That’s the common sense idea behind a ‘semantic’ navigation system.

That navigation system, called SemExp, last month won the Habitat ObjectNav Challenge during the virtual Computer Vision and Pattern Recognition conference, edging a team from Samsung Research China. It was the second consecutive first-place finish for the CMU team in the annual challenge.

SemExp, or Goal-Oriented Semantic Exploration, uses machine learning to train a robot to recognize objects — knowing the difference between a kitchen table and an end table, for instance — and to understand where in a home such objects are likely to be found. This enables the system to think strategically about how to search for something, said Devendra S. Chaplot, a Ph.D. student in CMU’s Machine Learning Department.

“Common sense says that if you’re looking for a refrigerator, you’d better go to the kitchen,” Chaplot said. Classical robotic navigation systems, by contrast, explore a space by building a map showing obstacles. The robot eventually gets to where it needs to go, but the route can be circuitous.

Previous attempts to use machine learning to train semantic navigation systems have been hampered because they tend to memorize objects and their locations in specific environments. Not only are these environments complex, but the system often has difficulty generalizing what it has learned to different environments.

Chaplot — working with FAIR’s Dhiraj Gandhi, along with Abhinav Gupta, associate professor in the Robotics Institute, and Ruslan Salakhutdinov, professor in the Machine Learning Department — sidestepped that problem by making SemExp a modular system.

The system uses its semantic insights to determine the best places to look for a specific object, Chaplot said. “Once you decide where to go, you can just use classical planning to get you there.”

This modular approach turns out to be efficient in several ways. The learning process can concentrate on relationships between objects and room layouts, rather than also learning route planning. The semantic reasoning determines the most efficient search strategy. Finally, classical navigation planning gets the robot where it needs to go as quickly as possible.
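
The modular split can be illustrated in a few lines: a stand-in semantic policy proposes where to look, and a stand-in planner gets the robot there. The function names, the toy map, and the object-to-room associations below are placeholders, not SemExp's learned components.

```python
# Illustrative sketch of the modular idea behind SemExp (not the authors' code):
# a learned module decides *where* to search, classical planning decides *how*.
import random

def semantic_goal_policy(semantic_map, target_label):
    """Stand-in for the learned module: head toward the room type where the
    target object is most likely to be found (a hand-coded association here)."""
    related = {"refrigerator": "kitchen", "couch": "living room"}
    room = related.get(target_label)
    candidates = [cell for cell, label in semantic_map.items() if label == room]
    return random.choice(candidates) if candidates else random.choice(list(semantic_map))

def classical_planner(start, goal):
    """Stand-in for a classical path planner (e.g. A* over the obstacle map)."""
    return [start, goal]

def navigate(start, target_label, semantic_map):
    goal_cell = semantic_goal_policy(semantic_map, target_label)
    return classical_planner(start, goal_cell)

# Toy semantic map: grid cells labeled with the room they belong to
semantic_map = {(0, 0): "living room", (5, 5): "kitchen", (2, 3): "hallway"}
print(navigate((0, 0), "refrigerator", semantic_map))   # heads for the kitchen cell
```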

Semantic navigation ultimately will make it easier for people to interact with robots, enabling them to simply tell the robot to fetch an item in a particular place, or give it directions such as “go to the second door on the left.”

Researchers give robots intelligent sensing abilities to carry out complex tasks

A novel system developed by computer scientists and materials engineers combines an artificial brain system with human-like electronic skin and vision sensors to make robots smarter.

A team of computer scientists and materials engineers from the National University of Singapore (NUS) has recently demonstrated an exciting approach to make robots smarter. They developed a sensory integrated artificial brain system that mimics biological neural networks, which can run on a power-efficient neuromorphic processor, such as Intel’s Loihi chip. This novel system integrates artificial skin and vision sensors, equipping robots with the ability to draw accurate conclusions about the objects they are grasping based on the data captured by the vision and touch sensors in real-time.

“The field of robotic manipulation has made great progress in recent years. However, fusing both vision and tactile information to provide a highly precise response in milliseconds remains a technology challenge. Our recent work combines our ultra-fast electronic skins and nervous systems with the latest innovations in vision sensing and AI for robots so that they can become smarter and more intuitive in physical interactions,” said Assistant Professor Benjamin Tee from the NUS Department of Materials Science and Engineering. He co-leads this project with Assistant Professor Harold Soh from the Department of Computer Science at the NUS School of Computing.

The findings of this cross-disciplinary work were presented at the Robotics: Science and Systems conference in July 2020.

Human-like sense of touch for robots

Enabling a human-like sense of touch in robotics could significantly improve current functionality, and even lead to new uses. For example, on the factory floor, robotic arms fitted with electronic skins could easily adapt to different items, using tactile sensing to identify and grip unfamiliar objects with the right amount of pressure to prevent slipping.

In the new robotic system, the NUS team applied an advanced artificial skin known as Asynchronous Coded Electronic Skin (ACES) developed by Asst Prof Tee and his team in 2019. This novel sensor detects touches more than 1,000 times faster than the human sensory nervous system. It can also identify the shape, texture and hardness of objects 10 times faster than the blink of an eye.

“Making an ultra-fast artificial skin sensor solves about half the puzzle of making robots smarter. They also need an artificial brain that can ultimately achieve perception and learning as another critical piece in the puzzle,” added Asst Prof Tee, who is also from the NUS Institute for Health Innovation & Technology.

A human-like brain for robots

To break new ground in robotic perception, the NUS team explored neuromorphic technology — an area of computing that emulates the neural structure and operation of the human brain — to process sensory data from the artificial skin. As Asst Prof Tee and Asst Prof Soh are members of the Intel Neuromorphic Research Community (INRC), it was a natural choice to use Intel’s Loihi neuromorphic research chip for their new robotic system.

In their initial experiments, the researchers fitted a robotic hand with the artificial skin, and used it to read braille, passing the tactile data to Loihi via the cloud to convert the micro bumps felt by the hand into a semantic meaning. Loihi achieved over 92 per cent accuracy in classifying the Braille letters, while using 20 times less power than a normal microprocessor.

Asst Prof Soh’s team improved the robot’s perception capabilities by combining both vision and touch data in a spiking neural network. In their experiments, the researchers tasked a robot equipped with both artificial skin and vision sensors to classify various opaque containers containing differing amounts of liquid. They also tested the system’s ability to identify rotational slip, which is important for stable grasping.

In both tests, the spiking neural network that used both vision and touch data was able to classify objects and detect object slippage. The classification was 10 per cent more accurate than a system that used only vision. Moreover, using a technique developed by Asst Prof Soh’s team, the neural networks could classify the sensory data while it was being accumulated, unlike the conventional approach where data is classified after it has been fully gathered. In addition, the researchers demonstrated the efficiency of neuromorphic technology: Loihi processed the sensory data 21 per cent faster than a top performing graphics processing unit (GPU), while using more than 45 times less power.
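
The "classify while the data is still arriving" idea can be pictured with a generic evidence-accumulation loop like the one below. This is a plain Python sketch with an assumed confidence margin, not the team's spiking network or anything that runs on Loihi.

```python
# Sketch of early classification on a stream of sensor events: evidence is
# accumulated per class and a label is emitted as soon as one class is
# confidently ahead, instead of waiting for the full recording.

def early_classify(event_stream, n_classes, margin=5.0):
    """event_stream yields (class_id, evidence) pairs as sensing proceeds."""
    scores = [0.0] * n_classes
    t = -1
    for t, (cls, evidence) in enumerate(event_stream):
        scores[cls] += evidence
        ranked = sorted(scores, reverse=True)
        if ranked[0] - ranked[1] >= margin:           # confident margin reached
            return scores.index(ranked[0]), t          # decide before the stream ends
    return scores.index(max(scores)), t                # fall back to all the data

# Fake stream: evidence for class 2 trickles in over time
stream = [(2, 0.8), (0, 0.2), (2, 1.1), (2, 0.9), (1, 0.3), (2, 1.2), (2, 1.4), (2, 1.0)]
print(early_classify(iter(stream), n_classes=3))       # -> (2, 6): decided early
```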

Asst Prof Soh shared, “We’re excited by these results. They show that a neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception. It’s a step towards building power-efficient and trustworthy robots that can respond quickly and appropriately in unexpected situations.”

“This research from the National University of Singapore provides a compelling glimpse to the future of robotics where information is both sensed and processed in an event-driven manner combining multiple modalities. The work adds to a growing body of results showing that neuromorphic computing can deliver significant gains in latency and power consumption once the entire system is re-engineered in an event-based paradigm spanning sensors, data formats, algorithms, and hardware architecture,” said Mr Mike Davies, Director of Intel’s Neuromorphic Computing Lab.

This research was supported by the National Robotics R&D Programme Office (NR2PO), a set-up that nurtures the robotics ecosystem in Singapore by funding research and development (R&D) to enhance the readiness of robotics technologies and solutions. Key considerations for NR2PO’s R&D investments include the potential for impactful applications in the public sector and the potential to create differentiated capabilities for industry.

Transparent, reflective objects now within grasp of robots

Kitchen robots are a popular vision of the future, but if a robot of today tries to grasp a kitchen staple such as a clear measuring cup or a shiny knife, it likely won’t be able to. Transparent and reflective objects are the things of robot nightmares.

Roboticists at Carnegie Mellon University, however, report success with a new technique they’ve developed for teaching robots to pick up these troublesome objects. The technique doesn’t require fancy sensors, exhaustive training or human guidance, but relies primarily on a color camera. The researchers will present this new system during this summer’s International Conference on Robotics and Automation virtual conference.

David Held, an assistant professor in CMU’s Robotics Institute, said depth cameras, which shine infrared light on an object to determine its shape, work well for identifying opaque objects. But infrared light passes right through clear objects and scatters off reflective surfaces. Thus, depth cameras can’t calculate an accurate shape, resulting in largely flat or hole-riddled shapes for transparent and reflective objects.

But a color camera can see transparent and reflective objects as well as opaque ones. So CMU scientists developed a color camera system to recognize shapes based on color. A standard camera can’t measure shapes like a depth camera, but the researchers nevertheless were able to train the new system to imitate the depth system and implicitly infer shape to grasp objects. They did so using depth camera images of opaque objects paired with color images of those same objects.
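
A conceptual sketch of that paired-image training, assuming a PyTorch setup: a small color-only "student" network is fit to the grasp-quality maps a depth-based "teacher" produces on the same opaque scenes, so the color branch alone can later be applied to transparent and shiny objects. The architecture, tensor shapes, and random stand-in data are assumptions for illustration, not the CMU system.

```python
# Conceptual sketch (assumptions throughout, not the CMU training code): a
# color-only network learns to reproduce grasp scores supervised by a
# depth-based model on paired opaque-object images.
import torch
import torch.nn as nn

class ColorGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),             # per-pixel grasp-quality map
        )

    def forward(self, rgb):
        return self.net(rgb)

def train_step(student, teacher_scores, rgb, optimizer):
    """One imitation step: match the depth teacher's grasp-quality map."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(student(rgb), teacher_scores)
    loss.backward()
    optimizer.step()
    return loss.item()

# Fake paired batch: RGB images plus the teacher's scores on the same scenes
student = ColorGraspNet()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
rgb = torch.rand(4, 3, 64, 64)
teacher_scores = torch.rand(4, 1, 64, 64)    # would come from the depth model
print(train_step(student, teacher_scores, rgb, opt))
```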

Once trained, the color camera system was applied to transparent and shiny objects. Based on those images, along with whatever scant information a depth camera could provide, the system could grasp these challenging objects with a high degree of success.

“We do sometimes miss,” Held acknowledged, “but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects.”

The system can’t pick up transparent or reflective objects as efficiently as opaque objects, said Thomas Weng, a Ph.D. student in robotics. But it is far more successful than depth camera systems alone. And the multimodal transfer learning used to train the system was so effective that the color system proved almost as good as the depth camera system at picking up opaque objects.

“Our system not only can pick up individual transparent and reflective objects, but it can also grasp such objects in cluttered piles,” he added.

Other attempts at robotic grasping of transparent objects have relied on training systems based on exhaustively repeated attempted grasps — on the order of 800,000 attempts — or on expensive human labeling of objects.

The CMU system uses a commercial RGB-D camera that’s capable of both color images (RGB) and depth images (D). The system can use this single sensor to sort through recyclables or other collections of objects — some opaque, some transparent, some reflective.

Wireless steerable vision for live insects and insect-scale robots

by Vikram Iyer, Ali Najafi, Johannes James, Sawyer Fuller, Shyamnath Gollakota in Science Robotics

In the movie “Ant-Man,” the title character can shrink in size and travel by soaring on the back of an insect. Now researchers at the University of Washington have developed a tiny wireless steerable camera that can also ride aboard an insect, giving everyone a chance to see an Ant-Man view of the world.

The camera, which streams video to a smartphone at 1 to 5 frames per second, sits on a mechanical arm that can pivot 60 degrees. This allows a viewer to capture a high-resolution, panoramic shot or track a moving object while expending a minimal amount of energy. To demonstrate the versatility of this system, which weighs about 250 milligrams — about one-tenth the weight of a playing card — the team mounted it on top of live beetles and insect-sized robots.

The results will be published July 15 in Science Robotics.

“We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” said senior author Shyam Gollakota, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. “Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”

Typical small cameras, such as those used in smartphones, use a lot of power to capture wide-angle, high-resolution photos, and that doesn’t work at the insect scale. While the cameras themselves are lightweight, the batteries they need to support them make the overall system too big and heavy for insects — or insect-sized robots — to lug around. So the team took a lesson from biology.

“Similar to cameras, vision in animals requires a lot of power,” said co-author Sawyer Fuller, a UW assistant professor of mechanical engineering. “It’s less of a big deal in larger creatures like humans, but flies are using 10 to 20% of their resting energy just to power their brains, most of which is devoted to visual processing. To help cut the cost, some flies have a small, high-resolution region of their compound eyes. They turn their heads to steer where they want to see with extra clarity, such as for chasing prey or a mate. This saves power over having high resolution over their entire visual field.”

To mimic an animal’s vision, the researchers used a tiny, ultra-low-power black-and-white camera that can sweep across a field of view with the help of a mechanical arm. The arm moves when the team applies a high voltage, which makes the material bend and move the camera to the desired position. Unless the team applies more power, the arm stays at that angle for about a minute before relaxing back to its original position. This is similar to how people can keep their head turned in one direction for only a short period of time before returning to a more neutral position.

“One advantage to being able to move the camera is that you can get a wide-angle view of what’s happening without consuming a huge amount of power,” said co-lead author Vikram Iyer, a UW doctoral student in electrical and computer engineering. “We can track a moving object without having to spend the energy to move a whole robot. These images are also at a higher resolution than if we used a wide-angle lens, which would create an image with the same number of pixels divided up over a much larger area.”

The camera and arm are controlled via Bluetooth from a smartphone up to 120 meters away, just a little longer than a football field.

The researchers attached their removable system to the backs of two different types of beetles — a death-feigning beetle and a Pinacate beetle. Similar beetles have been known to be able to carry loads heavier than half a gram, the researchers said.

“We made sure the beetles could still move properly when they were carrying our system,” said co-lead author Ali Najafi, a UW doctoral student in electrical and computer engineering. “They were able to navigate freely across gravel, up a slope and even climb trees.”

The beetles also lived for at least a year after the experiment ended.

“We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said. “If the camera is just continuously streaming without this accelerometer, we could record one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
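
The accelerometer gating can be pictured as a small firmware-style loop like the one below; the sensor and radio functions, threshold, and polling interval are hypothetical placeholders, not the UW team's firmware.

```python
# Sketch of the power-saving idea: only capture and stream frames while the
# accelerometer reports motion. All hardware-facing functions are placeholders.
import time

MOTION_THRESHOLD_G = 0.05      # assumed motion threshold, in g
IDLE_POLL_S = 0.1              # how often to check the accelerometer when idle

def read_motion_magnitude():
    """Placeholder for an accelerometer read (|acceleration| minus gravity)."""
    return 0.0

def capture_and_send_frame():
    """Placeholder for grabbing one frame and streaming it over Bluetooth."""
    pass

def run():
    while True:
        if read_motion_magnitude() > MOTION_THRESHOLD_G:
            capture_and_send_frame()    # the beetle is moving: worth the power
        else:
            time.sleep(IDLE_POLL_S)     # otherwise stay quiet to save battery
```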

The researchers also used their camera system to design the world’s smallest terrestrial, power-autonomous robot with wireless vision. This insect-sized robot uses vibrations to move and consumes almost the same power as low-power Bluetooth radios need to operate.

The team found, however, that the vibrations shook the camera and produced distorted images. The researchers solved this issue by having the robot stop momentarily, take a picture and then resume its journey. With this strategy, the system was still able to move about 2 to 3 centimeters per second — faster than any other tiny robot that uses vibrations to move — and had a battery life of about 90 minutes.

While the team is excited about the potential for lightweight and low-power mobile cameras, the researchers acknowledge that this technology comes with a new set of privacy risks.

“As researchers we strongly believe that it’s really important to put things in the public domain so people are aware of the risks and so people can start coming up with solutions to address them,” Gollakota said.

Applications could range from biology to exploring novel environments, the researchers said. The team hopes that future versions of the camera will require even less power and be battery free, potentially solar-powered.

An artificial aquatic polyp that wirelessly attracts, grasps, and releases objects

by Marina Pilz da Cunha, Harkamaljot S. Kandail, Jaap M. J. den Toonder, Albert P. H. J. Schenning in Proceedings of the National Academy of Sciences

Researchers at Eindhoven University of Technology developed a tiny plastic robot, made of responsive polymers, which moves under the influence of light and magnetism. In the future this ‘wireless aquatic polyp’ should be able to attract and capture contaminant particles from the surrounding liquid or pick up and transport cells for analysis in diagnostic devices. The researchers published their results in the journal PNAS.

The mini robot is inspired by a coral polyp, a small soft creature with tentacles that makes up the corals in the ocean. Doctoral candidate Marina Pilz Da Cunha: “I was inspired by the motion of these coral polyps, especially their ability to interact with the environment through self-made currents.” The stem of a living polyp makes a specific movement that creates a current which attracts food particles; the tentacles then grab the food particles floating by.

The wireless artificial polyp measures 1 by 1 cm and has a stem that reacts to magnetism and tentacles that are steered by light. “Combining two different stimuli is rare, since it requires delicate material preparation and assembly, but it is interesting for creating untethered robots because it allows for complex shape changes and tasks to be performed,” explains Pilz Da Cunha. The tentacles move when light is shone on them, and different wavelengths lead to different results: the tentacles ‘grab’ under UV light and ‘release’ under blue light.

From Land To Water

The device presented now can grab and release objects underwater, something the light-guided package-delivery mini robot the researchers presented earlier this year could not do. That land-based robot couldn’t work underwater because its polymers act through photothermal effects: the heat generated by the light, rather than the light itself, fueled the robot. Pilz Da Cunha: “Heat dissipates in water, which makes it impossible to steer the robot under water.” She therefore developed a photomechanical polymer material that moves under the influence of light alone, not heat.

And that is not its only advantage. Besides operating underwater, this new material can hold its deformation after being activated by light. While the photothermal material returns to its original shape as soon as the stimulus is removed, the molecules in the photomechanical material actually take on a new state, which allows different stable shapes to be maintained for a longer period of time. “That helps to control the gripper arm; once something has been captured, the robot can keep holding it until it is addressed by light once again to release it,” says Pilz Da Cunha.

Flow Attracts Particles

By placing a rotating magnet underneath the robot, the researchers made the stem circle around its axis. Pilz Da Cunha: “It was therefore possible to actually move floating objects in the water towards the polyp, in our case oil droplets.”

The position of the tentacles (open, closed, or something in between) turned out to influence the fluid flow. “Computer simulations with different tentacle positions eventually helped us to understand and get the movement of the stem exactly right, and to ‘attract’ the oil droplets towards the tentacles,” explains Pilz Da Cunha.

Operation Independent Of The Water Composition

An added advantage is that the robot operates independently of the composition of the surrounding liquid. This is unique, because hydrogels, the dominant stimuli-responsive materials used for underwater applications today, are sensitive to their environment and therefore behave differently in contaminated water. Pilz Da Cunha: “Our robot works the same way in salt water, or in water with contaminants. In fact, in the future the polyp may be able to filter contaminants out of the water by catching them with its tentacles.”

Next Step: Swimming Robot

PhD student Pilz Da Cunha is now working on the next step: an array of polyps that can work together. She hopes to realize particle transport in which one polyp passes a package on to the next. A swimming robot is also on her wish list; here she is thinking of biomedical applications such as capturing specific cells.

To achieve this, the researchers still have to work on the wavelengths to which the material responds. “UV light affects cells, and its depth of penetration into the human body is limited. In addition, UV light might damage the robot itself, making it less durable. We therefore want to create a robot that doesn’t need UV light as a stimulus,” concludes Pilz Da Cunha.

Development of a Chewing Robot with Built-in Humanoid Jaws to Simulate Mastication to Quantify Robotic Agents Release from Chewing Gums Compared to Human Participants

by Kazem Alemzadeh, Sian Bodfel Jones, Maria Davies, Nicola West in IEEE Transactions on Biomedical Engineering

Medicated chewing gum has been recognized as a new advanced drug delivery method but currently there is no gold standard for testing drug release from chewing gum in vitro. New research has shown a chewing robot with built-in humanoid jaws could provide opportunities for pharmaceutical companies to develop medicated chewing gum.

Medicated chewing gum has been recognised as a new advanced drug delivery method, with a promising future. Its potential has not yet been fully exploited because currently there is no gold standard for testing the release of agents from chewing gum in vitro. This study presents a novel humanoid chewing robot capable of closely replicating the human chewing motion in a closed environment, incorporating artificial saliva and allowing measurement of xylitol release from the gum. The release of xylitol from commercially available chewing gum was quantified following both in vitro and in vivo mastication. The chewing robot demonstrated a similar release rate of xylitol as human participants. The greatest release of xylitol occurred during the first 5 minutes of chewing and after 20 minutes of chewing only a low amount of xylitol remained in the gum bolus, irrespective of the chewing method used. Saliva and artificial saliva solutions respectively were collected after 5, 10, 15 and 20 minutes of continuous chewing and the amount of xylitol released from the chewing gum determined. Bioengineering has been implemented as the key engineering strategy to create an artificial oral environment that closely mimics that found in vivo. These results demonstrate the chewing robot with built-in humanoid jaws could provide opportunities for pharmaceutical companies to investigate and refine drug release from gum, with reduced patient exposure and reduced costs using this novel methodology.

Upcoming events

CLAWAR 2020 — August 24–26, 2020 — [Virtual Conference]

ICUAS 2020 — September 1–4, 2020 — Athens, Greece

ICRES 2020 — September 28–29, 2020 — Taipei, Taiwan

IROS 2020 — October 25–29, 2020 — Las Vegas, Nevada

ICSR 2020 — November 14–16, 2020 — Golden, Colorado

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Reddit.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
