RT/ Bone growth inspired ‘microrobots’ that can create their own bone

Paradigm · Jan 25, 2022 · 28 min read

Robotics biweekly vol.43, 11th January — 25th January

TL;DR

  • Inspired by the growth of bones in the skeleton, researchers have developed a combination of materials that can morph into various shapes before hardening. The material is initially soft, but later hardens through a bone development process that uses the same materials found in the skeleton.
  • Scientists teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own.
  • A team of researchers from City University of Hong Kong, Dalian University of Technology, Tsinghua University and the University of Electronic Science and Technology of China has developed a flexible skin patch that can relay haptic feedback between a person and a robot, enabling teleoperation. They have published their results in Science Advances.
  • Researchers have pioneered a new fabrication technique that enables them to produce low-voltage, power-dense, high endurance soft actuators for an aerial microrobot. These artificial muscles vastly improve the robot’s payload and allow it to achieve best-in-class hovering performance.
  • Engineers at Stanford University, University of Washington and Cornell University recently developed a new framework that tries to achieve an optimal balance between the efficiency and comfort of robot-assisted feeding systems. Their approach, introduced in a paper pre-published on arXiv, is based on a computational method known as ‘heuristics-guided bi-directional rapidly-exploring random trees’ (h-BiRRT).
  • Researchers at New York University have recently developed VINN, an alternative imitation learning framework that does not necessarily require large training datasets. This new approach, presented in a paper pre-published on arXiv, works by decoupling two different aspects of imitation learning, namely learning a task’s visual representations and the associated actions.
  • Intelligent packaging with sensors that monitor goods, such as vegetables, on long transport routes is a trend for the future. Yet printed and disposable electronics also cause problems: Metals in printing inks are expensive — and disposing of them in an environmentally sound manner is costly and exacerbates the problem of electronic waste.
  • Researchers at the Indian Institute of Technology Bhubaneswar, in collaboration with TCS Research and Wageningen University, recently devised a new strategy that could improve coordination among different robots tackling complex missions as a team. This strategy, introduced in a paper pre-published on arXiv, is based on a split-architecture that addresses communication and computations separately, while periodically coordinating the two to achieve optimal results.
  • A novel quadcopter capable of changing shape midflight is presented, allowing for operation in four configurations with the capability of sustained hover in three.
  • The MRV is SpaceLogistics’ next-generation on-orbit servicing vehicle, incorporating a robotic arm payload developed and integrated by the U.S. Naval Research Laboratory and provided by the U.S. Defense Advanced Research Projects Agency. In this test of Flight Robotic Arm System 1, the robotic arm is executing an exercise called the Gauntlet, which moves the arm through a series of poses that exercise the full motion of all seven degrees of freedom.
  • Yaqing Wang from JHU’s Terradynamics Lab gives a talk on trying to make a robot that is anywhere near as talented as a cockroach.
  • Check out robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
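As a quick sanity check on those figures, compound annual growth is just repeated multiplication. A minimal sketch (the 2018 base figure below is implied by the article's numbers, not taken from the chart):

```python
def cagr_project(start_value: float, rate: float, years: int) -> float:
    """Project a value forward under compound annual growth."""
    return start_value * (1.0 + rate) ** years

def implied_base(end_value: float, rate: float, years: int) -> float:
    """Invert the projection: the base that grows to end_value at this CAGR."""
    return end_value / (1.0 + rate) ** years

# ~26% CAGR reaching ~$210B by 2025 over the 2018-2025 span (7 years)
# implies a 2018 base of roughly $42B.
base_2018 = implied_base(210.0, 0.26, 7)
```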

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

Biohybrid Variable‐Stiffness Soft Actuators that Self‐Create Bone

by Danfeng Cao, Jose G. Martinez, Emilio Satoshi Hara, Edwin W. H. Jager in Advanced Materials

Inspired by the growth of bones in the skeleton, researchers at the universities of Linköping in Sweden and Okayama in Japan have developed a combination of materials that can morph into various shapes before hardening. The material is initially soft, but later hardens through a bone development process that uses the same materials found in the skeleton.

When we are born, we have gaps in our skulls that are covered by pieces of soft connective tissue called fontanelles. It is thanks to fontanelles that our skulls can be deformed during birth and pass successfully through the birth canal. Post-birth, the fontanelle tissue gradually changes to hard bone. Now, researchers have combined materials which together resemble this natural process.

Actuation principle of PPy-Alg and PPy-Alg-PMNF actuators, Alg crosslinking, PMNF mineralization principle, and fabrication of PPy-Alg and PPy-Alg-PMNF actuators. a) Schematic illustration of the electrochemical behavior of PPy-Alg actuator without (1) and with (2) PMNFs. After incubation in mineralizing medium (DMEM), PMNFs induced the formation of calcium phosphate minerals (amorphous calcium phosphate, ACP; hydroxyapatite, HAp), which promoted the stiffening of the Alg gel layer and consequent attenuation of the actuator’s mobility. b) Schematic illustration describing the possible crosslinking of the Alg gel (black) without (1) and with (2) PMNFs (blue). Schematic illustration of PMNF mineralization with the formation of ACP and HAp after 3 and 7 days of incubation in DMEM, respectively (3). c) Schematic illustration describing the fabrication procedure of the unpatterned PPy-Alg actuator. 1) Formation of a mold using insulating tape onto an Au/Si substrate. 2) Deposition of sodium Alg solution (with and without PMNFs) and gelification in CaCl2 solution. 3) Electrosynthesis of PPy layer inside the Alg gel. 4) Peeling of PPy-Alg actuator. d) The fabrication process of the patterned PPy-Alg actuators. 1) Drop-casting of sodium Alg gel with iron (III) (Fe3+). 2) Photolithographic patterning of the Alg gel using UV light. 3) Development of the pattern in the Alg gel with Fe3+. 4) Ion exchange of the Fe3+ using CaCl2 solution under UV light. 5) Crosslinking of the patterned Alg gel by Ca2+. 6) Electrosynthesis of PPy layer inside the Alg gel. e) Illustration of the unpatterned and patterned PPy-Alg actuators.

“We want to use this for applications where materials need to have different properties at different points in time. Firstly, the material is soft and flexible, and it is then locked into place when it hardens. This material could be used in, for example, complicated bone fractures. It could also be used in microrobots — these soft microrobots could be injected into the body through a thin syringe, and then they would unfold and develop their own rigid bones,” says Edwin Jager, associate professor at the Department of Physics, Chemistry and Biology (IFM) at Linköping University.

The idea was hatched during a research visit in Japan, when materials scientist Edwin Jager met Hiroshi Kamioka and Emilio Hara, who conduct research into bones. The Japanese researchers had discovered a kind of biomolecule that could stimulate bone growth within a short period of time. Would it be possible to combine this biomolecule with Jager’s materials research to develop new materials with variable stiffness?

In the study that followed, published in Advanced Materials, the researchers constructed a kind of simple “microrobot,” one which can assume different shapes and change stiffness. The researchers began with a gel material called alginate. On one side of the gel, a polymer material is grown. This material is electroactive, and it changes its volume when a low voltage is applied, causing the microrobot to bend in a specified direction. On the other side of the gel, the researchers attached biomolecules that allow the soft gel material to harden. These biomolecules are extracted from the cell membrane of a kind of cell that is important for bone development. When the material is immersed in a cell culture medium — an environment that resembles the body and contains calcium and phosphorus — the biomolecules make the gel mineralise and harden like bone.

One potential application of interest to the researchers is bone healing. The idea is that the soft material, powered by the electroactive polymer, will be able to manoeuvre itself into spaces in complicated bone fractures and expand. When the material has then hardened, it can form the foundation for the construction of new bone. In their study, the researchers demonstrate that the material can wrap itself around chicken bones, and the artificial bone that subsequently develops grows together with the chicken bone.

By making patterns in the gel, the researchers can determine how the simple microrobot will bend when voltage is applied. Perpendicular lines on the surface of the material make the robot bend in a semicircle, while diagonal lines make it bend like a corkscrew.

“By controlling how the material turns, we can make the microrobot move in different ways, and also affect how the material unfurls in broken bones. We can embed these movements into the material’s structure, making complex programmes for steering these robots unnecessary,” says Edwin Jager.

In order to learn more about the biocompatibility of this combination of materials, the researchers are now looking further into how its properties work together with living cells.

Versatile carbon-loaded shellac ink for disposable printed electronics

by Alexandre Poulin, Xavier Aeby, Gilberto Siqueira, Gustav Nyström in Scientific Reports

More precise, faster, cheaper: researchers all over the world have been working for years on producing electrical circuits using additive processes such as robotic 3D printing (so-called robocasting), with great success. But that success is now becoming a problem: the metal particles that make such “inks” electrically conductive are exacerbating the problem of electronic waste. The amount of waste is likely to grow, too, given new types of disposable sensors, some of which are used for only a few days.

Unnecessary waste, thinks Gustav Nyström, head of Empa’s Cellulose & Wood Materials lab: “There is an urgent need for materials that balance electronic performance, cost and sustainability.”

To develop an environmentally friendly ink, Nyström’s team set ambitious goals: metal-free, non-toxic and biodegradable. And, with practical applications in mind, easily formable and stable to moisture and moderate heat.

(a) SEM micrographs of the graphite flakes that confer electrical conductivity to the composite ink. (b) SEM micrographs of the carbon black particles that ensure good electrical contact between the graphite flakes, as well as provide shear thinning gel properties to the ink. It can be seen from the higher magnification micrographs that large particles visible at lower magnification are in fact aggregates of nanosized carbon particles. (c) Illustration showing the different ink constituents, their distribution, and the creation of an electrical percolation network as solvent evaporates. (d) Chart presenting the range of working ink formulation as a function of the conductive particles/binder and graphite/carbon black ratios. The star identifies our optimal formulation. The need for structural integrity (i.e. no crack formation during the drying stage), shear thinning gel rheology, and electrical percolation network are the main limiting parameters.

The researchers chose inexpensive carbon as the conductive material, as they recently reported in the journal Scientific Reports. More precisely: elongated graphite platelets mixed with tiny soot particles that establish electrical contact between these platelets — all this in a matrix made of a well-known biomaterial: shellac, which is obtained from the excretions of scale insects. In the past, it was used to make records; today it is used, among other things, as a varnish for wooden instruments and fingernails. Its advantages correspond exactly to the researchers’ desired profile. And on top of that, it is soluble in alcohol — an inexpensive solvent that evaporates after the ink is applied so that it dries.

Despite these ingredients, the task proved challenging. That’s because whether used in simple screen printing or with modern 3D printers, the ink must exhibit shear thinning behavior: At “rest,” the ink is rather viscous. But at the moment of printing, when it is subjected to a lateral shear force, it becomes somewhat more fluid — just like a non-drip wall paint that only acquires a softer consistency when applied by the force of the roller. When used in additive manufacturing such as 3D printing with a robotic arm, however, this is particularly tricky: An ink that is too viscous would be too tough — but if it becomes too liquid during printing, the solid components could separate and clog the printer’s tiny nozzle.
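The shear-thinning requirement can be made concrete with the standard Ostwald–de Waele power-law model, in which apparent viscosity falls as shear rate rises. A minimal sketch (the K and n values are illustrative placeholders, not measurements from the paper):

```python
def apparent_viscosity(shear_rate: float, K: float = 10.0, n: float = 0.4) -> float:
    """Ostwald-de Waele power-law fluid: eta = K * gamma_dot**(n - 1).
    n < 1 means shear-thinning: viscosity drops as shear rate rises."""
    return K * shear_rate ** (n - 1.0)

at_rest  = apparent_viscosity(0.1)    # low shear: thick, holds its shape
printing = apparent_viscosity(100.0)  # nozzle shear: thin enough to flow
```

With n well below 1, the same ink sits like a gel at rest yet flows readily under the shear of a screen-printing squeegee or a robocasting nozzle.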

To meet the requirements, the researchers tinkered intensively with the formulation for their ink. They tested two sizes of graphite platelets: 40 micrometers and 7 to 10 micrometers in length. Many variations were also needed in the mixing ratio of graphite and carbon black, because too much carbon black makes the material brittle — with the risk of cracking as the ink dries. By optimizing the formulation and the relative composition of the components, the team was able to develop several variants of the ink that can be used in different 2D and 3D printing processes.

“The biggest challenge was to achieve high electrical conductivity,” says Xavier Aeby, one of the researchers involved, “and at the same time form a gel-like network of carbon, graphite and shellac.” The team investigated how this material behaves in practice in several steps. For example, with a tiny test cuboid: 15 superimposed grids from the 3D printer — made of fine strands just 0.4 millimeters in diameter. This showed that the ink was also sufficient for demanding processes such as robocasting.

To prove its suitability for real components, the researchers constructed, among other things, a sensor for deformations: a thin PET strip with an ink structure printed on it, whose electrical resistance changed precisely with varying degrees of bending. In addition, tests for tensile strength, stability under water and other properties showed promising results — and so the research team is confident that the new material, which has already been patented, could prove itself in practice.

“We hope that this ink system can be used for applications in sustainable printed electronics,” says Gustav Nyström, “for example, for conductive tracks and sensor elements in smart packaging and biomedical devices or in the field of food and environmental sensing.”

Customizing skills for assistive robotic manipulators, an inverse reinforcement learning approach with error-related potentials

by Iason Batzianoulis, Fumiaki Iwane, Shupeng Wei, Carolina Gaspar Pinto Ramos Correia, Ricardo Chavarriaga, José del R. Millán, Aude Billard in Communications Biology

Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own.

Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out some tasks on their own.

“People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts.

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path.

“In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard.

The engineers began by improving the robot’s mechanism for avoiding obstacles so that it would be more precise.

“At first, the robot would choose a path that was too wide for some obstacles, taking it too far away, and not wide enough for others, keeping it too close,” says Carolina Gaspar Pinto Ramos Correia, a PhD student at Prof. Billard’s lab. “Since the goal of our robot was to help paralyzed patients, we had to find a way for users to be able to communicate with it that didn’t require speaking or moving.”

This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity. To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong — but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object? To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct. The process goes pretty quickly — only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes.
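The trial-and-error loop can be caricatured in a few lines; the EEG decoder and inverse-reinforcement-learning machinery are replaced here by a toy stand-in, and the comfort band and candidate clearances are invented for illustration:

```python
def user_error_signal(clearance: float, low: float = 0.08, high: float = 0.12) -> bool:
    """Toy stand-in for the ErrP decoder: True means the brain flagged an
    error. The user's true comfort band (low, high) is hidden from the robot."""
    return not (low <= clearance <= high)

def find_comfortable_clearance(candidates):
    """Try candidate obstacle clearances until no error signal fires."""
    for attempt, c in enumerate(candidates, start=1):
        if not user_error_signal(c):
            return c, attempt
    return None, len(candidates)

# Candidate clearances in metres, tried from widest to narrowest:
clearance, attempts = find_comfortable_clearance([0.30, 0.20, 0.15, 0.10, 0.05])
# 0.10 m is accepted on the 4th attempt, in line with the reported 3-5 tries.
```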

“The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.”

Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system — or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”

a The robot follows trajectories generated from a planar dynamical system. The workspace of the robot (i.e., the table) is modeled with a vector field and the robot’s trajectories are generated from the initial position. Therefore, the robot follows a specific vector to reach its target. b An illustration of our approach. The robot moves towards the cube autonomously, avoiding the glass with trajectories generated by a dynamical system. However, some trajectories (red dashed line) pass very close to the glass, creating a feeling of uncertainty in the user, as the robot may collide with the glass (i.e., obstacle). This error expectation elicits ErrPs in the brain activity of the user, and the output of the ErrP decoder is associated with the robot trajectories. The desired trajectories are computed with the use of IRL. c The experimental protocol of the first experiment. The robot moves from left to right and vice versa, performing obstacle avoidance. The dark dashed lines correspond to the random trajectories of the robot, some of which could result in collision with the obstacle. The subject can deflect the joystick right or left to direct the robot accordingly, or release the joystick to correct the motion. This protocol also corresponds to the calibration session of the second experiment. d The experimental protocol of the second experiment. The subject commands the robot to grasp the object and place it on one of the four target positions (dashed circles) by pushing the joystick left, right, back or forward. The crimson objects correspond to the different obstacles placed between the target positions. The green dashed line presents the target options for the user.

The researchers hope to eventually use their algorithm to control wheelchairs.

“For now there are still a lot of engineering hurdles to overcome,” says Prof. Billard. “And wheelchairs pose an entirely new set of challenges, since both the patient and the robot are in motion.”

The team also plans to use their algorithm with a robot that can read several different kinds of signals and coordinate data received from the brain with those from visual motor functions.

High Lift Micro‐Aerial‐Robot Powered by Low Voltage and Long Endurance Dielectric Elastomer Actuators

by Zhijian Ren, Suhan Kim, Xiang Ji, Weikun Zhu, Farnaz Niroui, Jing Kong, Yufeng Chen in Advanced Materials

When it comes to robots, bigger isn’t always better. Someday, a swarm of insect-sized robots might pollinate a field of crops or search for survivors amid the rubble of a collapsed building.

MIT researchers have demonstrated diminutive drones that can zip around with bug-like agility and resilience, which could eventually perform these tasks. The soft actuators that propel these microrobots are very durable, but they require much higher voltages than similarly sized rigid actuators. As a result, the featherweight robots cannot carry the power electronics that would allow them to fly on their own.

Now, these researchers have pioneered a fabrication technique that enables them to build soft actuators that operate with 75 percent lower voltage than current versions while carrying 80 percent more payload. These soft actuators are like artificial muscles that rapidly flap the robot’s wings.

This new fabrication technique produces artificial muscles with fewer defects, which dramatically extends the lifespan of the components and increases the robot’s performance and payload.

“This opens up a lot of opportunity in the future for us to transition to putting power electronics on the microrobot. People tend to think that soft robots are not as capable as rigid robots. We demonstrate that this robot, weighing less than a gram, flies for the longest time with the smallest error during a hovering flight. The take-home message is that soft robots can exceed the performance of rigid robots,” says Kevin Chen, who is the D. Reid Weedon, Jr. ’41 assistant professor in the Department of Electrical Engineering and Computer Science, the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper.

Chen’s coauthors include Zhijian Ren and Suhan Kim, co-lead authors and EECS graduate students; Xiang Ji, a research scientist in EECS; Weikun Zhu, a chemical engineering graduate student; Farnaz Niroui, an assistant professor in EECS; and Jing Kong, a professor in EECS and principal investigator in RLE. The research has been accepted for publication in Advanced Materials and is included in the journal’s Rising Stars series, which recognizes outstanding works from early-career researchers.

The rectangular microrobot, which weighs less than one-fourth of a penny, has four sets of wings that are each driven by a soft actuator. These muscle-like actuators are made from layers of elastomer that are sandwiched between two very thin electrodes and then rolled into a squishy cylinder. When voltage is applied to the actuator, the electrodes squeeze the elastomer, and that mechanical strain is used to flap the wing.

The more surface area the actuator has, the less voltage is required. So Chen and his team build these artificial muscles by stacking as many alternating ultrathin layers of elastomer and electrode as they can. But as the elastomer layers get thinner, they become more unstable.

For the first time, the researchers were able to create an actuator with 20 layers, each of which is 10 micrometers in thickness (about the diameter of a red blood cell). But they had to reinvent parts of the fabrication process to get there.
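The voltage saving from thinner layers follows from basic electrostatics: actuation pressure scales with the square of the electric field V/t, so a thinner dielectric reaches the same field at a lower voltage. A rough sketch (the relative permittivity is a generic placeholder, not the paper's material value):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage: float, thickness_m: float, eps_r: float = 3.0) -> float:
    """Electrostatic (Maxwell) pressure on a dielectric elastomer layer:
    p = eps0 * eps_r * (V / t)^2."""
    field = voltage / thickness_m
    return EPS0 * eps_r * field ** 2

# Same 500 V across a 10 um layer vs a 40 um layer: the thinner
# layer sees 4x the field, hence 16x the actuation pressure.
p_thin = maxwell_pressure(500.0, 10e-6)
p_thick = maxwell_pressure(500.0, 40e-6)
```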

One major roadblock came from the spin coating process. During spin coating, an elastomer is poured onto a flat surface and rapidly rotated, and the centrifugal force pulls the film outward to make it thinner.

“In this process, air comes back into the elastomer and creates a lot of microscopic air bubbles. The diameter of these air bubbles is barely 1 micrometer, so previously we just sort of ignored them. But when you get thinner and thinner layers, the effect of the air bubbles becomes stronger and stronger. That is traditionally why people haven’t been able to make these very thin layers,” Chen explains.

He and his collaborators found that performing a vacuuming step immediately after spin coating, while the elastomer is still wet, removes the air bubbles. Then they bake the elastomer to dry it.

Removing these defects increases the power output of the actuator by more than 300 percent and significantly improves its lifespan, Chen says.

The researchers also optimized the thin electrodes, which are composed of carbon nanotubes, super-strong rolls of carbon about 1/50,000 the diameter of a human hair. Higher concentrations of carbon nanotubes increase the actuator’s power output and reduce voltage, but dense layers also contain more defects.

For instance, the carbon nanotubes have sharp ends and can pierce the elastomer, which causes the device to short out, Chen explains. After much trial and error, the researchers found the optimal concentration.

Another problem comes from the curing stage — as more layers are added, the actuator takes longer and longer to dry.

“The first time I asked my student to make a multilayer actuator, once he got to 12 layers, he had to wait two days for it to cure. That is totally not sustainable, especially if you want to scale up to more layers,” Chen says.

They found that baking each layer for a few minutes immediately after the carbon nanotubes are transferred to the elastomer cuts down the curing time as more layers are added.

After using this technique to create a 20-layer artificial muscle, they tested it against their previous six-layer version and state-of-the-art, rigid actuators.

During liftoff experiments, the 20-layer actuator, which requires less than 500 volts to operate, exerted enough power to give the robot a lift-to-weight ratio of 3.7 to 1, so it could carry items that are nearly three times its weight.
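The payload claim is simple arithmetic on the lift-to-weight ratio: everything beyond supporting the robot's own weight is available to carry. A one-line check (the 0.6 g mass is illustrative; the article only says the robot is sub-gram):

```python
def max_payload(robot_weight: float, lift_to_weight: float) -> float:
    """Payload implied by a lift-to-weight ratio: lift = ratio * weight,
    so the surplus (ratio - 1) * weight can be carried."""
    return (lift_to_weight - 1.0) * robot_weight

payload_g = max_payload(0.6, 3.7)  # 2.7x its own weight, ~1.62 g here
```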

They also demonstrated a 20-second hovering flight, which Chen says is the longest ever recorded by a sub-gram robot. Their hovering robot held its position more stably than any of the others. The 20-layer actuator was still working smoothly after being driven for more than 2 million cycles, far outpacing the lifespan of other actuators.

“Two years ago, we created the most power-dense actuator and it could barely fly. We started to wonder, can soft robots ever compete with rigid robots? We observed one defect after another, so we kept working and we solved one fabrication problem after another, and now the soft actuator’s performance is catching up. They are even a little bit better than the state-of-the-art rigid ones. And there are still a number of fabrication processes in material science that we don’t understand. So, I am very excited to continue to reduce actuation voltage,” he says.

Electronic skin as wireless human-machine interfaces for robotic VR

by Yiming Liu et al in Science Advances

A team of researchers from City University of Hong Kong, Dalian University of Technology, Tsinghua University and the University of Electronic Science and Technology of China has developed a flexible skin patch that can relay haptic feedback between a person and a robot, enabling teleoperation. They have published their results in Science Advances.

Engineers have been developing robots that can be controlled remotely by a human operator, but, as the researchers note, most such systems are bulky and difficult to control. They also generally provide little feedback other than a video stream. In this new effort, the researchers in China sought to develop a more user-friendly system. To that end, they created what they call an electronic skin: a flexible patch, applied to the skin of a human operator, that captures both movement and stress factors such as twisting and turning.

Design and architecture of the epidermal CL-HMI system. (A) A schematic illustration of the concept of robotic VR, where a nurse is wearing the CL-HMI to teleoperate a robot for reading a thermometer with virtual and haptic feedback. (B) Exploded-view schematic illustration of the CL-HMI connecting with seven bending sensors (BSs) and five actuators. (C) Circuit diagram of the CL-HMI system. (D) Design of the flexible circuits in the CL-HMI. (E) Optical images and schematic illustration of the circuit design with enlarged views inset. (F and G) Finite element modeling (F) and optical images (G) of the CL-HMI in twisted, stretched, and bent configurations. Photo credit: Yiming Liu, Department of Biomedical Engineering, City University of Hong Kong.

The patch has sensors for capturing movement, wireless transmitters to send the data it collects, and small, vibrating magnets that provide haptic feedback. Groups of patches are placed on the skin of an operator at important junctures, such as the fold on the front of the arm over the elbow. Some of the sensors in the patch consist of wires placed in a zigzag fashion, which are pulled straighter as the patch is bent, providing information about body movement — bending an arm at the elbow, for example, or releasing it.
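Such zigzag traces behave like ordinary strain gauges: stretching straightens the zigzag and raises resistance roughly in proportion to strain. A minimal sketch (the gauge factor is a generic textbook value, not the paper's):

```python
def strain_from_resistance(r_measured: float, r_rest: float,
                           gauge_factor: float = 2.0) -> float:
    """Invert the strain-gauge relation dR/R = GF * strain to recover
    strain from a resistance reading."""
    return (r_measured - r_rest) / (r_rest * gauge_factor)

# A trace reading 100 ohm at rest that rises to 101 ohm when the
# elbow bends corresponds to 0.5% strain.
strain = strain_from_resistance(101.0, 100.0)
```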

All of the combined data from the patches allow an operator to control a remote robot without having to wear clumsy gear. But there is more to the system: The patches are also applied to parts of the robot to allow the operator to receive feedback. Putting patches on the robot’s fingertips, for example, would allow the operator to feel the hardness of an object held by the robot, courtesy of the tiny vibrating magnets.

Balancing efficiency and comfort in robot-assisted bite transfer

by Suneel Belkhale et al in arXiv:2111.11401v1 [cs.RO]

Researchers at Stanford University, University of Washington and Cornell University recently developed a new framework that tries to achieve an optimal balance between the efficiency and comfort of robot-assisted feeding systems. Their approach, introduced in a paper pre-published on arXiv, is based on a computational method known as ‘heuristics-guided bi-directional rapidly-exploring random trees’ (h-BiRRT).

Robots could be invaluable allies for older adults and people with physical disabilities, assisting them in their day-to-day lives and reducing their reliance on human carers. One type of robotic system that could be particularly helpful is the assisted-feeding, or bite-transfer, robot, which is designed to pick up food from a plate and feed people who are unable to move their arms or coordinate their movements.

While many research teams worldwide have tried to develop robot-assisted feeding systems, most existing solutions do not consider how comfortable a user will feel when receiving a bite of food from the robot. In other words, these systems can be efficient at grasping and transferring foods of different shapes and sizes, but they do not consider how the bite will be received by users, for instance whether the robot will inadvertently poke the user’s face or mouth with the fork while delivering the bite.

The team’s method finds feasible bite transfer trajectories in simulation. Given the food geometry and its pose on the fork, it samples at least N candidate goal food poses, which are checked for collisions with the mouth geometry using a learned constraint model. It then clusters the goal poses and uses heuristic-guided BiRRT to reach the cluster centroids under comfort (blue) and bite-volume-efficiency (orange) heuristics.
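The sampling-and-clustering stage of that pipeline can be sketched as follows. The single-angle pose representation, the 60-degree collision threshold, and the chunk-based clustering are invented stand-ins for the authors' learned constraint model and actual pose clustering:

```python
import random

# Sketch: sample candidate goal poses, reject those that would collide with
# the mouth, and cluster the survivors so each centroid can seed a BiRRT
# query. All thresholds and representations here are illustrative.

def mouth_collision(pitch_deg):
    # Hypothetical constraint: poses pitched past 60 degrees would
    # poke the mouth geometry.
    return abs(pitch_deg) > 60.0

def sample_goal_pitches(n, seed=0):
    rng = random.Random(seed)
    candidates = (rng.uniform(-90.0, 90.0) for _ in range(n))
    return [p for p in candidates if not mouth_collision(p)]

def cluster_centroids(values, k=3):
    # Crude 1-D clustering: sort the samples and average equal-size chunks.
    values = sorted(values)
    chunk = max(1, len(values) // k)
    groups = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    return [sum(g) / len(g) for g in groups[:k]]

goals = sample_goal_pitches(50)
centroids = cluster_centroids(goals)
assert all(not mouth_collision(c) for c in centroids)
```

Each centroid would then become one goal for the bidirectional RRT search, rather than planning to every sampled pose individually.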

“A lot of our previous work in this space focused on the problem of just picking up food off a plate,” Ethan K. Gordon, one of the researchers who carried out the study, told TechXplore. “Basically, the robot would bring the food close to the mouth and call it a day. However, in both formal and informal demos, new users would almost always express discomfort with the approach. It’s a fork, a sharp utensil, so the discomfort is understandable.”

Spatial comfort cost (red higher, green lower). The steeper cost gradient in the upward direction than downward ensures trajectories near the face have high “comfort” cost. Credit: Belkhale et al.

Building on their previous studies, Gordon and his colleagues set out to explore whether they could improve the comfort of robotic feeding systems. The overall objective of their recent paper was to better understand the feeling of discomfort reported by users doing trials and find a way to mitigate it.

Their approach works by identifying promising bite transfer trajectories in simulations. Concurrently, it also considers the geometry of the food and the pose of the fork, to ensure that it minimizes collisions with a user’s mouth.

“Our approach considers comfort directly,” Gordon explained. “Balancing it with ‘efficiency’ (i.e., how much of the food the user is able to theoretically take off the fork), we add it as an explicit cost heuristic to our motion planner.”
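One way to picture such a heuristic is as a weighted sum of a comfort term (penalizing proximity to the face) and an efficiency term (rewarding how much food the user can take off the fork). The weights, cost shapes, and 0-to-1 scaling below are assumptions for illustration, not the paper's exact formulation:

```python
import math

# Sketch of comfort as an explicit cost heuristic balanced against bite
# efficiency in a planner's node scoring. Constants are invented.

def comfort_cost(dist_to_face_m):
    # Higher cost the closer the fork tip passes to the face.
    return math.exp(-5.0 * dist_to_face_m)

def efficiency_cost(bite_volume_fraction):
    # Lower cost when the user can take more of the food off the fork.
    return 1.0 - bite_volume_fraction

def heuristic(dist_to_face_m, bite_volume_fraction, w_comfort=0.6, w_eff=0.4):
    return (w_comfort * comfort_cost(dist_to_face_m)
            + w_eff * efficiency_cost(bite_volume_fraction))

# A trajectory that stays 20 cm from the face while delivering 90% of the
# bite scores lower (better) than one grazing the face with a full bite.
far = heuristic(0.20, 0.9)
near = heuristic(0.01, 1.0)
assert far < near
```

Tuning the two weights is exactly the comfort-versus-efficiency trade-off the authors set out to balance.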

The team evaluated their new robot-assisted bite transfer framework in a series of real-world evaluations, using a Franka Emika Panda robotic arm. This system consists of a fork attached to an ATI Mini45 6-axis F/T sensor, via a 3D printed mount. The robot also integrates an external Intel Realsense RGB-D camera.

“Remarkably, we found that users significantly preferred the transfers produced by our approach to those produced by a baseline method,” Gordon said. “Our findings imply that a well-designed heuristic can go a long way towards making HRI systems more comfortable for the human collaborators with relatively little additional complexity on the robot side.”

In the future, the new approach could enhance the comfort of automated feeding systems, facilitating their deployment in healthcare facilities and other real-world settings. Meanwhile, Gordon and his colleagues plan to develop their framework further, for instance by making the robot more responsive to a user’s movements during the bite transfer itself.

“For example, we plan to focus on questions like: how should the robot adjust the trajectory if the user leans in to grab the food?” Gordon added. “Also, our recent work focused primarily on carrots and other analogous, hard, cylindrical foods. We definitely need to design a system that is able to handle foods of all different shapes and viscoelasticities.”

The surprising effectiveness of representation learning for visual imitation

by Jyothish Pari, Nur Muhammad Shafiullah, Sridhar Pandian Arunachalam, Lerrel Pinto in arXiv:2112.01511v2 [cs.RO]

Researchers at New York University have recently developed VINN, an alternative imitation learning framework that does not necessarily require large training datasets. This new approach, presented in a paper pre-published on arXiv, works by decoupling two different aspects of imitation learning, namely learning a task’s visual representations and the associated actions.

Over the past few decades, computer scientists have been trying to train robots to tackle a variety of tasks, including house chores and manufacturing processes. One of the most renowned strategies used to train robots on manual tasks is imitation learning.

As suggested by its name, imitation learning entails teaching a robot how to do something using human demonstrations. While in some studies this training strategy achieved very promising results, it often requires large and annotated datasets containing hundreds of videos where humans complete a given task.

Figure showing the two ‘halves’ of the researchers’ method, with representation learning on the left and behavior imitation through nearest neighbors on the right. Credit: Pari et al.

“I was interested in seeing how we can simplify imitation learning,” Jyo Pari, one of the researchers who carried out the study, told TechXplore. “Imitation learning requires two fundamental components; one is learning what is relevant in your scene and the other is how you can take the relevant features to perform a task. We wanted to decouple these components, which are traditionally coupled into one system, and understand the role and importance of each of them.”

Most existing imitation learning methods combine representation and behavior learning into a single system. The new technique created by Pari and his colleagues, on the other hand, focuses on representation learning, the process through which AI agents and robots learn to identify task-relevant features in a scene.

“We employed existing methods in self-supervised representation learning, which is a popular area in the vision community,” Pari explained. “These methods can take a collection of images with no labels and extract the relevant features. Applying these methods to imitation is effective because we can identify which image in the demonstration dataset is most similar to what the robot currently sees through a simple nearest neighbor search on the representations. Therefore, we can just make the robot copy the actions from similar demonstration images.”
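The nearest-neighbor behavior step can be sketched with toy embeddings. In VINN the embeddings would come from a self-supervised vision encoder; the 2-D vectors, scalar actions, and inverse-distance weighting below are simplifications for illustration:

```python
import math

# Sketch of nearest-neighbor action selection: given demonstration
# (embedding, action) pairs, act by distance-weighting the actions of the
# k closest demonstration frames to the current observation's embedding.

def vinn_action(query, demos, k=2):
    """demos: list of (embedding, action) pairs; returns a weighted action."""
    nearest = sorted(
        ((math.dist(query, emb), act) for emb, act in demos),
        key=lambda t: t[0],
    )[:k]
    # Inverse-distance weighting over the k nearest demonstrations.
    weights = [1.0 / (d + 1e-6) for d, _ in nearest]
    total = sum(weights)
    return sum(w * a for w, (_, a) in zip(weights, nearest)) / total

demos = [((0.0, 0.0), 1.0), ((1.0, 0.0), 2.0), ((5.0, 5.0), 9.0)]
print(vinn_action((0.1, 0.0), demos, k=2))  # ~1.1, dominated by nearest demo
```

The appeal is that no policy network is trained at all: adding a new demonstration just means appending one more (embedding, action) pair to the dataset.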

Using the new imitation learning strategy they developed, Pari and his colleagues were able to enhance the performance of visual imitation models in simulated environments. They also tested their approach on a real robot, efficiently teaching it how to open a door by looking at similar demonstration images.

“I feel that our work is a foundation for future works that can utilize representation learning to enhance imitation learning models,” Pari said. “However, even if our methods were able to conduct a simple nearest neighbor task, they still have some drawbacks.”

In the future, the new framework could help to simplify imitation learning processes in robotics, facilitating their large-scale implementation. So far, Pari and his colleagues only used their strategy to train robots on simple tasks. In their next studies, they thus plan to explore possible strategies that would allow them to implement it on more complex tasks.

“Figuring out how to utilize the nearest neighbor’s robustness on more complex tasks with the capacity of parametric models is an interesting direction,” Pari added. “We are currently working on scaling up VINN to be able to not only do one task but multiple different ones.”

Concurrent transmission for multi-robot coordination

by Sourabha Bharadwaj, Karunakar Gonabattula, Sudipta Saha, Chayan Sarkar, Rekha Raja in arXiv:2112.00273

Researchers at the Indian Institute of Technology Bhubaneswar, in collaboration with TCS Research and Wageningen University, recently devised a new strategy that could improve coordination among different robots tackling complex missions as a team. This strategy, introduced in a paper pre-published on arXiv, is based on a split-architecture that addresses communication and computations separately, while periodically coordinating the two to achieve optimal results.

The researchers’ paper was recently presented at the IEEE RoboCom 2022 conference, held in conjunction with IEEE CCNC 2022, a top tier conference in the field of networking and distributed computing. At IEEE RoboCom 2022, it received the Best Paper Award.

“Swarm-robotics is on the path to becoming a key tool for human civilization,” Dr. Sudipta Saha, the lead researcher of the team that carried out the study, told TechXplore. “For instance, in medical science, it will be necessary to use numerous nano-bots to boost immune-therapy, targeted and effective drug transfer, etc.; while in the army it will be necessary for exploring unknown terrains that are hard for humans to enter, enabling agile supervision of borders and similar activities. In construction, it can enable technologies such as large-scale 3D printing and in agriculture it can help to monitor crop health and intervene to improve yields.”

Regardless of the context in which they are implemented, to perform well multi-robot teams need to be based on efficient communication and coordination systems. Conventional communication systems, however, force robots or devices to compete for a chance to share information with other systems. This process can waste significant time and result in high power consumption.

“For activities such as large-scale 3D printing, agriculture monitoring, etc., such losses are tolerable, but for time critical jobs such as nano-robotic drug delivery, firefighting or military activities, it would be too costly to compromise on the communication layer’s sub-optimal performance,” Dr. Saha said. “It’s always desirable to have a solution that enables swarms of robots to also carry out time-critical and serious jobs where precision cannot be sacrificed, due to unnecessary packet collisions and energy misuse.”

Credit: Bharadwaj et al.

To overcome the limitations of existing communication systems, Dr. Saha and his team created an entirely new paradigm that is based on an approach called ‘concurrent transmission’. Instead of putting devices or robots in competition, this approach allows them to cooperate to achieve a common goal.

“Applying concurrent transmission to a generic multi-robot platform is not a straightforward task,” Dr. Saha said. “In the Decentralized and Smart Systems Research Group (DSSRG) we are working on various aspects of concurrent-transmission based communication and its application in various contexts. Our expertise in this field helped us to come up with a novel split-architecture based solution for easy and fruitful use of concurrent-transmission for heterogeneous multi-robot systems.”

Concurrent transmission techniques have several advantages over conventional communication strategies. Most notably, they allow devices to share data with each other more rapidly and efficiently.

Despite these advantages, concurrent transmission can be difficult to implement on generic hardware. So far, it has thus primarily achieved good results when applied to specific, sophisticated hardware.

“Swarm robotics and multi-robot systems also have their own requirements, including control-system, AI/ML and other computation intensive tasks,” Dr. Saha said. “To bridge concurrent transmission with multi-robot systems, we proposed a split architecture where the communication and computations are done in two different hardware units that communicate with each other through a loosely coupled serial line communication. This way, we manage to get the benefit of both the domains at the same time.”
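The split architecture can be pictured as two independent units that exchange small messages over a loosely coupled link. The sketch below uses threads and queues in place of the two hardware units and the serial line, and the message names are invented:

```python
import queue
import threading

# Toy sketch of the split architecture: a "communication unit" and a
# "computation unit" run independently and exchange small messages over a
# loosely coupled link (queues stand in for the serial line).

serial_to_compute = queue.Queue()
serial_to_comms = queue.Queue()
results = []

def comms_unit():
    # Would run the concurrent-transmission protocol; here it just forwards
    # one synchronized pose update to the compute side and waits for an ack.
    serial_to_compute.put({"type": "pose_update", "robot": 2, "xy": (1.0, 0.5)})
    results.append(serial_to_comms.get(timeout=1))

def compute_unit():
    msg = serial_to_compute.get(timeout=1)
    # Control/AI computation would happen here; acknowledge over the link.
    serial_to_comms.put({"type": "ack", "seen": msg["robot"]})

t1 = threading.Thread(target=comms_unit)
t2 = threading.Thread(target=compute_unit)
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # [{'type': 'ack', 'seen': 2}]
```

The point of the loose coupling is that neither side blocks the other's timing: the radio side can keep its tight concurrent-transmission schedule while the compute side runs heavier control loops at its own pace.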

One of the experimental setups employed by the researchers. Credit: Bharadwaj et al.

It is not uncommon for roboticists to combine different types of hardware units into one, as Dr. Saha and his team did in their recent study. By coherently merging two distinct hardware systems into a collective architecture, they were able to attain very promising results, enhancing cooperation and communications among multiple robots.

The team specifically tested their communication system on a group of five two-wheeled robots. In their evaluations, they found that their system allowed the robots to coordinate efficiently while assembling into different formations, moving dynamically and at matched speeds.

“One of the key achievements of our work is a seamless millisecond level time-synchronization among heterogeneous hardware units, in a purely decentralized manner and without exploiting any internet-connectivity or GPS,” Dr. Saha said. “Also, in this initial work, the use of a concurrent-transmission based communication framework with our split architecture-based strategy enabled us to achieve centimeter level precision among the robots, which indicates its value for executing time-critical and delicate missions using swarms of robots.”
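Decentralized clock agreement of this kind is often modeled as a consensus process: each node repeatedly nudges its clock toward the clocks it hears from others, with no GPS or internet reference. The averaging rule, rates, and fully connected topology below are a textbook-style toy, not the paper's protocol:

```python
# Toy consensus sketch of decentralized clock synchronization: every node
# moves its clock partway toward the group average each round. With repeated
# rounds, the spread between clocks shrinks geometrically.

def sync_step(clocks, alpha=0.5):
    avg = sum(clocks) / len(clocks)
    return [c + alpha * (avg - c) for c in clocks]

clocks = [0.0, 12.0, -7.0, 3.0]  # initial offsets in milliseconds
for _ in range(20):
    clocks = sync_step(clocks)

spread = max(clocks) - min(clocks)
assert spread < 0.001  # nodes agree to sub-millisecond precision
```

Real protocols must also handle packet loss and clock drift between rounds, which is where the tight scheduling of concurrent transmission helps.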

In the future, the new concurrent transmission-based strategy created by this team of researchers could help to enhance cooperation between multiple robots in a flock during complex or time-sensitive missions. This includes, for instance, search and rescue efforts, military operations and surgical procedures.

“After our initial success, we are now going to carry out more rigorous and thorough studies especially on the interaction between the control-system aspects and the concurrent-transmission based communication mechanism,” Dr. Saha added. “We also plan to apply the mechanism to a large swarm of drones and ground vehicles and assess its capabilities.”

Videos

  • A novel quadcopter capable of changing shape midflight is presented, allowing for operation in four configurations with the capability of sustained hover in three.
  • The MRV is SpaceLogistics’ next-generation on-orbit servicing vehicle, incorporating a robotic arm payload developed and integrated by the U.S. Naval Research Laboratory and provided by the U.S. Defense Advanced Research Projects Agency. In this test of Flight Robotic Arm System 1, the robotic arm is executing an exercise called the Gauntlet, which moves the arm through a series of poses that exercise the full motion of all seven degrees of freedom.
  • Cassie Blue navigates around furniture used as obstacles in the Ford Robotics Building at the University of Michigan. All the clips in this video are magnified 1x on purpose to show Cassie’s motion.
  • Tapomayukh Bhattacharjee received a National Science Foundation (NSF) National Robotics Initiative (NRI) collaborative grant for a project that aims to give people with mobility issues improved control and independence over their environments, especially in how they are fed, or better, how they can feed themselves with robotic assistance.
  • Yaqing Wang from JHU’s Terradynamics Lab gives a talk on trying to make a robot that is anywhere near as talented as a cockroach.

Upcoming events

ICRA 2022: 23–27 MAY 2022, PHILADELPHIA

ERF 2022: 28–30 JUNE 2022, ROTTERDAM, NETHERLANDS

CLAWAR 2022: 12–14 SEPTEMBER 2022, AÇORES, PORTUGAL

MISC

  • An initial series of test flights with drones has been launched in Poland as part of the EU-funded Uspace4UAM project. The first of these trials is now underway in Rzeszów, a city of close to 200,000 people.

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
