RT/ High-speed AI drone overtakes world-champion drone racers

Paradigm · Sep 7, 2023 · 31 min read

Robotics biweekly vol.81, 24th August — 7th September

TL;DR

  • A group of researchers has set a new milestone with the first autonomous system capable of beating human champions at a physical sport: drone racing. The AI system ‘Swift’ has beaten the world champions — a result that seemed unattainable just a few years ago. The AI-piloted drone was trained in a simulated environment. Real-world applications include environmental monitoring and disaster response.
  • Research teams have detailed a pioneering breakthrough in medical device technology that could lead to intelligent, long-lasting, tailored treatment for patients, thanks to soft robotics and artificial intelligence.
  • Imagine a robot that can wedge itself through the cracks in rubble to search for survivors trapped in the wreckage of a collapsed building. Engineers are working toward that goal with CLARI, short for Compliant Legged Articulated Robotic Insect.
  • A research team overcomes limitations of conventional strain sensors using computer vision integrated optical sensors.
  • An innovative bimanual robot displays tactile sensitivity close to human-level dexterity using AI to inform its actions.
  • A new machine-learning technique can efficiently learn to control a robot, leading to better performance with fewer data.
  • The global population of people older than 65 years of age is rapidly increasing, and with it the need for care. Although care robots are a promising solution to fill in for caregivers, their social implementation has been slow and unsatisfactory. A team of international researchers has now developed the first universal model that can be employed across cultural contexts to explain how ethical perceptions affect the willingness to use care robots.
  • Engineers have developed HADAR, or heat-assisted detection and ranging.
  • A new AI technique enables a robot to develop complex plans for manipulating an object using its entire hand, not just fingertips. This model can generate effective plans in about a minute using a standard laptop.
  • The new soft robotic gripper is not only 3D printed in one print, it also doesn’t need any electronics to work.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

Champion-level drone racing using deep reinforcement learning

by Elia Kaufmann, Leonard Bauersfeld, Antonio Loquercio, Matthias Müller, Vladlen Koltun, Davide Scaramuzza in Nature

Remember when IBM’s Deep Blue won against Garry Kasparov at chess in 1997, or Google’s AlphaGo crushed the top champion Lee Sedol at Go, a much more complex game, in 2016? These competitions where machines prevailed over human champions are key milestones in the history of artificial intelligence. Now a group of researchers from the University of Zurich and Intel has set a new milestone with the first autonomous system capable of beating human champions at a physical sport: drone racing.

The AI system, called Swift, won multiple races against three world-class champions in first-person view (FPV) drone racing, where pilots fly quadcopters at speeds exceeding 100 km/h, controlling them remotely while wearing a headset linked to an onboard camera.

“Physical sports are more challenging for AI because they are less predictable than board or video games. We don’t have a perfect knowledge of the drone and environment models, so the AI needs to learn them by interacting with the physical world,” says Davide Scaramuzza, head of the Robotics and Perception Group at the University of Zurich — and newly minted drone racing team captain.

Until very recently, autonomous drones took twice as long as those piloted by humans to fly through a racetrack, unless they relied on an external position-tracking system to precisely control their trajectories. Swift, however, reacts in real time to the data collected by an onboard camera, like the one used by human racers. Its integrated inertial measurement unit measures acceleration and speed while an artificial neural network uses data from the camera to localize the drone in space and detect the gates along the racetrack. This information is fed to a control unit, also based on a deep neural network that chooses the best action to finish the circuit as fast as possible.
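The Nature paper is the definitive reference for Swift’s architecture. Purely as an illustration of the kind of perception-to-control pipeline described above (camera frames and IMU readings fused into a state estimate, which a learned policy maps to low-level commands), here is a minimal, hypothetical sketch in Python/PyTorch; the module names, dimensions and the four-value command output are assumptions for illustration, not the authors’ code.

```python
# Hypothetical sketch of a perception-to-control loop in the spirit of the
# description above; not the authors' implementation.
import torch
import torch.nn as nn

class GatePerception(nn.Module):
    """Stand-in for the network that detects gates and localizes the drone from camera frames."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 12),  # assumed output: relative pose of upcoming gates
        )

    def forward(self, frame):
        return self.backbone(frame)

class ControlPolicy(nn.Module):
    """Stand-in for the learned control network that maps the fused state to commands."""
    def __init__(self, state_dim=12 + 9, act_dim=4):  # act_dim=4: thrust + body rates (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

perception, policy = GatePerception(), ControlPolicy()
frame = torch.rand(1, 3, 120, 160)   # dummy onboard camera image
imu = torch.rand(1, 9)               # dummy accelerations, angular rates, attitude
gate_obs = perception(frame)                          # where are the next gates?
command = policy(torch.cat([gate_obs, imu], dim=1))   # normalized thrust and body-rate setpoints
print(command.shape)  # torch.Size([1, 4])
```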

Drone racing.

Swift was trained in a simulated environment where it taught itself to fly by trial and error, using a type of machine learning called reinforcement learning. The use of simulation helped avoid destroying multiple drones in the early stages of learning when the system often crashes. “To make sure that the consequences of actions in the simulator were as close as possible to the ones in the real world, we designed a method to optimize the simulator with real data,” says Elia Kaufmann, first author of the paper. In this phase, the drone flew autonomously thanks to very precise positions provided by an external position-tracking system, while also recording data from its camera. This way it learned to autocorrect errors it made interpreting data from the onboard sensors.

The Swift system.

After a month of simulated flight time, which corresponds to less than an hour on a desktop PC, Swift was ready to challenge its human competitors: the 2019 Drone Racing League champion Alex Vanover, the 2019 MultiGP Drone Racing champion Thomas Bitmatta, and three-time Swiss champion Marvin Schaepper. The races took place between 5 and 13 June 2022, on a purpose-built track in a hangar of the Dübendorf Airport, near Zurich. The track covered an area of 25 by 25 meters, with seven square gates that had to be passed in the right order to complete a lap, and demanded challenging maneuvers such as a Split-S, an acrobatic feature that involves half-rolling the drone and executing a descending half-loop at full speed.

Overall, Swift achieved the fastest lap, with a half-second lead over the best lap by a human pilot. On the other hand, human pilots proved more adaptable than the autonomous drone, which failed when the conditions were different from what it was trained for, e.g., if there was too much light in the room.

Pushing the envelope in autonomous flight is important well beyond drone racing, Scaramuzza notes. “Drones have a limited battery capacity; they need most of their energy just to stay airborne. Thus, by flying faster we increase their utility.” In applications such as forest monitoring or space exploration, for example, flying fast is important to cover large spaces in a limited time. In the film industry, fast autonomous drones could be used for shooting action scenes. And the ability to fly at high speeds could make a huge difference for rescue drones sent inside a building on fire.

Soft robot–mediated autonomous adaptation to fibrotic capsule formation for improved drug delivery

by Rachel Beatty, Keegan L. Mendez, et al in Science Robotics

Research teams at University of Galway and Massachusetts Institute of Technology (MIT) have detailed a breakthrough in medical device technology that could lead to intelligent, long-lasting, tailored treatment for patients thanks to soft robotics and artificial intelligence.

The transatlantic partnership has created a smart implantable device that can administer a drug — while also sensing when it is beginning to be rejected — and use AI to change the shape of the device to maintain drug dosage and simultaneously bypass scar tissue build-up.

Implantable medical device technologies offer promise to unlock advanced therapeutic interventions in healthcare, such as insulin release to treat diabetes, but a major issue holding back such devices is the patient’s reaction to a foreign body.

Dr Rachel Beatty, University of Galway, and co-lead author on the study, explained: “The technology which we have developed, by using soft robotics, advances the potential of implantable devices to be in a patient’s body for extended periods, providing long-lasting therapeutic action. Imagine a therapeutic implant that can also sense its environment and respond as needed using AI — this approach could generate revolutionary changes in implantable drug delivery for a range of chronic diseases.”

The University of Galway-MIT research team originally developed first-generation flexible devices, known as soft robotic implants, to improve drug delivery and reduce fibrosis. Despite that success, the team regarded the technology as one-size-fits-all, as it did not account for how individual patients react and respond differently, or for the progressive nature of fibrosis, in which scar tissue builds up around the device, encapsulating it, impeding and blocking its purpose and eventually forcing it to fail. The latest research demonstrates how they have significantly advanced the technology — using AI — making it responsive to the implant environment, with the potential to last longer by defending against the body’s natural urge to reject a foreign body.

Dr Beatty added: “I wanted to tailor drug delivery to individuals, but needed to create a method of sensing the foreign body response first.”

The soft robotic implant developed by University of Galway and MIT. Credit: Martina Regan.

The research team deployed an emerging technique to help reduce scar tissue formation known as mechanotherapy, where soft robotic implants make regular movements in the body, such as inflating and deflating. The timed, repetitive or varied movements help to prevent scar tissue from forming. The key to the advanced technology in the implantable device is a conductive porous membrane that can sense when pores are blocked by scar tissue. It detects the blockages as cells and the materials the cells produce block electrical signals travelling through the membrane.

The researchers measured electrical impedance and scar tissue formation on the membrane, finding a correlation between the two. A machine learning algorithm was also developed and deployed to predict the required number and force of actuations to achieve consistent drug dosing, regardless of the level of fibrosis present. Using computer simulations, the researchers also explored the potential of the device to release medication over time with a surrounding fibrotic capsule of different thicknesses. The research showed that changing the force and number of times the device was compelled to move or change shape allowed the device to release more drug, helping to bypass scar tissue build-up.
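The Science Robotics paper describes the actual sensing and dosing algorithm. The snippet below is only a toy sketch of the general idea, mapping a sensed impedance level (a proxy for fibrosis) to an actuation regime intended to keep the delivered dose constant; the data, model choice and variable names are invented for illustration.

```python
# Toy sketch (not the published model): predict an actuation regime
# (number of actuations, actuation force) from a measured membrane impedance
# so that the delivered dose stays constant despite fibrosis.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
impedance = rng.uniform(0.1, 1.0, size=(500, 1))   # normalized impedance, higher = more fibrosis
# Assumed synthetic ground truth: thicker capsules need more, stronger actuations.
n_actuations = 2 + 10 * impedance[:, 0] + rng.normal(0, 0.3, 500)
force = 0.5 + 2.0 * impedance[:, 0] ** 2 + rng.normal(0, 0.05, 500)
targets = np.column_stack([n_actuations, force])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(impedance, targets)

measured = np.array([[0.8]])                       # a heavily encapsulated device
pred_n, pred_force = model.predict(measured)[0]
print(f"suggested regime: {pred_n:.1f} actuations at relative force {pred_force:.2f}")
```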

Professor Ellen Roche, Professor of Mechanical Engineering at MIT, said: “If we can sense how the individual’s immune system is responding to an implanted therapeutic device and modify the dosing regime accordingly, it could have great potential in personalised, precision drug delivery, reducing off-target effects and ensuring the right amount of drug is delivered at the right time. The work presented here is a step towards that goal.”

Professor Garry Duffy, Professor of Anatomy and Regenerative Medicine at University of Galway, and senior author on the study, said: “The device worked out the best regime to release a consistent dose, by itself, even when significant fibrosis was simulated. We showed a worst-case scenario of very thick and dense scar tissue around the device and it overcame this by changing how it pumps to deliver medication. We could finely control the drug release in a computational model and on the bench using soft robotics, regardless of significant fibrosis.”

The research team believe that their medical device breakthrough may pave the way for completely independent closed-loop implants that not only reduce fibrotic encapsulation, but sense it over time, and intelligently adjust their drug release activity in response.

Design of CLARI: A Miniature Modular Origami Passive Shape‐Morphing Robot

by Heiko Kabutz, Kaushik Jayaram in Advanced Intelligent Systems

Coming to a tight spot near you: CLARI, the little, squishable robot that can passively change its shape to squeeze through narrow gaps — with a bit of inspiration from the world of bugs.

CLARI, which stands for Compliant Legged Articulated Robotic Insect, comes from a team of engineers at the University of Colorado Boulder. It also has the potential to aid first responders after major disasters in an entirely new way. Several of these robots can easily fit in the palm of your hand, and each weighs less than a Ping Pong ball. CLARI can transform its shape from square to long and slender when its surroundings become cramped, said Heiko Kabutz, a doctoral student in the Paul M. Rady Department of Mechanical Engineering. Right now, CLARI has four legs. But the machine’s design allows engineers to mix and match its appendages, potentially giving rise to some wild and wriggly robots.

“It has a modular design, which means it’s very easy to customize and add more legs,” Kabutz said. “Eventually, we’d like to build an eight-legged, spider-style robot that could walk over a web.”

CLARI is still in its infancy, added Kaushik Jayaram, co-author of the study and an assistant professor of mechanical engineering at CU Boulder. The robot, for example, is tethered to wires, which supply it with power and send it basic commands. But he hopes that, one day, these petite machines could crawl independently into spaces where no robot has crawled before — like the insides of jet engines or the rubble of collapsed buildings.

a) CLARI — compliant-legged articulated robotic insect, a miniature robot, featured next to an Oklahoma brown tarantula commonly found in Colorado. b) CLARI’s modular and compliant body allows it to vary body shapes and operate in multiple configurations. c) Some of the most successful legged robots, ranging from millimeters to meters in size, typically share a cuboidal body shape, with the exception of (IV) CLARI.

“Most robots today basically look like a cube,” Jayaram said. “Why should they all be the same? Animals come in all shapes and sizes.”

Jayaram is no stranger to robots that reflect the hodgepodge of the animal world. As a graduate student at the University of California, Berkeley, he designed a robot that could squeeze through narrow spaces by compressing down to about half its height — just like cockroaches wedging their way through cracks in a wall. But that machine, he said, represented just the tip of the iceberg where animal flexibility is concerned.

“We were able to squeeze through vertical gaps,” he said. “But that got me thinking: That’s one way to compress. What are others?”

Which is where CLARI, made to squeeze through horizontal gaps, scuttles into the picture. In its most basic form, the robot is shaped like a square with one leg along each of its four sides. Depending on how you squeeze CLARI, however, it can become wider, like a crab, or more elongated, like Jayaram’s old favorite, the cockroach. In all, the robot can morph from about 34 millimeters (1.3 inches) wide in its square shape to about 21 millimeters (0.8 inches) wide in its elongated form.

Unlike Jayaram’s earlier mechanized cockroach, each of CLARI’s legs functions almost like an independent robot — with its own circuit board and dual actuators that move the leg forward and backward and side-to-side, similar to a human hip joint. Theoretically, that modularity might allow CLARI robots to take on a wide variety of shapes.

“What we want are general-purpose robots that can change shape and adapt to whatever the environmental conditions are,” Jayaram said. “In the animal world, that might be something like an amoeba, which has no well-defined shape but can change depending on whether it needs to move fast or engulf some food.”

He and Kabutz see their current design as the first in a series of CLARI robots that they hope will become smaller and more nimble. In future iterations, the researchers want to incorporate sensors into CLARI so that it can detect and react to obstacles. The group is also examining how to give the robot the right mix of flexibility and strength, Kabutz said — a task that will only get more difficult the more legs the team adds on. Ultimately, the team wants to develop shape-changing robots that can move not just through a lab environment but through complex, natural spaces, where the machines will need to bounce off obstacles like trees or even blades of grass, or push through the cracks between rocks and keep going.

“When we try to catch an insect, they can disappear into a gap,” Kabutz said. “But if we have robots with the capabilities of a spider or a fly, we can add cameras or sensors, and now we’re able to start exploring spaces we couldn’t get into before.”

Real-time multiaxial strain mapping using computer vision integrated optical sensors

by Sunguk Hong, Vega Pradana Rachim, Jin-Hyeok Baek, Sung-Min Park in npj Flexible Electronics

Recently, a Korean company donated a wearable robot, designed to aid patients with limited mobility during their rehabilitation, to a hospital. These patients wear the robot to receive assistance with muscle and joint exercises while performing actions such as walking or sitting. Wearable devices such as smartwatches or eyewear, worn on the body or attached to the skin, have the potential to enhance our quality of life, offering a glimmer of hope to some people, much like this robotic innovation.

The strain sensors used in these rehabilitative robots analyze data by translating physical changes in specific regions into electrical signals. These sensors are pliable and adept at gauging even the most subtle bodily changes, as they are made from lightweight materials for ease of attachment to the skin. However, conventional soft strain sensors often exhibit inadequate durability due to susceptibility to external factors such as temperature and humidity. Furthermore, their complicated fabrication process poses challenges for widespread commercialization.

A research team led by Professor Sung-Min Park from the Department of Convergence IT Engineering and the Department of Mechanical Engineering and PhD candidate Sunguk Hong from the Department of Mechanical Engineering at Pohang University of Science and Technology (POSTECH) has successfully overcome the limitations of these soft strain sensors by integrating computer vision technology into optical sensors.

Design and mechanism of CVOS sensor.

The research team developed a sensor technology known as computer vision-based optical strain (CVOS) during their study. Unlike conventional sensors reliant on electrical signals, CVOS sensors employ computer vision and optical sensors to analyze microscale optical patterns, extracting strain information from changes in those patterns. This approach inherently enhances durability by eliminating elements that compromise sensor functionalities and streamlining fabrication processes, thereby facilitating sensor commercialization.

In contrast to conventional sensors that solely detect biaxial strain, CVOS sensors exhibit the exceptional ability to detect three-axial rotational movements through real-time multiaxial strain mapping. In essence, these sensors enable the precise recognition of intricate and various bodily motions through a single sensor. The research team substantiated this claim through experiments applying CVOS sensors to assistive devices in rehabilitative treatments.
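For readers curious about what multiaxial strain mapping from images involves, the sketch below is a generic, hypothetical example: track a few optical markers before and after deformation, fit a 2D deformation gradient by least squares, and read off the strain tensor. The actual CVOS sensor analyzes microscale optical patterns with its own algorithms, and the coordinates here are made up.

```python
# Generic vision-based strain sketch (not the CVOS algorithm): fit a 2D
# deformation gradient F to tracked marker positions, then compute the
# Green-Lagrange strain tensor.
import numpy as np

# Reference and deformed marker coordinates (e.g., from blob detection or optical flow).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
x = np.array([[0.00, 0.00], [1.05, 0.02], [-0.01, 0.98], [1.04, 1.00], [0.52, 0.50]])

# Fit an affine map x ~ F @ X + t in the least-squares sense.
A = np.hstack([X, np.ones((len(X), 1))])          # design matrix [X | 1]
params, *_ = np.linalg.lstsq(A, x, rcond=None)
F = params[:2].T                                  # 2x2 deformation gradient
E = 0.5 * (F.T @ F - np.eye(2))                   # Green-Lagrange strain (multiaxial)

print("deformation gradient F:\n", F)
print("normal strains Exx, Eyy:", E[0, 0], E[1, 1], "shear Exy:", E[0, 1])
```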

Through the integration of an AI-based response correction algorithm that corrects diverse error factors arising during signal detection, the experiments achieved a high level of reliability. Even after undergoing more than 10,000 iterations, the sensors consistently maintained their exceptional performance.

Professor Sung-Min Park, who led the research, explained, “The CVOS sensors excel in distinguishing body movements across diverse directions and angles, thereby optimizing effective rehabilitative interventions.” He further added, “By tailoring design indicators and algorithms to align with specific objectives, CVOS sensors have boundless potential for applications spanning industries.”

Bi-Touch: Bimanual Tactile Manipulation With Sim-to-Real Deep Reinforcement Learning

by Yijiong Lin, Alex Church, Max Yang, Haoran Li, John Lloyd, Dandan Zhang, Nathan F. Lepora in IEEE Robotics and Automation Letters

An innovative bimanual robot displays tactile sensitivity close to human-level dexterity using AI to inform its actions.

The new Bi-Touch system, designed by scientists at the University of Bristol and based at the Bristol Robotics Laboratory, allows robots to carry out manual tasks by sensing what to do from a digital helper. The findings show how an AI agent interprets its environment through tactile and proprioceptive feedback, and then controls the robot’s behaviour, enabling precise sensing, gentle interaction, and effective object manipulation to accomplish robotic tasks. This development could revolutionise industries such as fruit picking and domestic service, and eventually recreate touch in artificial limbs.

Lead author Yijiong Lin from the Faculty of Engineering, explained: “With our Bi-Touch system, we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch. And more importantly, we can directly apply these agents from the virtual world to the real world without further training.

“The tactile bimanual agent can solve tasks even under unexpected perturbations and manipulate delicate objects in a gentle way.”

Bimanual manipulation with tactile feedback will be key to human-level robot dexterity. However, this topic is less explored than single-arm settings, partly due to the limited availability of suitable hardware and the complexity of designing effective controllers for tasks with relatively large state-action spaces. The team were able to develop a tactile dual-arm robotic system using recent advances in AI and robotic tactile sensing.

Dual arm robot holding crisp.

The researchers built up a virtual world (simulation) that contained two robot arms equipped with tactile sensors. They then designed reward functions and a goal-update mechanism that could encourage the robot agents to learn to achieve the bimanual tasks, and developed a real-world tactile dual-arm robot system to which they could directly apply the agent. The robot learns bimanual skills through Deep Reinforcement Learning (Deep-RL), one of the most advanced techniques in the field of robot learning. It is designed to teach robots to do things by letting them learn from trial and error, akin to training a dog with rewards and punishments.

For robotic manipulation, the robot learns to make decisions by attempting various behaviours to achieve designated tasks, for example, lifting up objects without dropping or breaking them. When it succeeds, it gets a reward, and when it fails, it learns what not to do. With time, it figures out the best ways to grab things using these rewards and punishments. The AI agent is visually blind, relying only on proprioceptive feedback (a body’s ability to sense movement, action and location) and tactile feedback. With this approach, the researchers were able to get the dual-arm robot to safely lift items as fragile as a single Pringle crisp.
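As a rough illustration of the kind of reward shaping and goal-update mechanism mentioned above (not the reward actually used in the Bi-Touch paper), a toy sketch might look like this, with all thresholds and weights invented:

```python
# Toy sketch of tactile reward shaping: reward gentle two-finger contact while
# moving an object toward a goal pose, and resample the goal once it is reached.
# Weights, thresholds and variable names are invented for illustration.
import numpy as np

def reward(obj_pos, goal_pos, contact_left, contact_right, contact_force):
    """contact_* are booleans from the tactile sensors; contact_force is a scalar proxy."""
    dist = np.linalg.norm(obj_pos - goal_pos)
    r = -dist                                                 # move the object toward the goal
    r += 0.2 if (contact_left and contact_right) else -0.5    # keep both tactile tips in touch
    r -= 0.1 * max(0.0, contact_force - 1.0)                  # penalize squeezing harder than needed
    return r

def update_goal(obj_pos, goal_pos, rng, tol=0.02):
    """Goal-update mechanism: once the object reaches the goal, sample a new one nearby."""
    if np.linalg.norm(obj_pos - goal_pos) < tol:
        return goal_pos + rng.uniform(-0.05, 0.05, size=3)
    return goal_pos

rng = np.random.default_rng(0)
goal = np.array([0.0, 0.0, 0.1])
obj = np.array([0.0, 0.01, 0.09])
print(reward(obj, goal, True, True, 1.3))
print(update_goal(obj, goal, rng))
```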

Co-author Professor Nathan Lepora added: “Our Bi-Touch system showcases a promising approach with affordable software and hardware for learning bimanual behaviours with touch in simulation, which can be directly applied to the real world. Our developed tactile dual-arm robot simulation allows further research on more different tasks as the code will be open-source, which is ideal for developing other downstream tasks.”

Yijiong concluded: “Our Bi-Touch system allows a tactile dual-arm robot to learn solely from simulation, and to achieve various manipulation tasks in a gentle way in the real world. And now we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch.”

Learning Control-Oriented Dynamical Structure from Data

by Spencer M. Richards, Jean-Jacques Slotine, Navid Azizan, Marco Pavone in arXiv

Researchers from MIT and Stanford University have devised a new machine-learning approach that could be used to control a robot, such as a drone or autonomous vehicle, more effectively and efficiently in dynamic environments where conditions can change rapidly.

This technique could help an autonomous vehicle learn to compensate for slippery road conditions to avoid going into a skid, allow a robotic free-flyer to tow different objects in space, or enable a drone to closely follow a downhill skier despite being buffeted by strong winds. The researchers’ approach incorporates certain structure from control theory into the process for learning a model in such a way that leads to an effective method of controlling complex dynamics, such as those caused by impacts of wind on the trajectory of a flying vehicle. One way to think about this structure is as a hint that can help guide how to control a system.

“The focus of our work is to learn intrinsic structure in the dynamics of the system that can be leveraged to design more effective, stabilizing controllers,” says Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS). “By jointly learning the system’s dynamics and these unique control-oriented structures from data, we’re able to naturally create controllers that function much more effectively in the real world.”

Using this structure in a learned model, the researchers’ technique immediately extracts an effective controller from the model, as opposed to other machine-learning methods that require a controller to be derived or learned separately with additional steps. With this structure, their approach is also able to learn an effective controller using fewer data than other approaches. This could help their learning-based control system achieve better performance faster in rapidly changing environments.

“This work tries to strike a balance between identifying structure in your system and just learning a model from data,” says lead author Spencer M. Richards, a graduate student at Stanford University. “Our approach is inspired by how roboticists use physics to derive simpler models for robots. Physical analysis of these models often yields a useful structure for the purposes of control — one that you might miss if you just tried to naively fit a model to data. Instead, we try to identify similarly useful structure from data that indicates how to implement your control logic.”

Additional authors of the paper are Jean-Jacques Slotine, professor of mechanical engineering and of brain and cognitive sciences at MIT, and Marco Pavone, associate professor of aeronautics and astronautics at Stanford. The research will be presented at the International Conference on Machine Learning (ICML).

Determining the best way to control a robot to accomplish a given task can be a difficult problem, even when researchers know how to model everything about the system. A controller is the logic that enables a drone to follow a desired trajectory, for example. This controller would tell the drone how to adjust its rotor forces to compensate for the effect of winds that can knock it off a stable path to reach its goal. This drone is a dynamical system — a physical system that evolves over time. In this case, its position and velocity change as it flies through the environment. If such a system is simple enough, engineers can derive a controller by hand.

Modeling a system by hand intrinsically captures a certain structure based on the physics of the system. For instance, if a robot were modeled manually using differential equations, these would capture the relationship between velocity, acceleration, and force. Acceleration is the rate of change in velocity over time, which is determined by the mass of and forces applied to the robot.

But often the system is too complex to be exactly modeled by hand. Aerodynamic effects, like the way swirling wind pushes a flying vehicle, are notoriously difficult to derive manually, Richards explains. Researchers would instead take measurements of the drone’s position, velocity, and rotor speeds over time, and use machine learning to fit a model of this dynamical system to the data. But these approaches typically don’t learn a control-based structure. This structure is useful in determining how to best set the rotor speeds to direct the motion of the drone over time. Once they have modeled the dynamical system, many existing approaches also use data to learn a separate controller for the system.

“Other approaches that try to learn dynamics and a controller from data as separate entities are a bit detached philosophically from the way we normally do it for simpler systems. Our approach is more reminiscent of deriving models by hand from physics and linking that to control,” Richards says.

The team from MIT and Stanford developed a technique that uses machine learning to learn the dynamics model, but in such a way that the model has some prescribed structure that is useful for controlling the system. With this structure, they can extract a controller directly from the dynamics model, rather than using data to learn an entirely separate model for the controller.

“We found that beyond learning the dynamics, it’s also essential to learn the control-oriented structure that supports effective controller design. Our approach of learning state-dependent coefficient factorizations of the dynamics has outperformed the baselines in terms of data efficiency and tracking capability, proving to be successful in efficiently and effectively controlling the system’s trajectory,” Azizan says.
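As a rough, hypothetical illustration of what a state-dependent coefficient factorization can look like in code, the sketch below fits x_dot ≈ A(x)x + B(x)u with small networks on a synthetic pendulum and then reads a feedback gain off the learned factors. The controller extraction uses a standard LQR solve as a stand-in; the paper derives its own stabilizing controllers, and none of the dimensions or training details are the authors’.

```python
# Toy sketch (not the authors' code): learn a state-dependent coefficient
# factorization x_dot ~ A(x) x + B(x) u from data, then extract a feedback
# gain at the current state from the learned factors (here via an LQR solve
# as a stand-in). System, dimensions and training details are invented.
import numpy as np
import torch
import torch.nn as nn
from scipy.linalg import solve_continuous_are

n, m = 2, 1  # state and input dimensions (assumed)

class SDCModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.A_net = nn.Sequential(nn.Linear(n, 32), nn.Tanh(), nn.Linear(32, n * n))
        self.B_net = nn.Sequential(nn.Linear(n, 32), nn.Tanh(), nn.Linear(32, n * m))

    def factors(self, x):
        return self.A_net(x).view(-1, n, n), self.B_net(x).view(-1, n, m)

    def forward(self, x, u):
        A, B = self.factors(x)
        return (A @ x.unsqueeze(-1) + B @ u.unsqueeze(-1)).squeeze(-1)

# Synthetic data from a toy nonlinear system: a damped pendulum.
def true_dyn(x, u):
    th, om = x[:, 0], x[:, 1]
    return torch.stack([om, -torch.sin(th) - 0.1 * om + u[:, 0]], dim=1)

torch.manual_seed(0)
x_data, u_data = torch.randn(2000, n), torch.randn(2000, m)
xdot_data = true_dyn(x_data, u_data)

model = SDCModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((model(x_data, u_data) - xdot_data) ** 2).mean()
    loss.backward()
    opt.step()

# Extract a state-feedback gain at the current state from the learned factors.
x0 = torch.tensor([[0.3, 0.0]])
with torch.no_grad():
    A, B = model.factors(x0)
A, B = A[0].numpy().astype(float), B[0].numpy().astype(float)
P = solve_continuous_are(A, B, np.eye(n), np.eye(m))
K = B.T @ P   # with Q = R = I, the control is u = -K x
print("gain K at x0:", K)
```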

When they tested this approach, their controller closely followed desired trajectories, outpacing all the baseline methods. The controller extracted from their learned model nearly matched the performance of a ground-truth controller, which is built using the exact dynamics of the system.

“By making simpler assumptions, we got something that actually worked better than other complicated baseline approaches,” Richards adds.

The researchers also found that their method was data-efficient, which means it achieved high performance even with few data. For instance, it could effectively model a highly dynamic rotor-driven vehicle using only 100 data points. Methods that used multiple learned components saw their performance drop much faster with smaller datasets. This efficiency could make their technique especially useful in situations where a drone or robot needs to learn quickly in rapidly changing conditions. Plus, their approach is general and could be applied to many types of dynamical systems, from robotic arms to free-flying spacecraft operating in low-gravity environments.

Developing a model to explain users’ ethical perceptions regarding the use of care robots in home care: A cross-sectional study in Ireland, Finland, and Japan

by Hiroo Ide, Sayuri Suwa, Yumi Akuta, Naonori Kodate, Mayuko Tsujimura, Mina Ishimaru, Atsuko Shimamura, Helli Kitinoja, Sarah Donnelly, Jaakko Hallila, Marika Toivonen, Camilla Bergman-Kärpijoki, Erika Takahashi, Wenwei Yu in Archives of Gerontology and Geriatrics

Countries like Japan are experiencing declining birth rates and an aging population. The increased burden of care for this aging population may lead to a shortage of caregivers in a decade’s time. Thus, the recruitment and allocation of resources must be planned in advance. Technological interventions in the form of robots that provide home care services to the aged appear to be a promising solution to this problem.

Although care robots are being developed and improved at a rapid pace, their social acceptance has been limited. It is suspected that the ethical issues surrounding the use of such robots may be obstructing the implementation of this technology. Many acceptance models have demonstrated that the ethical perceptions of older people, their families, and professional caregivers regarding care robots can impact their willingness to adopt this technology. However, there is no universal model that can elucidate the relationship between ethical perceptions and the willingness to use care robots across countries and cultural contexts.

To fill this knowledge gap, a team of international researchers led by Professor Sayuri Suwa from Chiba University, including Dr. Hiroo Ide from the University of Tokyo, Dr. Yumi Akuta from Tokyo Healthcare University, Dr. Naonori Kodate from University College Dublin, Dr. Jaakko Hallila from Seinäjoki University of Applied Sciences, and Dr. Wenwei Yu from Chiba University, among others, conducted a cross-sectional study across Japan, Ireland, and Finland.

Sharing the motivation behind the study, Prof. Suwa explains, “Today, in Japan’s super-aged society, various care robots, including monitoring cameras, have been developed and marketed to compensate for the shortage of care staff and to alleviate their stress. However, there are no discussions among users — older people, family caregivers, and care staff — and developers regarding the willingness to use care robots, the protection of privacy, and the appropriate use of personal information associated with the use of care robots. The desire to improve this situation and to promote appropriate utilization of care robots beyond Japan was the impetus for this research.”

Illustrations of home-care robots provided in the questionnaire form.

The team developed a questionnaire that examined the ethical issues that could affect the willingness to use a care robot across the three countries. The survey was conducted between November 2018 and February 2019 among older people, their family caregivers, and professional caregivers. This study was also reviewed by multiple ethical committees in all three countries. The researchers analyzed a total of 1,132 responses, which comprised 664 responses from Japan, 208 from Ireland, and 260 from Finland. They found that the willingness to use care robots was highest in Japan (77.1%), followed by Ireland (70.3%), and was lowest in Finland (52.8%).

Next, the researchers developed a conceptual model and evaluated it using statistical methods. From the questionnaire, the researchers included responses to ten items in the model, categorized into four broad domains — acquisition of personal information, use of personal information for medical and long-term care, secondary use of personal information, and participation in research and development. They then improved the model using Akaike’s information criterion (AIC). The model underwent incremental improvements to attain better (smaller) AIC values. The final model was then applied to each country.
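As a generic illustration of how AIC-guided refinement works (smaller AIC is better, and uninformative items are penalized), the toy sketch below compares nested logistic-regression models on synthetic data. It is only a stand-in for the study’s conceptual model; the predictors and effect sizes are invented.

```python
# Generic toy sketch of AIC-based model comparison (not the study's data or
# its final model): fit nested logistic regressions for "willing to use a
# care robot" and keep the candidate with the smallest AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
privacy_concern = rng.normal(size=n)     # assumed predictor: concern about personal data use
rnd_participation = rng.normal(size=n)   # assumed predictor: willingness to join R&D
noise_item = rng.normal(size=n)          # an item carrying no real signal
logits = 0.8 - 1.2 * privacy_concern + 0.6 * rnd_participation
willing = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

candidates = {
    "privacy only": np.column_stack([privacy_concern]),
    "privacy + participation": np.column_stack([privacy_concern, rnd_participation]),
    "privacy + participation + noise": np.column_stack([privacy_concern, rnd_participation, noise_item]),
}
for name, X in candidates.items():
    res = sm.Logit(willing, sm.add_constant(X)).fit(disp=0)
    print(f"{name:35s} AIC = {res.aic:.1f}")
# The model with the smallest AIC is retained; the uninformative item
# typically increases AIC even though it never lowers the likelihood.
```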

Thus, this study demonstrated the successful use of a single universal model that could explain the correlation between ethical perceptions and social implementation of care robots across three countries with different geographies, demographics, cultures, and systems.

Discussing the importance and long-term impact of their study, Prof. Suwa concludes, “From our results, we can infer that social implementation of care robots can be promoted if developers and researchers encourage potential users to participate in the development process, proposed in the form of a co-design and co-production concept. We hope that the process of developing care robots will be improved to contribute to human well-being in a global aging society.”

Heat-assisted detection and ranging

by Fanglin Bao, Xueji Wang, Shree Hari Sureshbabu, Gautam Sreekumar, Liping Yang, Vaneet Aggarwal, Vishnu N. Boddeti, Zubin Jacob in Nature

Researchers at Purdue University are advancing the world of robotics and autonomy with their patent-pending method that improves on traditional machine vision and perception.

Zubin Jacob, the Elmore Associate Professor of Electrical and Computer Engineering in the Elmore Family School of Electrical and Computer Engineering, and research scientist Fanglin Bao have developed HADAR, or heat-assisted detection and ranging. Jacob said it is expected that one in 10 vehicles will be automated and that there will be 20 million robot helpers that serve people by 2030.

“Each of these agents will collect information about its surrounding scene through advanced sensors to make decisions without human intervention,” Jacob said. “However, simultaneous perception of the scene by numerous agents is fundamentally prohibitive.”

Traditional active sensors like LiDAR, or light detection and ranging, radar and sonar emit signals and subsequently receive them to collect 3D information about a scene. These methods have drawbacks that increase as they are scaled up, including signal interference and risks to people’s eye safety. In comparison, video cameras that work based on sunlight or other sources of illumination are advantageous, but low-light conditions such as nighttime, fog or rain present a serious impediment. Traditional thermal imaging is a fully passive sensing method that collects invisible heat radiation originating from all objects in a scene. It can sense through darkness, inclement weather and solar glare. But Jacob said fundamental challenges hinder its use today.

“Objects and their environment constantly emit and scatter thermal radiation, leading to textureless images famously known as the ‘ghosting effect,’” Bao said. “Thermal pictures of a person’s face show only contours and some temperature contrast; there are no features, making it seem like you have seen a ghost. This loss of information, texture and features is a roadblock for machine perception using heat radiation.”

HADAR TeX vision algorithms.

HADAR combines thermal physics, infrared imaging and machine learning to pave the way to fully passive and physics-aware machine perception.

“Our work builds the information theoretic foundations of thermal perception to show that pitch darkness carries the same amount of information as broad daylight. Evolution has made human beings biased toward the daytime. Machine perception of the future will overcome this long-standing dichotomy between day and night,” Jacob said.

Bao said, “HADAR vividly recovers the texture from the cluttered heat signal and accurately disentangles temperature, emissivity and texture, or TeX, of all objects in a scene. It sees texture and depth through the darkness as if it were day and also perceives physical attributes beyond RGB, or red, green and blue, visible imaging or conventional thermal sensing. It is surprising that it is possible to see through pitch darkness like broad daylight.”
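To give a flavour of the inverse problem behind disentangling temperature, emissivity and texture, the toy sketch below fits a temperature and a flat emissivity to synthetic radiance in a few long-wave infrared bands using Planck’s law. It deliberately ignores the scattered and reflected component that HADAR recovers as texture, and all numbers are made up.

```python
# Toy sketch of the inverse problem behind thermal "TeX" sensing (not the
# HADAR algorithm): fit temperature and emissivity to multi-band radiance
# with Planck's law, ignoring the scattered/reflected (texture) term.
import numpy as np
from scipy.optimize import least_squares

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Spectral radiance of a blackbody at wavelength lam (m) and temperature T (K)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

bands = np.array([8e-6, 9e-6, 10e-6, 11e-6, 12e-6])   # long-wave infrared bands (m)

# Synthetic measurement: a 305 K surface with emissivity 0.85, plus sensor noise.
rng = np.random.default_rng(0)
measured = 0.85 * planck(bands, 305.0) * (1 + 0.01 * rng.normal(size=bands.size))

def residual(p):
    T, eps = p
    return eps * planck(bands, T) - measured

fit = least_squares(residual, x0=[290.0, 0.9], bounds=([200.0, 0.0], [400.0, 1.0]))
T_hat, eps_hat = fit.x
print(f"estimated temperature {T_hat:.1f} K, emissivity {eps_hat:.2f}")
```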

The team tested HADAR TeX vision using an off-road nighttime scene.

“HADAR TeX vision recovered textures and overcame the ghosting effect,” Bao said. “It recovered fine textures such as water ripples, bark wrinkles and culverts in addition to details about the grassy land.”

Ongoing improvements to HADAR focus on reducing the size of the hardware and increasing the speed of data collection.

“The current sensor is large and heavy since HADAR algorithms require many colors of invisible infrared radiation,” Bao said. “To apply it to self-driving cars or robots, we need to bring down the size and price while also making the cameras faster. The current sensor takes around one second to create one image, but for autonomous cars we need around 30 to 60 hertz frame rate, or frames per second.”

HADAR TeX vision’s initial applications are automated vehicles and robots that interact with humans in complex environments. The technology could be further developed for agriculture, defense, geosciences, health care and wildlife monitoring applications.

Global Planning for Contact-Rich Manipulation via Local Smoothing of Quasi-Dynamic Contact Models

by Tao Pang, H. J. Terry Suh, Lujie Yang, Russ Tedrake in IEEE Transactions on Robotics

Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers out and lift that box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box.

Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To the robot, each spot where the box could touch any point on the carrier’s fingers, arms, and torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable.

Now MIT researchers found a way to simplify this process, known as contact-rich manipulation planning. They use an AI technique called smoothing, which summarizes many contact events into a smaller number of decisions, to enable even a simple algorithm to quickly identify an effective manipulation plan for the robot.

While still in its early days, this method could potentially enable factories to use smaller, mobile robots that can manipulate objects with their entire arms or bodies, rather than large robotic arms that can only grasp using fingertips. This may help reduce energy consumption and drive down costs. In addition, this technique could be useful in robots sent on exploration missions to Mars or other solar system bodies, since they could adapt to the environment quickly using only an onboard computer.

This model can generate effective plans in about a minute using a standard laptop. Here, a robot attempts to rotate a bucket 180 degrees.

“Rather than thinking about this as a black-box system, if we can leverage the structure of these kinds of robotic systems using models, there is an opportunity to accelerate the whole procedure of trying to make these decisions and come up with contact-rich plans,” says H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this technique. Joining Suh on the paper are co-lead author Tao Pang, along with Lujie Yang and Russ Tedrake.

Reinforcement learning is a machine-learning technique where an agent, like a robot, learns to complete a task through trial and error with a reward for getting closer to a goal. Researchers say this type of learning takes a black-box approach because the system must learn everything about the world through trial and error.

It has been used effectively for contact-rich manipulation planning, where the robot seeks to learn the best way to move an object in a specified manner. But because there may be billions of potential contact points that a robot must reason about when determining how to use its fingers, hands, arms, and body to interact with an object, this trial-and-error approach requires a great deal of computation.

“Reinforcement learning may need to go through millions of years in simulation time to actually be able to learn a policy,” Suh adds.

On the other hand, if researchers specifically design a physics-based model using their knowledge of the system and the task they want the robot to accomplish, that model incorporates structure about this world that makes it more efficient. Yet physics-based approaches aren’t as effective as reinforcement learning when it comes to contact-rich manipulation planning — Suh and Pang wondered why. They conducted a detailed analysis and found that a technique known as smoothing enables reinforcement learning to perform so well.

Many of the decisions a robot could make when determining how to manipulate an object aren’t important in the grand scheme of things. For instance, each infinitesimal adjustment of one finger, whether or not it results in contact with the object, doesn’t matter very much. Smoothing averages away many of those unimportant, intermediate decisions, leaving a few important ones. Reinforcement learning performs smoothing implicitly by trying many contact points and then computing a weighted average of the results. Drawing on this insight, the MIT researchers designed a simple model that performs a similar type of smoothing, enabling it to focus on core robot-object interactions and predict long-term behavior. They showed that this approach could be just as effective as reinforcement learning at generating complex plans.
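A one-dimensional toy example makes the smoothing idea concrete: a discontinuous “does the finger move the object?” function has a useless gradient just before contact, but its average over small random perturbations does not. The sketch below illustrates only this general principle, not the quasi-dynamic contact model used in the paper.

```python
# Toy sketch of randomized smoothing of a contact-like function. The raw model
# gives zero gradient just before contact; the smoothed model does not.
import numpy as np

def object_displacement(finger_pos, obj_pos=0.0):
    """Discontinuous contact model: the object only moves once the finger reaches it."""
    return max(0.0, finger_pos - obj_pos)

# Smoothing: average the contact model over small random perturbations of the decision,
# reusing the same samples for both evaluations (common random numbers).
sigma, n_samples = 0.05, 20000
w = sigma * np.random.default_rng(0).standard_normal(n_samples)

def smoothed(f, x):
    return np.mean([f(x + wi) for wi in w])   # Monte-Carlo estimate of E[f(x + w)]

x, eps = -0.02, 1e-3                          # finger slightly short of the object
raw_grad = (object_displacement(x + eps) - object_displacement(x - eps)) / (2 * eps)
smooth_grad = (smoothed(object_displacement, x + eps) - smoothed(object_displacement, x - eps)) / (2 * eps)
print(f"raw gradient:      {raw_grad:.3f}   (zero: not yet in contact, no signal to plan with)")
print(f"smoothed gradient: {smooth_grad:.3f}   (nonzero: push the finger a bit further)")
```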

“If you know a bit more about your problem, you can design more efficient algorithms,” Pang says.

Even though smoothing greatly simplifies the decisions, searching through the remaining decisions can still be a difficult problem. So, the researchers combined their model with an algorithm that can rapidly and efficiently search through all possible decisions the robot could make. With this combination, the computation time was cut down to about a minute on a standard laptop. They first tested their approach in simulations where robotic hands were given tasks like moving a pen to a desired configuration, opening a door, or picking up a plate. In each instance, their model-based approach achieved the same performance as reinforcement learning, but in a fraction of the time. They saw similar results when they tested their model in hardware on real robotic arms.

“The same ideas that enable whole-body manipulation also work for planning with dexterous, human-like hands. Previously, most researchers said that reinforcement learning was the only approach that scaled to dexterous hands, but Terry and Tao showed that by taking this key idea of (randomized) smoothing from reinforcement learning, they can make more traditional planning methods work extremely well, too,” Tedrake says.

However, the model they developed relies on a simpler approximation of the real world, so it cannot handle very dynamic motions, such as objects falling. While effective for slower manipulation tasks, their approach cannot create a plan that would enable a robot to toss a can into a trash bin, for instance. In the future, the researchers plan to enhance their technique so it could tackle these highly dynamic motions.

Desktop fabrication of monolithic soft robotic devices with embedded fluidic control circuits

by Yichen Zhai, Albert De Boer, Jiayao Yan, Benjamin Shih, Martin Faber, Joshua Speros, Rohini Gupta, Michael T. Tolley in Science Robotics

The new soft robotic gripper is not only 3D printed in one print, it also doesn’t need any electronics to work.

The device was developed by a team of roboticists at the University of California San Diego, in collaboration with researchers at the BASF corporation. The researchers wanted to design a soft gripper that would be ready to use right as it comes off the 3D printer, equipped with built-in gravity and touch sensors. As a result, the gripper can pick up, hold, and release objects. No such gripper existed before this work.

“We designed functions so that a series of valves would allow the gripper to both grip on contact and release at the right time,” said Yichen Zhai, a postdoctoral researcher in the Bioinspired Robotics and Design Lab at the University of California San Diego and the leading author of the paper. “It’s the first time such a gripper can both grip and release. All you have to do is turn the gripper horizontally. This triggers a change in the airflow in the valves, making the two fingers of the gripper release.”

This fluidic logic allows the robot to remember when it has grasped an object and is holding on to it. When it detects the weight of the object pushing to the side, as it is rotating to the horizontal, it releases the object.

Soft robotics holds the promise of allowing robots to interact safely with humans and delicate objects. This gripper can be mounted on a robotic arm for industrial manufacturing applications, food production and the handling of fruits and vegetables. It can also be mounted onto a robot for research and exploration tasks. In addition, it can function untethered, with a bottle of high-pressure gas as its only power source. Most 3D-printed soft robots have a certain degree of stiffness; contain a large number of leaks when they come off the printer; and need a fair amount of processing and assembly after printing in order to be usable.

The team overcame these obstacles by developing a new 3D printing method, which involves the printer nozzle tracing a continuous path through the entire pattern of each layer printed.

“It’s like drawing a picture without ever lifting the pencil off the page,” said Michael T. Tolley, the senior author on the paper and an associate professor in the UC San Diego Jacobs School of Engineering.

This method reduces the likelihood of leaks and defects in the printed piece, which are very common when printing with soft materials. The new method also allows for printing of thin walls, down to 0.5 millimeters in thickness. The thinner walls and complex, curved shapes allow for a higher range of deformation, resulting in a softer structure overall. Researchers based the method on the Eulerian path, which in graph theory is a trail in a graph that touches every edge of that graph once and once only.
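For the graph-theory-minded, the “never lift the pencil” property corresponds to finding an Eulerian path, which Hierholzer’s algorithm does efficiently. The sketch below runs on a made-up toy graph rather than an actual printed layer, and the print planner described in the paper of course handles many additional constraints.

```python
# Toy sketch of the graph idea behind continuous-path printing: Hierholzer's
# algorithm finds a trail that traverses every edge of a connected graph
# exactly once (an Eulerian path). The graph below is a made-up example,
# not an actual printed layer.
from collections import defaultdict

def eulerian_path(edges):
    adj, degree = defaultdict(list), defaultdict(int)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
        degree[u] += 1
        degree[v] += 1
    odd = [v for v in degree if degree[v] % 2 == 1]
    if len(odd) not in (0, 2):
        raise ValueError("No Eulerian path: more than two odd-degree vertices")
    stack, path = [odd[0] if odd else next(iter(adj))], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)     # consume the edge in both directions
            stack.append(u)
        else:
            path.append(stack.pop())
    return path[::-1]

# A small layer "pattern" as a graph of segments between junction points.
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D"), ("D", "B")]
print(eulerian_path(edges))      # one continuous pass over every segment
```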

“When we followed these rules, we were able to consistently print functional pneumatic soft robots with embedded control circuits,” said Tolley.

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
