Paradigm

RT/ Self-learning robots adapt easily to changing circumstances

Robotics biweekly vol.31, 4th May — 18th May

TL;DR

  • Researchers from AMOLF’s Soft Robotic Matter group have shown that a group of small autonomous, self-learning robots can adapt easily to changing circumstances. They connected these simple robots in a line, after which each individual robot taught itself to move forward as quickly as possible. The results were published today in the scientific journal PNAS.
  • A team of researchers at Yale University’s Department of Mechanical Engineering and Materials Science has developed a robot hand that employs a caging mechanism. In their paper published in the journal Science Robotics, the group describes their research into applying a caging mechanism to robot hands and how well their demonstration models worked.
  • A group of researchers from Osaka University developed a quadruped robot platform that can reproduce the neuromuscular dynamics of animals, discovering that a steady gait and experimental behaviors of walking cats emerged from the reflex circuit in walking experiments on this robot. Their research results were published in Frontiers in Neurorobotics.
  • MIT researchers have developed an algorithm that coordinates the performance of robot teams for missions like mapping or search-and-rescue in complex, unpredictable environments.
  • Robots are becoming ever more present in our lives, even if we don’t always notice them. New research shows that when a boxy motorized hospital robot can talk, people find it funny and engaging. And that may help people be more willing to accept new technologies, like robots, in their everyday lives.
  • Multiply Labs is using robotic manufacturing platforms to create customized drug capsules for pharmaceutical companies and is expanding in cell therapy production.
  • NASA’s Ingenuity Mars Helicopter completed its fifth flight with a one-way journey from Wright Brothers Field to a new airfield 423 feet (129 meters) to the south on May 7, 2021.
  • A team led by Assistant Professor Benjamin Tee from the National University of Singapore has developed a smart material known as AiFoam that could give machines a human-like sense of touch, to better judge human intentions and respond to changes in the environment.
  • The 2021 Computer-Human Interaction conference (CHI) took place last week, and amongst the stereo smelling and tooth control were some incredibly creative robotics projects, like this “HairTouch” system that uses robotic manipulation to achieve a variety of haptic sensations using hair.
  • A cool thing from CHI: a “Pneumatic Raspberry Pi for Soft Robotics.”
  • Check out robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Researches

Continuous learning of emergent behavior in robotic matter

by Giorgio Oliveri et al. in PNAS

Researchers from AMOLF’s Soft Robotic Matter group have shown that a group of small autonomous, self-learning robots can adapt easily to changing circumstances. They connected these simple robots in a line, after which each individual robot taught itself to move forward as quickly as possible.

Robots are ingenious devices that can do an awful lot. There are robots that can dance and walk up and down stairs, and swarms of drones that can independently fly in a formation, just to name a few. However, all of those robots are programmed to a considerable extent — different situations or patterns have been planted in their brain in advance, they are centrally controlled, or a complex computer network teaches them behavior through machine learning. Bas Overvelde, Principal Investigator of the Soft Robotic Matter group at AMOLF, wanted to go back to the basics: a self-learning robot that is as simple as possible. “Ultimately, we want to be able to use self-learning systems constructed from simple building blocks, which for example only consist of a material like a polymer. We would also refer to these as robotic materials.”

The researchers succeeded in getting very simple, interlinked robotic carts that move on a track to learn how they could move as fast as possible in a certain direction. The carts did this without being programmed with a route or knowing what the other robotic carts were doing. “This is a new way of thinking in the design of self-learning robots. Unlike most traditional, programmed robots, this kind of simple self-learning robot does not require any complex models to enable it to adapt to a strongly changing environment,” explains Overvelde. “In the future, this could have an application in soft robotics, such as robotic hands that learn how different objects can be picked up or robots that automatically adapt their behavior after incurring damage.”

The self-learning system consists of several linked building blocks of a few centimeters in size, the individual robots. These robots consist of a microcontroller (a minicomputer), a motion sensor, a pump that pumps air into a bellows and a needle to let the air out. This combination enables the robot to breathe, as it were. If you link a second robot via the first robot’s bellows, they push each other away. That is what enables the entire robotic train to move. “We wanted to keep the robots as simple as possible, which is why we chose bellows and air. Many soft robots use this method,” says Ph.D. student Luuk van Laake.

The only thing that the researchers do in advance is to tell each robot a simple set of rules with a few lines of computer code (a short algorithm): switch the pump on and off every few seconds — this is called the cycle — and then try to move in a certain direction as quickly as possible. The chip on the robot continuously measures the speed. Every few cycles, the robot makes small adjustments to when the pump is switched on and determines whether these adjustments move the robotic train forward faster. Therefore, each robotic cart continuously conducts small experiments.
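
In essence, each robot runs a tiny hill-climbing loop: perturb the pump timing, keep the change if the measured speed improves, revert otherwise. A minimal Python sketch of that idea (the function names, parameters, and step sizes are illustrative, not taken from the paper):

```python
import random

def learn_pump_timing(measure_speed, set_pump_phase, n_cycles=1000,
                      initial_phase=0.0, step=0.05):
    """Hill-climbing sketch of one robot tuning when its pump switches on.

    measure_speed():   returns forward speed averaged over recent cycles
    set_pump_phase(p): sets the moment in the cycle at which the pump turns on
    """
    phase = initial_phase
    set_pump_phase(phase)
    best_speed = measure_speed()
    for _ in range(n_cycles):
        # Small experiment: nudge the pump's on-time within the cycle.
        candidate = phase + random.uniform(-step, step)
        set_pump_phase(candidate)
        speed = measure_speed()
        if speed > best_speed:   # keep adjustments that move the train faster
            phase, best_speed = candidate, speed
        else:                    # otherwise revert to the previous timing
            set_pump_phase(phase)
    return phase
```

Because each cart optimizes only its own measured speed, no communication or central model is needed; the coordinated forward motion of the train emerges from many such local experiments.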

If you allow two or more robots to push and pull each other in this way, the train will move in a single direction sooner or later. Consequently, the robots learn that this is the better setting for their pump without the need to communicate and without precise programming on how to move forwards. The system slowly optimizes itself. The videos published with the article show how the train slowly but surely moves over a circular trajectory.

A Reciprocal Excitatory Reflex Between Extensors Reproduces the Prolongation of Stance Phase in Walking Cats: Analysis on a Robotic Platform

by Toyoaki Tanikawa et al. in Frontiers in Neurorobotics

A group of researchers from Osaka University developed a quadruped robot platform that can reproduce the neuromuscular dynamics of animals, discovering that a steady gait and experimental behaviors of walking cats emerged from the reflex circuit in walking experiments on this robot.

It was long thought that a steady gait in animals is generated by complex nerve systems in the brain and spinal cord; however, recent research shows that a steady gait can be produced by the reflex circuit alone. By reproducing the motor control of cats with robots and computer simulations, the scientists identified a candidate reflex circuit that generates their steady walking motion.

Because animal experiments are strictly controlled and restricted for animal-protection reasons, animal locomotion is difficult to study directly. As a result, it is still unknown how the reflex circuits identified in prior research, which are responsible for animal locomotion, are integrated in the animal body.

Toyoaki Tanikawa and his supervisors, Assistant Professor Yoichi Masuda and Professor Masato Ishikawa, developed a four-legged robot that enables the reproduction of animal motor control using computers. This quadruped robot, which comprises highly back-drivable legs to reproduce the flexibility of animals and torque-controllable motors, can reproduce the muscle characteristics of animals. Thus, various experiments can be conducted using this robot instead of the animals themselves.

By searching for the reflex circuit that contributes to the generation of steady walking in cats through robotic experiments, the researchers found a simple reflex circuit that could produce leg trajectories and a steady gait pattern, which they named the “reciprocal excitatory reflex between hip and knee extensors.”

In this study, the researchers found that:

  • The robot generated steady walking motions by simply reproducing the reciprocal circuit in each leg of the robot.
  • The robot’s gait became unstable when the reciprocal circuit was cut off.
  • When the mutual excitatory circuit was stimulated, the circuit produced a phenomenon called “prolongation of the stance phase.” This result suggests that this circuit is an important component responsible for walking in cats.
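
The second and third findings can be illustrated with a toy model of the mutually excitatory pair: each extensor’s activation excites the other on top of load feedback, so activity sustains itself while the leg bears load and decays into swing once the leg unloads. All gains, thresholds, and step counts below are illustrative assumptions, not values from the paper:

```python
def stance_duration(stim=0.0, gain=0.9, threshold=0.05,
                    unload_at=20, max_steps=200):
    """Steps until the mutually excitatory extensor pair shuts off (toy model).

    Hip and knee extensor activations each excite the other (scaled by
    `gain`) on top of load feedback and an optional tonic stimulation
    `stim`. The leg unloads at step `unload_at`; swing begins once both
    activations fall below `threshold`.
    """
    hip = knee = 1.0
    for t in range(max_steps):
        load = 1.0 if t < unload_at else 0.0
        hip, knee = gain * knee + load + stim, gain * hip + load + stim
        if hip < threshold and knee < threshold:
            return t  # both extensors silent: stance phase ends
    return max_steps
```

In this sketch, adding a small stimulation to the circuit keeps the loop active longer after the leg unloads, mimicking the prolongation of the stance phase observed on the robot.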

This group’s research results will benefit both the biology and robotics fields. In addition to bringing new knowledge to biology, if robotic animals could serve as a replacement for real animals in the future, it will give more scientists the chance to study the mechanisms of animal locomotion under various experimental conditions. Approximating a robot’s structure to that of an animal will lead to the development of fundamental technologies for making robots that move and maneuver as effectively as animals.

Co-author Yoichi Masuda says, “Gaining knowledge about animals without using experimental animals is also significant for the humans that live with them. Further combination of robotics and biology through the creation of robots that mimic the structures of animals and their locomotion could become the first step towards understanding the principles underlying the behaviors of animals and humans.”

Complex manipulation with a simple robotic hand through contact breaking and caging

by Walter G. Bircher et al. in Science Robotics

A team of researchers at Yale University’s Department of Mechanical Engineering and Materials Science has developed a robot hand that employs a caging mechanism. In their paper published in the journal Science Robotics, the group describes their research into applying a caging mechanism to robot hands and how well their demonstration models worked.

As the researchers note, most robot hands do their work by manipulating objects with their fingertips. This approach allows for a certain degree of dexterity but is still far from that demonstrated by the human hand. In this new effort, the team at Yale noted that one of the factors that make the human hand able to handle objects with such dexterity is the use of the palm in conjunction with the fingers. They refer to such manipulations as caging — in which fingers on both sides of an object make use of a palm to form a cage of sorts when grasping an object.

To add caging to a robot’s capabilities, the researchers built a hand with two fingers situated opposite two other fingers; all of the fingers featured mid-finger knuckle joints that allowed them to bend around an object. The base of the hand served as a palm. The overall design is highly reminiscent of the clam grapplers used by loggers, but with much more dexterity. The fingers are moved by six servo motors combined with wheels and pulleys.

The researchers tested the capabilities of the robot with tasks that grew increasingly difficult. The first involved using three cylinders and a cube to test the ability of the hand to hold and change the position of an object it was holding. Another involved changing the orientation of a held object. They also tested the ability of the hand to grasp different objects in the traditional way, by simply squeezing gently and lifting. They then moved on to more difficult tasks, such as grasping a mustard bottle, spinning it upside down and squeezing it to eject a small amount of the condiment. They also tested the ability of the hand to transition from a power grasp to a pinch grasp on an orange without losing hold of it. More testing showed the hand was able to juggle Baoding balls.

The researchers conclude by suggesting their design opens the door to new kinds of robotic hands that will provide more capabilities than those now in use.

Non-Monotone Energy-Aware Information Gathering for Heterogeneous Robot Teams

by Xiaoyi Cai, Jonathan How, Brent Schlotfeldt and George J. Pappas

Consider a search-and-rescue mission to find a hiker lost in the woods. Rescuers might want to deploy a squad of wheeled robots to roam the forest, perhaps with the aid of drones scouring the scene from above. The benefits of a robot team are clear. But orchestrating that team is no simple matter. How to ensure the robots aren’t duplicating each other’s efforts or wasting energy on a convoluted search trajectory? MIT researchers have designed an algorithm to ensure the fruitful cooperation of information-gathering robot teams. Their approach relies on balancing a tradeoff between data collected and energy expended — which eliminates the chance that a robot might execute a wasteful maneuver to gain just a smidgeon of information.

The researchers say this assurance is vital for robot teams’ success in complex, unpredictable environments. “Our method provides comfort, because we know it will not fail, thanks to the algorithm’s worst-case performance,” says Xiaoyi Cai, a PhD student in MIT’s Department of Aeronautics and Astronautics (AeroAstro).

The research will be presented at the IEEE International Conference on Robotics and Automation in May.

Robot teams have often relied on one overarching rule for gathering information: The more the merrier. “The assumption has been that it never hurts to collect more information,” says Cai. “If there’s a certain battery life, let’s just use it all to gain as much as possible.” This objective is often executed sequentially — each robot evaluates the situation and plans its trajectory, one after another. It’s a straightforward procedure, and it generally works well when information is the sole objective. But problems arise when energy efficiency becomes a factor.

Cai says the benefits of gathering additional information often diminish over time. For example, if you already have 99 pictures of a forest, it might not be worth sending a robot on a miles-long quest to snap the 100th. “We want to be cognizant of the tradeoff between information and energy,” says Cai. “It’s not always good to have more robots moving around. It can actually be worse when you factor in the energy cost.”

The researchers developed a robot team planning algorithm that optimizes the balance between energy and information. The algorithm’s “objective function,” which determines the value of a robot’s proposed task, accounts for the diminishing benefits of gathering additional information and the rising energy cost. Unlike prior planning methods, it doesn’t just assign tasks to the robots sequentially. “It’s more of a collaborative effort,” says Cai. “The robots come up with the team plan themselves.”

Cai’s method, called Distributed Local Search, is an iterative approach that improves the team’s performance by adding or removing individual robot’s trajectories from the group’s overall plan. First, each robot independently generates a set of potential trajectories it might pursue. Next, each robot proposes its trajectories to the rest of the team. Then the algorithm accepts or rejects each individual’s proposal, depending on whether it increases or decreases the team’s objective function. “We allow the robots to plan their trajectories on their own,” says Cai. “Only when they need to come up with the team plan, we let them negotiate. So, it’s a rather distributed computation.”
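
A compact sketch of that accept/reject negotiation, with the team plan improved one robot at a time. The data structures and control flow are illustrative assumptions, not the authors’ implementation, and the objective is assumed to already encode information gain minus energy cost:

```python
def distributed_local_search(robots, objective, max_rounds=50):
    """Iteratively add/remove individual trajectories to improve the team plan.

    robots:    dict mapping robot id -> list of candidate trajectories
    objective: function(plan) -> float, scoring a plan that maps each
               robot id to a chosen trajectory (or None for "stay put")
    """
    plan = {rid: None for rid in robots}  # start with no robot moving
    for _ in range(max_rounds):
        improved = False
        for rid, candidates in robots.items():
            best, best_val = plan[rid], objective(plan)
            # Each robot proposes its own trajectories (or dropping out);
            # a proposal is accepted only if it raises the team objective.
            for traj in candidates + [None]:
                trial = dict(plan)
                trial[rid] = traj
                val = objective(trial)
                if val > best_val:
                    best, best_val, improved = traj, val, True
            plan[rid] = best
        if not improved:  # local optimum: no single change helps
            break
    return plan
```

With a toy objective such as 2·(cells covered) minus (energy spent), a robot whose trajectory adds little new coverage relative to its energy cost is rejected, which is exactly the wasteful-maneuver case the method is designed to rule out.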

Distributed Local Search proved its mettle in computer simulations. The researchers ran their algorithm against competing ones in coordinating a simulated team of 10 robots. While Distributed Local Search took slightly more computation time, it guaranteed successful completion of the robots’ mission, in part by ensuring that no team member got mired in a wasteful expedition for minimal information. “It’s a more expensive method,” says Cai. “But we gain performance.”

The advance could one day help robot teams solve real-world information gathering problems where energy is a finite resource, according to Geoff Hollinger, a roboticist at Oregon State University, who was not involved with the research. “These techniques are applicable where the robot team needs to trade off between sensing quality and energy expenditure. That would include aerial surveillance and ocean monitoring.”

Cai also points to potential applications in mapping and search-and-rescue — activities that rely on efficient data collection. “Improving this underlying capability of information gathering will be quite impactful,” he says. The researchers next plan to test their algorithm on robot teams in the lab, including a mix of drones and wheeled robots.

Building robots to expand access to cell therapies

Alumni-founded Multiply Labs uses an automated manufacturing platform to produce advanced treatments at scale

Over the last two years, Multiply Labs has helped pharmaceutical companies produce biologic drugs with its robotic manufacturing platform. The robots can work around the clock, precisely formulating small batches of drugs to help companies run clinical trials more quickly. Now Multiply Labs, which was founded by Fred Parietti PhD ’16 and former visiting PhD at MIT Alice Melocchi, is hoping to bring the speed and precision of its robots to a new type of advanced treatment. In a recently announced project, Multiply Labs is developing a new robotic manufacturing platform to ease bottlenecks in the creation of cell therapies. These therapies have proven to be a powerful tool in the fight against cancer, but their production is incredibly labor intensive, contributing to their high cost. CAR-T cell therapy, for example, requires scientists to extract blood from a patient, isolate immune cells, genetically engineer those cells, grow the new cells, and inject them back into the patient. In many cases, each of those steps must be repeated for each patient.

Multiply Labs is attempting to automate many processes that can currently only be done by highly trained scientists, reducing the potential for human error. The platform will also perform some of the most time-consuming tasks of cell therapy production in parallel. For instance, the company’s system will contain multiple bioreactors, which are used to grow the genetically modified cells that will be injected back into the patient. Some labs today only use one bioreactor in each clean room because of the specific environmental conditions that have to be met to optimize cell growth. By running multiple reactors simultaneously in a space about a quarter of the size of a basketball court, the company believes it can multiply the throughput of cell therapy production.

Multiply Labs has partnered with global life sciences company Cytiva, which provides cell therapy equipment and services, as well as researchers at the University of California San Francisco to bring the platform to market.

Multiply Labs’ efforts come at a time when demand for cell therapy treatment is expected to explode: There are currently more than 1,000 clinical trials underway to explore the treatment’s potential in a range of diseases. In the few areas where cell therapies are already approved, they have helped cancer patients when other treatment options had failed.

“These [cell therapy] treatments are needed by millions of people, but only dozens of them can be administered by many centers,” Parietti says. “The real potential we see is enabling pharmaceutical companies to get these treatments approved and manufactured quicker so they can scale to hundreds of thousands — or millions — of patients.”

Multiply Labs’ move into cell therapy is just the latest pivot for the company. The original idea for the startup came from Melocchi, who was a visiting PhD candidate in MIT’s chemical engineering department in 2013 and 2014. Melocchi had been creating drugs by hand in the MIT-Novartis Center for Continuous Manufacturing when she toured Parietti’s space at MIT. Parietti was building robotic limbs for factory workers and people with disabilities at the time, and his workspace was littered with robotic appendages and 3-D printers. Melocchi saw the machines as a way to make personalized drug capsules.

Parietti developed the first robotic prototype in the kitchen of his Cambridge apartment, and the founders received early funding from the MIT Sandbox Innovation Fund Program.

After going through the Y Combinator startup accelerator, the founders realized their biggest market would be pharmaceutical companies running clinical trials. Early trials often involve testing drugs of different potencies.

“Every clinical trial is essentially personalized, because drug developers don’t know the right dosage,” Parietti says.

Today Multiply Labs’ robotic clusters are being deployed on the production floors of leading pharmaceutical companies. The cloud-based platforms can produce 30,000 drug capsules a day and are modular, so companies can purchase as many systems as they need and run them together. Each system is contained in 15 square feet.

“Our goal is to be the gold standard for the manufacturing of individualized drugs,” Parietti says. “We believe the future of medicine is going to be individualized drugs made on demand for single patients, and the only way to make those is with robots.”

Multiply Labs robots handle each step of the drug formulation process.

The move to cell therapy comes after Parietti’s small team of mostly MIT-trained roboticists and engineers spent the last two years learning about cell therapy production separately from its drug capsule work. Earlier this month, the company raised $20 million and is expecting to triple its team.

Multiply Labs is already working with Cytiva to incorporate the company’s bioreactors into its platform.

“[Multiply Labs’] automation has broad implications for the industry that include expanding patient access to existing treatments and accelerating the next generation of treatments,” says Cytiva’s Parker Donner, the company’s head of business development for cell and gene therapy.

Multiply Labs aims to ship a demo to a cell therapy manufacturing facility at UCSF for clinical validation in the next nine months.

“It really is a great adventure for someone like me, a physician-scientist, to interact with mechanical engineers and see how they think and solve problems,” says Jonathan Esensten, an assistant adjunct professor at UCSF whose research group is being sponsored by Multiply Labs for the project. “I think they have complementary ways of approaching problems compared to my team, and I think it’s going to lead to great things. I’m hopeful we’ll build technologies that push this field forward and bend the cost curve to allow us to do things better, faster, and cheaper. That’s what we need if these really exciting therapies are going to be made widely available.”

Social domestication of service robots: The secret lives of Automated Guided Vehicles (AGVs) at a Norwegian hospital

by Roger A. Søraa et al. in International Journal of Human-Computer Studies

Robots are becoming ever more present in our lives, even if we don’t always notice them. New research shows that when a boxy motorized hospital robot can talk, people find it funny and engaging. And that may help people be more willing to accept new technologies, like robots, in their everyday lives.

No one expected the “Automated Guided Vehicles” at St. Olavs Hospital to have personalities. These motorized units, like long boxes on wheels, are merely meant to transport garbage, medical equipment or food from one part of the hospital to another. But because they have to interact with humans, by warning them to get out of the way, they have to talk.

Instead of using a generic Norwegian voice, the hospital decided to give the robots a voice with the strong, distinctive local dialect. Suddenly these stainless-steel boxes, rolling around the hospital to transport goods, had a personality.

Children with long-term illnesses who were attending school in the hospital during their treatments were given an assignment to find and identify them. One parent with a gravely ill child found solace in the robots’ endless, somewhat mindless battles as they unsuccessfully ordered inanimate objects — like walls — to get out of the way.

In a new study, NTNU researchers examine just how the robots came to be seen as friendly, animal-like creatures — and why that matters.

“We found that these robots, which were not created to be social robots, were actually given social qualities by the humans relating to them,” said Roger A. Søraa, a researcher at NTNU’s Department of Interdisciplinary Studies of Culture and Department of Neuromedicine and Movement Science, and first author of the new study. “We tend to anthropomorphize technologies like robots — giving them humanlike personalities — so we can put them into a context that we’re more comfortable with.”

Health care, especially in hospitals, involves lots of specialized skills and expertise. Whether it’s nurses administering to patients around the clock, or doctors providing critical surgery or other highly specialized care, there’s plenty of work for people to do in a hospital setting.

So why not leave some of the more mundane work — say, moving food from the cafeteria to different hospital units, or bringing clean linens to nursing stations — to an automated industrial robot?

“These are types of jobs that can often be dull, dirty or dangerous, or what we call 3D jobs,” Søraa said. “These are jobs that humans don’t necessarily want to do or like to do. And those are the jobs we are seeing becoming robotized or digitalized the fastest.”

That’s exactly what St. Olavs Hospital in Trondheim decided in 2006, when they brought 21 AGVs, made by Swisslog Healthcare, into the hospital to do some very basic lifting and moving work. St. Olavs was the first hospital in Scandinavia to adopt this technology.

Since then, the robots have been driving around the hospital’s halls, following pre-defined routes among different pick-up and delivery points, using lasers to navigate. They also have sensors that enable them to avoid people, obstacles and dangerous situations. They even take elevators — much to the annoyance of hospital staff — but more about this later.

And because they share the same areas as humans, they can say a few sentences when needed.

That, it turns out, is one of the key things that helps transform a metallic industrial box on wheels into a friendly animal-like creature with a personality.

Videos

NASA’s Ingenuity Mars Helicopter completed its fifth flight with a one-way journey from Wright Brothers Field to a new airfield 423 feet (129 meters) to the south on May 7, 2021.

A team led by Assistant Professor Benjamin Tee from the National University of Singapore has developed a smart material known as AiFoam that could give machines a human-like sense of touch, to better judge human intentions and respond to changes in the environment.

The 2021 Computer-Human Interaction conference (CHI) took place last week, and amongst the stereo smelling and tooth control were some incredibly creative robotics projects, like this “HairTouch” system that uses robotic manipulation to achieve a variety of haptic sensations using hair.

Here’s another cool thing from CHI: a “Pneumatic Raspberry Pi for Soft Robotics.”

Upcoming events

ICRA 2021 — May 30 – June 5, 2021 — [Online Event]
RoboCup 2021 — June 22–28, 2021 — [Online Event]
DARPA SubT Finals — September 21–23, 2021 — Louisville, KY, USA
WeRobot 2021 — September 23–25, 2021 — Coral Gables, FL, USA
IROS 2021 — September 27 – October 1, 2021 — [Online Event]
ROSCon 2021 — October 21–23, 2021 — New Orleans, LA, USA

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
