RT/ Giving soft robots feeling

Published in Paradigm · 25 min read · Jun 11, 2020

Robotics biweekly vol.6, 28th May — 11th June

TL;DR

  • In a pair of papers from MIT CSAIL, two teams enable better sense and perception for soft robotic grippers.
  • Dubbed HAMR-JR, a new microrobot is a half-scale version of the cockroach-inspired Harvard Ambulatory Microrobot or HAMR. About the size of a penny, HAMR-JR can perform almost all of the feats of its larger-scale predecessor, making it one of the most dexterous microrobots to date.
  • Roboticists have developed flexible feet that can help robots walk up to 40 percent faster on uneven terrain such as pebbles and wood chips. The work has applications for search-and-rescue missions as well as space exploration.
  • Researchers have developed a technology called ‘Artificial Chemist,’ which incorporates artificial intelligence and an automated system for performing chemical reactions to accelerate R&D and manufacturing of commercially desirable materials.
  • Scientists have made artificial cilia, or hair-like structures, that can bend into new shapes in response to a magnetic field, then return to their original shape when exposed to the proper light source.
  • Researchers have developed new software that can be integrated with existing hardware to enable people using robotic prosthetics or exoskeletons to walk in a safer, more natural manner on different types of terrain. The new framework incorporates computer vision into prosthetic leg control, and includes robust AI algorithms that allow the software to better account for uncertainty.
  • Artificial intelligence should be used to expand the role of chest X-ray imaging — using computed tomography, or CT — in diagnosing and assessing coronavirus infection so that it can be more than just a means of screening for signs of COVID-19 in a patient’s lungs, say researchers in a new report.
  • A landmark review of the role of artificial intelligence (AI) in the future of global health calls on the global health community to establish guidelines for development and deployment of new technologies and to develop a human-centered research agenda to facilitate equitable and ethical use of AI.
  • Next generation of soft robots inspired by a children’s toy.
  • Researchers train a robotic arm to make a fluffy omelette — from cracking eggs to plating up.
  • Robot dog hounds Thai shoppers to keep hands virus-free.
  • The next Mars rover launches next month; see some of the instruments on board below (video).
  • ICRA 2020, the world’s biggest virtual robotics conference, kicked off last Sunday with an all-star panel on a critical topic: “COVID-19: How Can Roboticists Help?” Watch other ICRA keynotes on IEEE.tv.
  • Check out robotics upcoming events (mostly virtual) below. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025. It is predicted that this market will hit the 100 billion U.S. dollar mark in 2020.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Research articles

Giving soft robots feeling

In a pair of papers from MIT CSAIL, two teams enable better sense and perception for soft robotic grippers

One of the hottest topics in robotics is the field of soft robots, which utilizes squishy and flexible materials rather than traditional rigid materials. But soft robots have been limited due to their lack of good sensing. A good robotic gripper needs to feel what it is touching (tactile sensing), and it needs to sense the positions of its fingers (proprioception). Such sensing has been missing from most soft robots.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.

“We wish to enable seeing the world by feeling the world. Soft robot hands have sensorized skins that allow them to pick up a range of objects, from delicate, such as potato chips, to heavy, such as milk bottles,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing.

One paper builds off last year’s research from MIT and Harvard University, where a team developed a soft and strong robotic gripper in the form of a cone-shaped origami structure. It collapses in on objects much like a Venus’ flytrap, to pick up items that are as much as 100 times its weight.

To get that newfound versatility and adaptability even closer to that of a human hand, a new team came up with a sensible addition: tactile sensors, made from latex “bladders” (balloons) connected to pressure transducers. The new sensors let the gripper not only pick up objects as delicate as potato chips, but also classify them — letting the robot better understand what it’s picking up, while also exhibiting that light touch.

When classifying objects, the sensors correctly identified 10 objects with over 90 percent accuracy, even when an object slipped out of grip.
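
For a concrete picture of how pressure readings like these can be turned into object labels, here is a minimal sketch assuming a handful of bladder channels and an off-the-shelf classifier; the sensor count, features, and classifier are illustrative assumptions, not the MIT team’s implementation.

```python
# Minimal sketch: classifying a grasped object from latex-bladder pressure readings.
# The sensor count, features, and classifier are illustrative assumptions only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

N_SENSORS = 4    # assumed number of bladder/pressure-transducer channels on the gripper
N_SAMPLES = 50   # pressure samples recorded over the course of one grasp

def grasp_features(trace):
    """Reduce an (N_SAMPLES, N_SENSORS) pressure trace to a compact feature vector."""
    return np.concatenate([trace.mean(axis=0),   # steady-state contact pressure per bladder
                           trace.max(axis=0),    # peak pressure (how hard the object pushes back)
                           trace.std(axis=0)])   # fluctuation, a rough cue for slip or compliance

# Stand-in training data: 10 object classes, 20 example grasps each.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 20)
X = np.array([grasp_features(rng.normal(loc=c, scale=0.3, size=(N_SAMPLES, N_SENSORS)))
              for c in labels])

clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
new_grasp = rng.normal(loc=7, scale=0.3, size=(N_SAMPLES, N_SENSORS))
print(clf.predict([grasp_features(new_grasp)]))   # -> predicted object class, e.g. [7]
```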

“Unlike many other soft tactile sensors, ours can be rapidly fabricated, retrofitted into grippers, and show sensitivity and reliability,” says MIT postdoc Josie Hughes, the lead author on a new paper about the sensors. “We hope they provide a new method of soft sensing that can be applied to a wide range of different applications in manufacturing settings, like packing and lifting.”

In a second paper, a group of researchers created a soft robotic finger called “GelFlex” that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body).

The gripper, which looks much like a two-finger cup gripper you might see at a soda station, uses a tendon-driven mechanism to actuate the fingers. When tested on metal objects of various shapes, the system had over 96 percent recognition accuracy.

“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” says Yu She, lead author on a new paper on GelFlex. “By constraining soft fingers with a flexible exoskeleton, and performing high-resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.”

Magic ball senses

The magic ball gripper is made from a soft origami structure, encased by a soft balloon. When a vacuum is applied to the balloon, the origami structure closes around the object, and the gripper deforms to its structure.

While this motion lets the gripper grasp a much wider range of objects than ever before, such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret, the greater intricacies of delicacy and understanding were still out of reach — until they added the sensors.

When the sensors experience force or strain, the internal pressure changes, and the team can measure this change in pressure to identify when it will feel that again.

In addition to the latex sensor, the team also developed an algorithm which uses feedback to let the gripper possess a human-like duality of being both strong and precise — and 80 percent of the tested objects were successfully grasped without damage.
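
As a rough illustration of what such a feedback loop can look like, the sketch below tightens the vacuum-driven gripper only until the bladder sensors report firm contact, and slows down after first touch; the thresholds, step sizes, and hardware interfaces are assumptions, not the algorithm from the paper.

```python
# Sketch of pressure-feedback grasping: increase the vacuum that closes the origami
# gripper until the bladder sensors report firm contact, slowing down after first touch.
# Thresholds and the sensor/pump interfaces are assumed, not taken from the paper.

CONTACT_RISE = 2.0   # kPa above baseline that counts as "touching the object"
FIRM_RISE = 6.0      # kPa above baseline that counts as "held securely"
COARSE_STEP = 1.0    # kPa of extra vacuum per cycle while closing freely
FINE_STEP = 0.2      # kPa per cycle once contact is detected, to avoid crushing

def grasp(read_pressures, set_vacuum, max_vacuum=40.0):
    baseline = read_pressures()
    vacuum = 0.0
    while vacuum < max_vacuum:
        rise = max(p - b for p, b in zip(read_pressures(), baseline))
        if rise >= FIRM_RISE:
            return True                      # secure grasp without over-squeezing
        vacuum += COARSE_STEP if rise < CONTACT_RISE else FINE_STEP
        set_vacuum(vacuum)
    return False                             # ran out of vacuum; object not held
```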

The team tested the gripper-sensors on a variety of household items, ranging from heavy bottles to small, delicate objects, including cans, apples, a toothbrush, a water bottle, and a bag of cookies.

Going forward, the team hopes to make the methodology scalable, using computational design and reconstruction methods to improve the resolution and coverage using this new sensor technology. Eventually, they imagine using the new sensors to create a fluidic sensing skin that shows scalability and sensitivity.

Hughes co-wrote the new paper with Rus; they will present it virtually at the 2020 International Conference on Robotics and Automation.

GelFlex

In the second paper, a CSAIL team looked at giving a soft robotic gripper more nuanced, human-like senses. Soft fingers allow a wide range of deformations, but to be used in a controlled way there must be rich tactile and proprioceptive sensing. The team used embedded cameras with wide-angle “fisheye” lenses that capture the finger’s deformations in great detail.

To create GelFlex, the team used silicone material to fabricate the soft and transparent finger, and put one camera near the fingertip and the other in the middle of the finger. Then, they painted reflective ink on the front and side surface of the finger, and added LED lights on the back. This allows the internal fish-eye camera to observe the status of the front and side surface of the finger.

The team trained neural networks to extract key information from the internal cameras for feedback. One neural net was trained to predict the bending angle of GelFlex, and the other was trained to estimate the shape and size of the objects being grabbed. The gripper could then pick up a variety of items such as a Rubik’s cube, a DVD case, or a block of aluminum.
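
As a bare-bones illustration of the first of those networks, the sketch below maps one internal camera frame to a bending angle; the architecture, image size, and training details are assumptions, not the GelFlex models from the paper.

```python
# Sketch: a small CNN that regresses the finger's bending angle from one internal
# camera frame. Architecture and image size are illustrative assumptions.
import torch
import torch.nn as nn

class BendingAngleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)          # single output: bending angle (degrees)

    def forward(self, frame):
        return self.head(self.features(frame))

model = BendingAngleNet()
frame = torch.rand(1, 3, 128, 128)            # stand-in for one fisheye camera image
angle = model(frame)
loss = nn.functional.mse_loss(angle, torch.tensor([[35.0]]))  # supervised by a measured angle
loss.backward()                               # a training step would follow with an optimizer
```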

During testing, the average positional error while gripping was less than 0.77 millimeter, which is better than that of a human finger. In a second set of tests, the gripper was challenged with grasping and recognizing cylinders and boxes of various sizes. Out of 80 trials, only three were classified incorrectly.

In the future, the team hopes to improve the proprioception and tactile sensing algorithms, and utilize vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending, which are challenging for common sensors, but should be attainable with embedded cameras.

Yu She co-wrote the GelFlex paper with MIT graduate student Sandra Q. Liu, Peiyu Yu of Tsinghua University, and MIT Professor Edward Adelson. They will present the paper virtually at the 2020 International Conference on Robotics and Automation.

Next-generation cockroach-inspired robot is small but mighty

This itsy-bitsy robot can’t climb up the waterspout yet, but it can run, jump, carry heavy payloads and turn on a dime. Dubbed HAMR-JR, this microrobot, developed by researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Harvard Wyss Institute for Biologically Inspired Engineering, is a half-scale version of the cockroach-inspired Harvard Ambulatory Microrobot or HAMR.

About the size of a penny, HAMR-JR can perform almost all of the feats of its larger-scale predecessor, making it one of the most dexterous microrobots to date.

“Most robots at this scale are pretty simple and only demonstrate basic mobility,” said Kaushik Jayaram, a former postdoctoral fellow at SEAS and Wyss and first author of the paper. “We have shown that you don’t have to compromise dexterity or control for size.”

Jayaram is currently an Assistant Professor at the University of Colorado, Boulder. The research was presented virtually at the International Conference on Robotics and Automation (ICRA 2020) this week.

One of the big questions going into this research was whether or not the pop-up manufacturing process used to build previous versions of HAMR and other microbots, including the RoboBee, could be used to build robots at multiple scales — from tiny surgical bots to large-scale industrial robots.

PC-MEMS (short for printed circuit microelectromechanical systems) is a fabrication process in which the robot’s components are etched into a 2D sheet and then popped out in its 3D structure. To build HAMR-JR, the researchers simply shrunk the 2D sheet design of the robot — along with the actuators and onboard circuitry — to recreate a smaller robot with all the same functionalities.

“The wonderful part about this exercise is that we did not have to change anything about the previous design,” said Jayaram. “We proved that this process can be applied to basically any device at a variety of sizes.”

HAMR-JR comes in at 2.25 centimeters in body length and weighs about 0.3 grams — a fraction of the weight of an actual penny. It can run about 14 body lengths per second, making it not only one of the smallest but also one of the fastest microrobots.

Scaling down does change some of the principles governing things like stride length and joint stiffness, so the researchers also developed a model that can predict locomotion metrics like running speeds, foot forces, and payload based on a target size. The model can then be used to design a system with the required specifications.
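
That model is not spelled out in the press materials, but a textbook isometric-scaling sketch gives the flavor of why shrinking works out (an illustration under crude assumptions, not the authors’ model): if every dimension shrinks by a factor s and stride frequency is limited by structural resonance, then

```latex
% Illustrative isometric scaling, not the HAMR-JR design model.
m \propto s^{3}, \qquad
\ell_{\mathrm{stride}} \propto s, \qquad
f_{\mathrm{stride}} \propto s^{-1}
\quad\Longrightarrow\quad
v = \ell_{\mathrm{stride}}\, f_{\mathrm{stride}} \propto s^{0}, \qquad
\frac{v}{L_{\mathrm{body}}} \propto s^{-1}.
```

On that crude estimate, a half-scale robot keeps roughly the same absolute speed but covers about twice as many body lengths per second, which is consistent with HAMR-JR ranking among the fastest microrobots relative to its size.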

“This new robot demonstrates that we have a good grasp on the theoretical and practical aspects of scaling down complex robots using our folding-based assembly approach,” said co-author Robert Wood, Charles River Professor of Engineering and Applied Sciences in SEAS and Core Faculty Member of the Wyss.

This research was co-authored by Jennifer Shum, Samantha Castellanos and E. Farrell Helbling. This research was supported by DARPA and the Wyss Institute.

These flexible feet help robots walk faster

Roboticists at the University of California San Diego have developed flexible feet that can help robots walk up to 40 percent faster on uneven terrain such as pebbles and wood chips. The work has applications for search-and-rescue missions as well as space exploration.

Says Emily Lathrop, the paper’s first author and a Ph.D. student at the Jacobs School of Engineering at UC San Diego:

“Robots need to be able to walk fast and efficiently on natural, uneven terrain so they can go everywhere humans can go, but maybe shouldn’t.”

The researchers will present their findings at the RoboSoft conference which takes place virtually May 15 to July 15, 2020.

“Usually, robots are only able to control motion at specific joints,” said Michael T. Tolley, a professor in the Department of Mechanical and Aerospace Engineering at UC San Diego and senior author of the paper. “In this work, we showed that a robot that can control the stiffness, and hence the shape, of its feet outperforms traditional designs and is able to adapt to a wide variety of terrains.”

The feet are flexible spheres made from a latex membrane filled with coffee grounds. Structures inspired by nature, such as plant roots, and by human-made solutions, such as piles driven into the ground to stabilize slopes, are embedded in the coffee grounds.

The feet allow robots to walk faster and grip better because of a mechanism called granular jamming that allows granular media, in this case the coffee grounds, to go back and forth between behaving like a solid and behaving like a liquid. When the feet hit the ground, they firm up, conforming to the ground underneath and providing solid footing. They then unjam and loosen up when transitioning between steps. The support structures help the flexible feet remain stiff while jammed.

It’s the first time that such feet have been tested on uneven terrain, like gravel and wood chips.

The feet were installed on a commercially available hexapod robot. Researchers designed and built an on-board system that can generate negative pressure to control the jamming of the feet, as well as positive pressure to unjam the feet between each step. As a result, the feet can be actively jammed, with a vacuum pump removing air from between the coffee grounds and stiffening the foot. But the feet also can be passively jammed, when the weight of the robot pushes the air out from between the coffee grounds inside, causing them to stiffen.
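
A toy sketch of how jamming might be synchronized with the gait cycle is shown below; the pump interface, timing values, and leg sequence are assumptions for illustration, not the UC San Diego controller.

```python
# Toy sketch: synchronize granular jamming with the gait cycle. Each foot is stiffened
# (vacuum applied) while it bears weight and vented before swinging forward.
# The Pump interface and timing values are illustrative assumptions.
import time

STANCE_TIME = 0.4   # seconds the foot stays planted (assumed)
SWING_TIME = 0.3    # seconds the foot is in the air (assumed)

class Pump:
    def apply_vacuum(self, foot):
        print(f"jamming {foot}: coffee grounds lock into a solid, conforming to the ground")
    def vent(self, foot):
        print(f"unjamming {foot}: grounds flow again so the foot stays compliant")

def step_cycle(foot, pump):
    pump.apply_vacuum(foot)   # active jamming during stance
    time.sleep(STANCE_TIME)
    pump.vent(foot)           # positive pressure loosens the grounds between steps
    time.sleep(SWING_TIME)

for leg in ("front-left", "middle-right", "rear-left"):   # one tripod of a hexapod
    step_cycle(leg, Pump())
```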

Researchers tested the robot walking on flat ground, wood chips and pebbles, with and without the feet. They found that passive jamming feet perform best on flat ground but active jamming feet do better on loose rocks. The feet also helped the robot’s legs grip the ground better, increasing its speed. The improvements were particularly significant when the robot walked up sloped, uneven terrain.

“The natural world is filled with challenging grounds for walking robots — slippery, rocky, and squishy substrates all make walking complicated,” said Nick Gravish, a professor in the UC San Diego Department of Mechanical and Aerospace Engineering and study coauthor. “Feet that can adapt to these different types of ground can help robots improve mobility.”

In a companion paper co-authored by Tolley and Gravish with Ph.D. student Shivan Chopra as first author, researchers quantified exactly how much improvement each foot generated. For example, compared to a fully rigid foot, the flexible foot reduced the depth of penetration into sand on impact by 62 percent and reduced the force required to pull the foot out by 98 percent.

Next steps include incorporating soft sensors on the bottom of the feet to allow an electronic control board to identify what kind of ground the robot is about to step on and whether the feet need to be jammed actively or passively.

Researchers will also keep working to improve design and control algorithms to make the feet more efficient.

Photothermally Reconfigurable Shape Memory Magnetic Cilia

by Jessica A.-C. Liu, Benjamin A. Evans, Joseph B. Tracy

Researchers have made artificial cilia, or hair-like structures, that can bend into new shapes in response to a magnetic field, then return to their original shape when exposed to the proper light source.

Stimulus‐responsive polymers are attractive for microactuators because they can be easily miniaturized and remotely actuated, enabling untethered operation. In this work, magnetic Fe microparticles are dispersed in a thermoplastic polyurethane shape memory polymer matrix and formed into artificial, magnetic cilia by solvent casting within the vertical magnetic field in the gap between two permanent magnets. Interactions of the magnetic moments of the microparticles, aligned by the applied magnetic field, drive self‐assembly of magnetic cilia along the field direction. The resulting magnetic cilia are reconfigurable using light and magnetic fields as remote stimuli. Temporary shapes obtained through combined magnetic actuation and photothermal heating can be locked by switching off the light and magnetic field. Subsequently turning on the light without the magnetic field drives recovery of the permanent shape. The permanent shape can also be reprogrammed after preparing the cilia by applying mechanical constraints and annealing at high temperature. Spatially controlled actuation is demonstrated by applying a mask for optical pattern transfer into the array of magnetic cilia. A theoretical model is developed for predicting the response of shape memory magnetic cilia and elucidates physical mechanisms behind observed phenomena, enabling the design and optimization of ciliary systems for specific applications.
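
The paper develops its own theoretical model; purely as intuition for the actuation mechanism, a minimal cantilever estimate (a sketch under assumed uniform magnetization, not the authors’ equations) balances the magnetic torque on a cilium against its elastic stiffness:

```latex
% Minimal sketch, not the paper's model: a cilium of length L, cross-sectional area A,
% second moment of area I, magnetization M along its axis, in a field B at angle psi.
\theta_{\mathrm{tip}} \;\approx\; \frac{M A B \sin\psi \; L^{2}}{2\, E(T)\, I}
```

Because the shape-memory polymer’s modulus E(T) drops sharply once photothermal heating takes it above its transition temperature, the same field bends heated cilia much further; switching off the light and field then locks that temporary shape, and re-illuminating without the field lets the permanent shape recover, as described in the abstract above.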

“This work expands the capabilities of magnetic cilia and our understanding of their behaviors, which has potential applications in soft robotics, including microrobotics,” says Joe Tracy, corresponding author of a paper on the work and a professor of materials science and engineering at NC State. “A key point of this work is that we’ve demonstrated shape memory magnetic cilia whose shape can be set, locked, unlocked and reconfigured. This property will be useful for enhanced and new applications.”

The finding builds on the team’s earlier research designing soft robots that could be controlled using magnets and light. However, there are significant departures from the previous work.

“The cilia are actuated by magnetic torques, which means the cilia rotate and align with the field from an inexpensive permanent magnet, instead of being pulled toward the magnet,” says Ben Evans, co-author of the paper and a professor of physics at Elon University. “Actuation of the soft robots in our earlier work relied on magnetic field gradients, which moved the robot by pulling it. The new approach offers another tool for designing soft robots.”

Artificial Chemist: An Autonomous Quantum Dot Synthesis Bot

by Robert W. Epps, Michael S. Bowen, Amanda A. Volk, Kameel Abdel-Latif, Suyong Han, Kristofer G. Reyes, Aram Amassian, Milad Abolhasani

Researchers have developed a technology called ‘Artificial Chemist,’ which incorporates artificial intelligence and an automated system for performing chemical reactions to accelerate R&D and manufacturing of commercially desirable materials.

The optimal synthesis of advanced nanomaterials with numerous reaction parameters, stages, and routes, poses one of the most complex challenges of modern colloidal science, and current strategies often fail to meet the demands of these combinatorially large systems. In response, an Artificial Chemist is presented: the integration of machine‐learning‐based experiment selection and high‐efficiency autonomous flow chemistry. With the self‐driving Artificial Chemist, made‐to‐measure inorganic perovskite quantum dots (QDs) in flow are autonomously synthesized, and their quantum yield and composition polydispersity at target bandgaps, spanning 1.9 to 2.9 eV, are simultaneously tuned. Utilizing the Artificial Chemist, eleven precision‐tailored QD synthesis compositions are obtained without any prior knowledge, within 30 h, using less than 210 mL of total starting QD solutions, and without user selection of experiments. Using the knowledge generated from these studies, the Artificial Chemist is pre‐trained to use a new batch of precursors and further accelerate the synthetic path discovery of QD compositions, by at least twofold. The knowledge‐transfer strategy further enhances the optoelectronic properties of the in‐flow synthesized QDs (within the same resources as the no‐prior‐knowledge experiments) and mitigates the issues of batch‐to‐batch precursor variability, resulting in QDs averaging within 1 meV from their target peak emission energy.
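
To make “machine-learning-based experiment selection” concrete, here is a schematic loop in the spirit of self-driving labs: fit a surrogate model to the reactions run so far, pick the next conditions optimistically, run them, and repeat. The objective function, parameter ranges, and acquisition rule below are placeholders, not the algorithm used in the paper.

```python
# Schematic self-driving-lab loop: surrogate model plus optimistic experiment selection.
# The objective function, parameter ranges, and acquisition rule are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def run_flow_reaction(params):
    """Stand-in for the autonomous flow reactor: returns a score for how close the
    synthesized quantum dots come to the target (here just a synthetic function)."""
    return -np.sum((params - 0.6) ** 2) + 0.01 * rng.normal()

X = rng.uniform(size=(5, 3))                        # a few seed experiments (3 normalized parameters)
y = np.array([run_flow_reaction(x) for x in X])

for _ in range(25):                                 # autonomous experiment-selection loop
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    candidates = rng.uniform(size=(500, 3))
    mean, std = surrogate.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mean + std)]      # upper-confidence-bound style pick
    X = np.vstack([X, x_next])
    y = np.append(y, run_flow_reaction(x_next))

print("best conditions found:", X[np.argmax(y)], "score:", y.max())
```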

Says Milad Abolhasani, corresponding author of a paper on the work and an assistant professor of chemical and biomolecular engineering at NC State:

“Artificial Chemist is a truly autonomous system that can intelligently navigate through the chemical universe. Currently, Artificial Chemist is designed for solution-processed materials — meaning it works for materials that can be made using liquid chemical precursors. Solution-processed materials include high-value materials such as quantum dots, metal/metal oxide nanoparticles, metal organic frameworks (MOFs), and so on.

The Artificial Chemist is similar to a self-driving car, but a self-driving car at least has a finite number of routes to choose from in order to reach its pre-selected destination. With Artificial Chemist, you give it a set of desired parameters, which are the properties you want the final material to have. Artificial Chemist has to figure out everything else, such as what the chemical precursors will be and what the synthetic route will be, while minimizing the consumption of those chemical precursors.

The end result is a fully autonomous materials development technology that not only helps you find the ideal solution-processed material more quickly than any techniques currently in use, but it does so using tiny amounts of chemical precursors. That significantly reduces waste and makes the materials development process much less expensive.”

How Might AI and Chest Imaging Help Unravel COVID-19’s Mysteries?

by Shinjini Kundu, Hesham Elhalawani, Judy W. Gichoya, Charles E. Kahn Jr.

Artificial intelligence should be used to expand the role of chest X-ray imaging — using computed tomography, or CT — in diagnosing and assessing coronavirus infection so that it can be more than just a means of screening for signs of COVID-19 in a patient’s lungs, say researchers in a new report.

In the study, published in the May issue of Radiology: Artificial Intelligence, the researchers write:

“AI’s power to generate models from large volumes of information — fusing molecular, clinical, epidemiological and imaging data — may accelerate solutions to detect, contain and treat COVID-19.”

Although CT chest imaging is not currently a routine method for diagnosing COVID-19 in patients, it has been helpful in excluding other possible causes for COVID-like symptoms, confirming a diagnosis made by another means or providing critical data for monitoring a patient’s progress in severe cases of the disease. The Johns Hopkins Medicine researchers believe this isn’t enough, making the case that there is “an untapped potential” for AI-enhanced imaging. They suggest the technology can be used for:

  • Risk stratification, the process of categorizing patients for the type of care they receive based on the predicted course of their COVID-19 infection.
  • Treatment monitoring to define the effectiveness of agents used to combat the disease.
  • Modeling how COVID-19 behaves, so that novel, customized therapies can be developed, tested and deployed.

For example, the researchers propose that “AI may help identify the immunological markers most associated with poor clinical course, which may yield new targets” for drugs that will direct the immune system against the SARS-CoV-2 virus that causes COVID-19.

Artificial intelligence and the future of global health

by Nina Schwalbe and Brian Wahl in The Lancet

A landmark review of the role of artificial intelligence (AI) in the future of global health calls on the global health community to establish guidelines for development and deployment of new technologies and to develop a human-centered research agenda to facilitate equitable and ethical use of AI.

Advances in information technology infrastructure and mobile computing power in many low and middle-income countries (LMICs) have raised hopes that AI could help to address challenges which are unique to the field of global health and accelerate the achievement of the health-related Sustainable Development Goals (SDGs) and Universal Health Coverage (UHC). However, the deployment of AI-enabled interventions must be exercised with care and caution for individuals and societies to benefit equally, especially in the current context of the digital tools and systems being rapidly deployed in response to the novel coronavirus disease 2019 (COVID-19).

“Especially during the COVID-19 emergency, we cannot ignore what we know about the importance of human-centered design and gender bias of algorithms,” said Schwalbe. “Thinking through how AI interventions will be adapted within the context of the health systems in which they are deployed must be part of every study.”

“This review marks an important point in our rapidly developing digital age at which to reflect on the impressive opportunities that AI may hold, but also consider what we are urgently missing to protect those most at risk — exciting developments but many are being rolled out without adequate evidence or appropriate safeguards,” said Dr. Naomi Lee, Senior Executive Editor at The Lancet.

According to Wahl and Schwalbe, artificial intelligence is already being used in high-resource settings to address COVID-19 response activities, including patient risk assessment and managing patient flow. They point out, however, that while artificial intelligence could support the COVID-19 response in resource-limited settings, there are currently few mechanisms to ensure its appropriate use in such settings.

As the field of AI is rapidly evolving in global health, and in light of the COVID-19 response, the review highlights the following recommendations:

  • Incorporate aspects of human-centered design into the development process, including starting from a needs-based rather than a tool-based approach;
  • Ensure rapid and equitable access to representative datasets;
  • Establish global systems for assessing and reporting efficacy and effectiveness of AI-driven interventions in global health;
  • Develop a research agenda that includes implementation and system-related questions on the deployment of new AI-driven interventions;
  • Develop and implement global regulatory, economic, and ethical standards and guidelines that safeguard the interests of LMICs.

Schwalbe and Wahl developed these recommendations through an extensive review of the peer-reviewed literature to help ensure that AI improves health in LMICs and contributes to the achievement of the SDGs and UHC, as well as to the COVID-19 response.

“In the eye of the COVID-19 storm, now more than ever we must be vigilant to apply regulatory, ethical, and data protection standards. We hold ourselves to ethical standards around proving interventions work before we roll them out at scale. Without this, we risk undermining the vulnerable populations we are best trying to support,” said Schwalbe.

The review was supported by Fondation Botnar, a Swiss-based foundation that champions the use of AI and digital technology to improve the health and wellbeing of children and young people in growing urban environments.

“We are proud to have supported this critical and timely review,” said Stefan Germann, CEO of Fondation Botnar. “In anticipation of the adoption of the new WHO Global Strategy on Digital Health later this year, and the rapid deployment of technologies in response to COVID-19, we need to raise the discussions on the human rights issues and necessary governance structures around data use and sharing, and the role of institutions such as the WHO in providing leadership.”

Environmental Context Prediction for Lower Limb Prostheses With Uncertainty Quantification

by Boxuan Zhong, Rafael Luiz da Silva, Minhan Li, He Huang, Edgar Lobaton

Researchers have developed new software that can be integrated with existing hardware to enable people using robotic prosthetics or exoskeletons to walk in a safer, more natural manner on different types of terrain. The new framework incorporates computer vision into prosthetic leg control, and includes robust artificial intelligence (AI) algorithms that allow the software to better account for uncertainty.

Reliable environmental context prediction is critical for wearable robots (e.g., prostheses and exoskeletons) to assist terrain-adaptive locomotion. The article proposes a novel vision-based context prediction framework for lower limb prostheses that simultaneously predicts the wearer’s environmental context over multiple forecast windows. By leveraging Bayesian neural networks (BNNs), the framework can quantify the uncertainty caused by different factors (e.g., observation noise, and insufficient or biased training) and produce calibrated predicted probabilities for online decision-making. The researchers compared two wearable camera locations (a pair of glasses and a lower limb device), independently and conjointly, and used the calibrated predicted probabilities for online decision-making and fusion. They also demonstrated how to interpret deep neural networks with uncertainty measures and how to improve the algorithms based on uncertainty analysis. The inference time of the framework on a portable embedded system was less than 80 ms per frame. The results may lead to novel context recognition strategies for reliable decision-making, efficient sensor fusion, and improved intelligent system design in various applications.
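
The paper builds on Bayesian neural networks; as a rough illustration of how a vision model can report its own uncertainty, here is a Monte Carlo dropout sketch (a common approximation; the architecture, class set, and numbers are assumptions, not taken from the paper).

```python
# Illustrative sketch of Monte Carlo dropout for uncertainty-aware terrain prediction.
# The architecture, class set, and sample count are assumptions for illustration only.
import torch
import torch.nn as nn

class TerrainNet(nn.Module):
    def __init__(self, n_classes=6):                 # e.g., tile, grass, stairs up, stairs down, ...
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(), nn.Dropout2d(0.2),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(), nn.Dropout2d(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, frame, n_samples=20):
    model.train()                                     # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(frame), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)                          # averaged class probabilities
    spread = probs.std(dim=0)                         # spread across samples as a model-uncertainty cue
    return mean, spread

mean, spread = predict_with_uncertainty(TerrainNet(), torch.rand(1, 3, 128, 128))
print(mean, spread)
```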

Boxuan Zhong, the paper’s lead author, is a recent Ph.D. graduate from NC State. Co-author Edgar Lobaton notes that the framework also transferred across users:

“We found that the model can be appropriately transferred so the system can operate with subjects from different populations,” Lobaton says. “That means that the AI worked well even though it was trained by one group of people and used by somebody different.”

However, the new framework has not yet been tested in a robotic device.

“We are excited to incorporate the framework into the control system for working robotic prosthetics — that’s the next step. And we’re also planning to work on ways to make the system more efficient, in terms of requiring less visual data input and less data processing.”

Next generation of soft robots inspired by a children’s toy

Buckling, the sudden loss of structural stability, is usually the stuff of engineering nightmares. Mechanical buckling means catastrophic failure for every structural system from rockets to soufflés. It’s what caused the Deepwater Horizon oil spill in 2010, among numerous other disasters.

But, as anyone who has ever played with a toy popper knows, buckling also releases a lot of energy. When the structure of a popper buckles, the energy released by the instability sends the toy flying through the air. Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Harvard’s Wyss Institute for Biologically Inspired Engineering have harnessed that energy and used buckling to their advantage to build a fast-moving, inflatable soft actuator.

The research is published in Science Robotics.

“Soft robots have enormous potential for a wide spectrum of applications, ranging from minimally invasive surgical tools and exoskeletons to warehouse grippers and video game add-ons,” said Benjamin Gorissen, a postdoctoral fellow at SEAS and co-first author of the paper. “But applications for today’s soft actuators are limited by their speed.”

Fluidic soft actuators tend to be slow to power up and move because they need a lot of fluid to work and the flow, whether gas or liquid, is restricted by tubes and valves in the device.

“In this work, we showed that we can harness elastic instabilities to overcome this restriction, enabling us to decouple the slow input from the output and make a fast-jumping fluidic soft actuator,” said David Melancon, a graduate student at SEAS and co-first author of the paper.

“This actuator is a building block that could be integrated into a fully soft robotic system to give soft robots that can already crawl, walk and swim the ability to jump,” said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the study. “By incorporating our jumper into these designs, these robots could navigate safely through uncharted landscapes.”

Bertoldi is also an Associate Faculty member of the Wyss Institute.

The researchers relied on the same type of buckling that propels toy poppers, known as shell buckling. The team designed the actuators with two spherical caps — essentially two poppers — nestled together like Russian nesting dolls and connected at the base. Upon inflation, pressure builds up between the two caps. The thinner outer cap expands up while the thicker inner cap buckles and collapses, hitting the ground and catapulting the device into the air.

While the device seems simple, understanding the fundamental physics at play was paramount to controlling and optimizing the robot’s performance. Most previous research into shell buckling studied how to avoid it but Gorissen, Melancon and the rest of the team wanted to increase the instability.

As fate would have it, one of the pioneers of shell buckling research sits just two floors down from Bertoldi’s team in Pierce Hall. Professor Emeritus John W. Hutchinson, who joined the Harvard faculty in 1964, developed some of the first theories to characterize and quantify the buckling of shell structures.

“Our research shines a different perspective on some of [Hutchinson’s] theories and that enables us to apply them to a different research domain,” said Gorissen.

“It was nice to be able to get feedback from one of the pioneers in the field,” said Melancon. “He developed the theory to prevent failure and now we’re using it to trigger buckling.”

Using established theories as well as more recent numerical simulation tools, the researchers were able to characterize and tune the pressure-volume relationship between the two shells to develop a soft robot capable of quickly releasing a specific amount of energy over and over again. The approach can be applied to any shape and any size, from small medical devices that puncture a vein to large exploratory robots that traverse uneven terrain.
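
To give a sense of what tuning that relationship involves, the classical thin-shell result below estimates the pressure at which an ideal spherical cap of thickness t and radius R, Young’s modulus E, and Poisson’s ratio ν buckles (a textbook estimate, not the paper’s full analysis):

```latex
p_{\mathrm{cr}} = \frac{2E}{\sqrt{3\,(1-\nu^{2})}} \left(\frac{t}{R}\right)^{2}
```

Real caps snap at a fraction of this ideal value because of geometric imperfections, the sensitivity that Hutchinson’s work quantified, so cap thickness, radius, and material set both when the inner cap buckles and how much energy the jump releases.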

The research was co-authored by Nikolaos Vasios and Mehdi Torbati. It was supported in part by the National Science Foundation through grants DMR-1420570 and DMR-1922321.

Videos

The next Mars rover launches next month (!), and here’s a look at some of the instruments on board:

Embodied Lead Engineer, Peter Teel, describes why we chose to build Moxie’s computing system from scratch and what makes it so unique:

How many eggs does a robot have to crack to make an omelette? A team of researchers at the University of Cambridge have tried to find out by training a robotic arm to make and plate the breakfast dish using machine learning. From cracking the eggs and adding seasoning to whisking and pouring, the engineers were able to teach the robot how to create a fluffy plain omelette.

‘An omelette is one of those dishes that is easy to make, but difficult to make well,’ said Dr Fumiya Iida from Cambridge’s Department of Engineering, who led the research. ‘We thought it would be an ideal test to improve the abilities of a robot chef, and optimise for taste, texture, smell and appearance.’

Get a closer look at the Virtual competition of the Urban Circuit and how teams can use the simulated environments to better prepare for the physical courses of the Subterranean Challenge.

This video shows an impressive demo of YuMi’s precision, using servo gripper fingers and a vacuum suction tool to pick up extremely small parts inside a mechanical watch. The video is not a final production application; it is a demo of how such an application could be implemented.

Upcoming events

ICRA 2020 — June 01, 2020 — [Virtual Conference] ICRA 2020, the world’s biggest virtual robotics conference, kicked off last Sunday with an all-star panel on a critical topic: “COVID-19: How Can Roboticists Help?” Watch other ICRA keynotes on IEEE.tv.

RSS 2020 — July 12–16, 2020 — [Virtual Conference]
CLAWAR 2020 — August 24–26, 2020 — Moscow, Russia
ICUAS 2020 — September 1–4, 2020 — Athens, Greece
ICRES 2020 — September 28–29, 2020 — Taipei, Taiwan
ICSR 2020 — November 14–16, 2020 — Golden, Colorado

MISC

Robot dog hounds Thai shoppers to keep hands virus-free: A scurrying robot dog named K9 dispenses hand sanitizer to curious children and wary shoppers — one of the more unexpected measures Thai malls are taking as the kingdom relaxes virus restrictions.

The hi-tech hound is controlled using 5G, a technology promising super-fast internet speeds with immediate reaction times that is still in the initial stages of roll out in Thailand.

Mimicking an excited puppy, K9 roams around the popular Central World mall in downtown Bangkok, drawing the attention of delighted children eager to get gel from a bottle attached to its back.

Subscribe to detailed companies’ updates by Paradigm!

Medium. Twitter. Telegram. Reddit.

Main sources

Research articles

Science Robotics

Science Daily
