RT/ Robot chef learns to ‘taste as you go’

Paradigm · May 17, 2022 · 30 min read

Robotics biweekly vol. 50, 3rd May – 17th May

TL;DR

  • A robot ‘chef’ has been trained to taste food at different stages of the chewing process to assess whether it’s sufficiently seasoned.
  • Researchers have developed a single-material, single-stimuli microstructure that can outmaneuver even living cilia. These programmable, micron-scale structures could be used for a range of applications, including soft robotics, biocompatible medical devices, and even dynamic information encryption.
  • Ever smaller and more intricate: without miniaturization, we wouldn’t have the components required today for high-performance laptops, compact smartphones or high-resolution endoscopes. Research is now being carried out at the nanoscale on switches, rotors and motors consisting of only a few atoms, with the aim of building what are known as molecular machines.
  • Researchers have shown it is possible to perform artificial intelligence using tiny nanomagnets that interact like neurons in the brain.
  • Robot-assisted surgery used to perform bladder cancer removal and reconstruction enables patients to recover far more quickly and spend significantly (20 per cent) less time in hospital, concludes a new clinical trial.
  • Fruit flies synchronize the movements of their heads and bodies to stabilize their vision and fly effectively, according to researchers who utilized virtual-reality flight simulators. The finding appears to hold true in primates and other animals, the researchers say, indicating that animals evolved to move their eyes and bodies independently to conserve energy and improve performance. This understanding could inform the design of advanced mobile robots.
  • Researchers have used a widespread species of blue-green algae to power a microprocessor continuously for a year — and counting — using nothing but ambient light and water. Their system has potential as a reliable and renewable way to power small devices.
  • Scientists have studied how the screen habits of US children correlate with how their cognitive abilities develop over time. They found that children who spent an above-average amount of time playing video games increased their intelligence more than average, while TV watching or social media had neither a positive nor a negative effect.
  • Researchers at the City University of Hong Kong have developed a tiny drone inspired by the maple seed pod. In their paper, they describe how they used the maple seed pod design to increase flight time in sub-100-gram drones.
  • Researchers at the Singapore University of Technology and Design (SUTD) have designed a new reconfigurable robot that could assist humans with cleaning and maintenance. This system is based on a heuristic approach that could ultimately also be used to create other reconfigurable robots.
  • Check out upcoming robotics events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
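As a quick sanity check, the growth figure above can be reproduced with simple compound-growth arithmetic. The sketch below derives the implied 2018 base from the quoted ~26 percent CAGR and the roughly 210 billion dollar 2025 figure; this is back-of-the-envelope arithmetic, not a Statista data point.

```python
# Back-of-the-envelope check of the market projection quoted above:
# a market reaching ~210 billion USD by 2025 at a ~26% CAGR implies
# the 2018 base computed below (pure arithmetic, not a Statista figure).

cagr = 0.26          # compound annual growth rate quoted above
target_2025 = 210.0  # billion USD, reached "by 2025"
years = 2025 - 2018  # span covered by the chart

implied_2018_base = target_2025 / (1 + cagr) ** years
print(f"Implied 2018 market size: ~{implied_2018_base:.0f} billion USD")

# Year-by-year projection from that implied base
for year in range(2018, 2026):
    value = implied_2018_base * (1 + cagr) ** (year - 2018)
    print(year, f"~{value:.0f} billion USD")
```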

Latest News & Research

Mastication-Enhanced Taste-Based Classification of Multi-Ingredient Dishes for Robotic Cooking

by Grzegorz Sochacki, Arsen Abdulali, Fumiya Iida in Frontiers in Robotics and AI

A robot ‘chef’ has been trained to taste food at different stages of the chewing process to assess whether it’s sufficiently seasoned.

Working in collaboration with domestic appliances manufacturer Beko, researchers from the University of Cambridge trained their robot chef to assess the saltiness of a dish at different stages of the chewing process, imitating a similar process in humans. Their results could be useful in the development of automated or semi-automated food preparation by helping robots to learn what tastes good and what doesn’t, making them better cooks.

Experiment overview. Nine dishes are prepared for robotic tasting. Each dish is tasted by the robot before and after mixing. A set of taste metrics is then extracted from each tasting and used to train and test an SVM classifier.

When we chew our food, we notice a change in texture and taste. For example, biting into a fresh tomato at the height of summer will release juices, and as we chew, releasing both saliva and digestive enzymes, our perception of the tomato’s flavour will change.

The robot chef, which has already been trained to make omelettes based on human tasters’ feedback, tasted nine different variations of a simple dish of scrambled eggs and tomatoes at three different stages of the chewing process, and produced ‘taste maps’ of the different dishes.

The researchers found that this ‘taste as you go’ approach significantly improved the robot’s ability to quickly and accurately assess the saltiness of the dish over other electronic tasting technologies, which only test a single homogenised sample.

Experimental setup. A UR5 robot arm is fitted with a conductance sensor for saltiness tasting. An induction hob is used for cooking. Food is presented for tasting on a ceramic plate. The whole setup is controlled by a program running on a laptop.

The perception of taste is a complex process in humans that has evolved over millions of years: the appearance, smell, texture and temperature of food all affect how we perceive taste; the saliva produced during chewing helps carry chemical compounds in food to taste receptors mostly on the tongue; and the signals from taste receptors are passed to the brain. Once our brains are aware of the flavour, we decide whether we enjoy the food or not.

Taste is also highly individual: some people love spicy food, while others have a sweet tooth. A good cook, whether amateur or professional, relies on their sense of taste, and can balance the various flavours within a dish to make a well-rounded final product.

“Most home cooks will be familiar with the concept of tasting as you go — checking a dish throughout the cooking process to check whether the balance of flavours is right,” said Grzegorz Sochacki from Cambridge’s Department of Engineering, the paper’s first author. “If robots are to be used for certain aspects of food preparation, it’s important that they are able to ‘taste’ what they’re cooking.”

“When we taste, the process of chewing also provides continuous feedback to our brains,” said co-author Dr Arsen Abdulali, also from the Department of Engineering. “Current methods of electronic testing only take a single snapshot from a homogenised sample, so we wanted to replicate a more realistic process of chewing and tasting in a robotic system, which should result in a tastier end product.”

The researchers are members of Cambridge’s Bio-Inspired Robotics Laboratory run by Professor Fumiya Iida of the Department of Engineering, which focuses on training robots to carry out the so-called last metre problems which humans find easy, but robots find difficult. Cooking is one of these tasks: earlier tests with their robot ‘chef’ have produced a passable omelette using feedback from human tasters.

“We needed something cheap, small and fast to add to our robot so it could do the tasting: it needed to be cheap enough to use in a kitchen, small enough for a robot, and fast enough to use while cooking,” said Sochacki.

Figure showing the taste mapping of the same tomato scramble after mixing it to three different stages, with unmixed and “visually homogeneous” being the extreme cases.

To imitate the human process of chewing and tasting in their robot chef, the researchers attached a conductance probe, which acts as a salinity sensor, to a robot arm. They prepared scrambled eggs and tomatoes, varying the number of tomatoes and the amount of salt in each dish. Using the probe, the robot ‘tasted’ the dishes in a grid-like fashion, returning a reading in just a few seconds.

To imitate the change in texture caused by chewing, the team then put the egg mixture in a blender and had the robot test the dish again. The different readings at different points of ‘chewing’ produced taste maps of each dish. Their results showed a significant improvement in the ability of robots to assess saltiness over other electronic tasting methods, which are often time-consuming and only provide a single reading.
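For readers curious about the classification step, the sketch below is a minimal, hypothetical illustration of how a grid of conductance readings could be reduced to a few taste metrics and fed to an SVM, in the spirit of the pipeline described above. It uses scikit-learn and entirely synthetic data; the feature choices, dish labels and numbers are invented and do not come from the paper.

```python
# Hypothetical sketch: summarise grids of saltiness readings into taste
# metrics and classify the dish with an SVM (synthetic data throughout).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def taste_metrics(grid_readings: np.ndarray) -> np.ndarray:
    """Reduce a grid of conductance (saltiness) readings to a few simple
    statistics, standing in for the paper's taste metrics."""
    return np.array([
        grid_readings.mean(),
        grid_readings.std(),
        grid_readings.max() - grid_readings.min(),
    ])

rng = np.random.default_rng(0)
# Synthetic data: 9 dishes x 20 tastings, each a 5x5 grid of readings
# whose mean level depends on the (made-up) salt content of the dish.
salt_levels = np.repeat(np.linspace(0.5, 2.0, 9), 20)
X = np.array([taste_metrics(rng.normal(loc=s, scale=0.3, size=(5, 5)))
              for s in salt_levels])
y = np.repeat(np.arange(9), 20)          # dish label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```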

While their technique is a proof of concept, the researchers say that by imitating the human processes of chewing and tasting, robots will eventually be able to produce food that humans will enjoy and that could be tweaked according to individual tastes.

“When a robot is learning how to cook, like any other cook, it needs indications of how well it did,” said Abdulali. “We want the robots to understand the concept of taste, which will make them better cooks. In our experiment, the robot can ‘see’ the difference in the food as it’s chewed, which improves its ability to taste.”

“Beko has a vision to bring robots to the home environment which are safe and easy to use,” said Dr Muhammad W. Chughtai, Senior Scientist at Beko plc. “We believe that the development of robotic chefs will play a major role in busy households and assisted living homes in the future. This result is a leap forward in robotic cooking, and by using machine and deep learning algorithms, mastication will help robot chefs adjust taste for different dishes and users.”

In future, the researchers are looking to improve the robot chef so it can taste different types of food, and to broaden its sensing capabilities so that it can detect sweet or oily food, for example.

Self-regulated non-reciprocal motions in single-material microstructures

by Shucong Li, Michael M. Lerch, James T. Waters, Bolei Deng, Reese S. Martens, Yuxing Yao, Do Yoon Kim, Katia Bertoldi, Alison Grinthal, Anna C. Balazs, Joanna Aizenberg in Nature

For years, scientists have been attempting to engineer tiny, artificial cilia for miniature robotic systems that can perform complex motions, including bending, twisting, and reversing. Building these smaller-than-a-human-hair microstructures typically requires multi-step fabrication processes and varying stimuli to create the complex movements, limiting their wide-scale applications.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a single-material, single-stimuli microstructure that can outmaneuver even living cilia. These programmable, micron-scale structures could be used for a range of applications, including soft robotics, biocompatible medical devices, and even dynamic information encryption.

“Innovations in adaptive self-regulated materials that are capable of a diverse set of programmed motions represent a very active field, which is being tackled by interdisciplinary teams of scientists and engineers,” said Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science and Professor of Chemistry & Chemical Biology at SEAS and senior author of the paper. “Advances achieved in this field may significantly impact the ways we design materials and devices for a variety of applications, including robotics, medicine and information technologies.”

Unlike previous research, which relied mostly on complex multi-component materials to achieve programmable movement of reconfigurable structural elements, Aizenberg and her team designed a microstructure pillar made of a single material — a photoresponsive liquid crystal elastomer. Because of the way the fundamental building blocks of the liquid crystal elastomer are aligned, when light hits the microstructure, those building blocks realign and the structure changes shape.

As this shape change occurs, two things happen. First, the spot where the light hits becomes transparent, allowing the light to penetrate further into the material, causing additional deformations. Second, as the material deforms and the shape moves, a new spot on the pillar is exposed to light, causing that area to also change shape. This feedback loop propels the microstructure into a stroke-like cycle of motion.

“This internal and external feedback loop gives us a self-regulating material. Once you turn the light on, it does all its own work,” said Shucong Li, a graduate student in the Department of Chemistry and Chemical Biology at Harvard and co-first author of the paper.

When the light turns off, the material snaps back to its original shape. The material’s specific twists and motions change with its shape, making these simple structures endlessly reconfigurable and tunable. Using a model and experiments, the researchers demonstrated the movements of round, square, L- and T-shaped, and palm-tree-shaped structures and laid out all the other ways the material can be tuned.

“We showed that we can program the choreography of this dynamic dance by tailoring a range of parameters, including illumination angle, light intensity, molecular alignment, microstructure geometry, temperature, and irradiation intervals and duration,” said Michael M. Lerch, a postdoctoral fellow in the Aizenberg Lab and co-first author of the paper.

To add another layer of complexity and functionality, the research team also demonstrated how these pillars interact with each other as part of an array.

“When these pillars are grouped together, they interact in very complex ways because each deforming pillar casts a shadow on its neighbor, which changes throughout the deformation process,” said Li. “Programming how these shadow-mediated self-exposures change and interact dynamically with each other could be useful for such applications as dynamic information encryption.”

“The vast design space for individual and collective motions is potentially transformative for soft robotics, micro-walkers, sensors, and robust information encryption systems,” said Aizenberg.

Photogearing as a concept for translation of precise motions at the nanoscale

by Aaron Gerwien, Frederik Gnannt, Peter Mayer, Henry Dube in Nature Chemistry

Ever smaller and more intricate: without miniaturization, we wouldn’t have the components required today for high-performance laptops, compact smartphones or high-resolution endoscopes. Research is now being carried out at the nanoscale on switches, rotors and motors consisting of only a few atoms, with the aim of building what are known as molecular machines. A research team at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) has successfully built the world’s smallest energy-powered gear wheel together with its corresponding counterpart. The nano gear unit is the first that can also be actively controlled and driven.

Miniaturization plays a key role in the further development of modern technologies and makes it possible to manufacture smaller devices that have more power. It also plays a significant role in manufacturing, since it allows materials and functional materials or medication to be produced at previously unprecedented levels of precision. Now, research has entered the nanoscale — which is invisible to the naked eye — focusing on individual atoms and molecules. The significance of this new field of research is demonstrated by the Nobel Prize for Chemistry, which was awarded for research into molecular machines in 2016.

Some important components used in molecular machines, such as switches, rotors, forceps, robot arms and even motors, already exist at the nanoscale. A further essential component for any machine is the gear wheel, which allows changes in direction and speed and enables movements to be coupled to each other. Molecular counterparts for gear wheels also exist; up to now, however, they have only moved passively back and forth, which is of little use for a molecular machine.

The molecular gear wheel developed by the research team led by Prof. Dr. Henry Dube, Chair of Organic Chemistry I at FAU and previously head of a junior research group at LMU in Munich, measures only 1.6 nm, around one 50,000th of the thickness of a human hair, making it the smallest of its kind. But that’s not all: the research team has also succeeded in actively powering a molecular gear wheel and its counterpart, and has thus solved a fundamental problem in the construction of machines at the nanoscale.

The gear unit comprises two components that are interlocked with each other and are made up of only 71 atoms. One component is a triptycene molecule whose structure is similar to a propeller or bucket wheel (shown in light gray in the animation). The second component is a flat fragment of a thioindigo molecule, similar to a small plate (shown in gold in the animation). If the plate rotates 180 degrees, the propeller rotates by only 120 degrees. The result is a 2:3 transmission ratio.
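For reference, the quoted 2:3 ratio follows directly from the angles given above:

```latex
\[
\frac{\Delta\theta_{\text{propeller}}}{\Delta\theta_{\text{plate}}}
  = \frac{120^{\circ}}{180^{\circ}} = \frac{2}{3}
\]
```

In other words, three 180-degree steps of the plate (one and a half turns) drive the propeller through one full revolution.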

The nano gear unit is controlled by light, making it a molecular photogear. As they are directly driven by light energy, the plate and the triptycene propeller move in locked, synchronous rotation. Heat alone was not sufficient to make the gear unit rotate, as the FAU team discovered: when the researchers heated the solution around the gear unit in the dark, the propeller turned but the plate did not, so the gear “slipped.” The researchers thus concluded that the nano gear unit can be activated and controlled using a light source.

Reconfigurable training and reservoir computing in an artificial spin-vortex ice via spin-wave fingerprinting

by Jack C. Gartside, Kilian D. Stenning, Alex Vanstone, Holly H. Holder, Daan M. Arroo, Troy Dion, Francesco Caravelli, Hidekazu Kurebayashi, Will R. Branford in Nature Nanotechnology

Researchers have shown it is possible to perform artificial intelligence using tiny nanomagnets that interact like neurons in the brain.

The new method, developed by a team led by Imperial College London researchers, could slash the energy cost of artificial intelligence (AI), which is currently doubling globally every 3.5 months.

In a paper, the international team have produced the first proof that networks of nanomagnets can be used to perform AI-like processing. The researchers showed nanomagnets can be used for ‘time-series prediction’ tasks, such as predicting and regulating insulin levels in diabetic patients.

Artificial intelligence that uses ‘neural networks’ aims to replicate the way parts of the brain work, where neurons talk to each other to process and retain information. A lot of the maths used to power neural networks was originally invented by physicists to describe the way magnets interact, but at the time it was too difficult to use magnets directly as researchers didn’t know how to put data in and get information out.

Instead, software run on traditional silicon-based computers was used to simulate the magnet interactions, in turn simulating the brain. Now, the team have been able to use the magnets themselves to process and store data — cutting out the middleman of the software simulation and potentially offering enormous energy savings.

A map of the states of nanomagnets in one experiment.

Nanomagnets can come in various ‘states’, depending on their direction. Applying a magnetic field to a network of nanomagnets changes the state of the magnets based on the properties of the input field, but also on the states of surrounding magnets. The team, led by Imperial Department of Physics researchers, were then able to design a technique to count the number of magnets in each state once the field has passed through, giving the ‘answer’.
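Conceptually this is reservoir computing: the input field perturbs the magnet array, the array's internal interactions do the heavy lifting, and only a simple readout on the measured states needs to be trained. The sketch below is a software analogue (an echo-state-style reservoir with a linear ridge-regression readout) on synthetic data; the random matrices merely stand in for the nanomagnet physics, and nothing here reproduces the actual device.

```python
# Software analogue of a physical reservoir computer: a fixed random
# recurrent network plays the role of the nanomagnet array, and only a
# linear readout is trained to predict the input one step ahead.
import numpy as np

rng = np.random.default_rng(1)
n_reservoir, n_steps = 200, 1000

# Input time series (the kind of signal used for time-series prediction)
u = np.sin(np.linspace(0, 40 * np.pi, n_steps)) + 0.1 * rng.normal(size=n_steps)

# Fixed random "reservoir": its state stands in for the magnet
# configuration after each applied field pulse.
W_in = rng.normal(scale=0.5, size=n_reservoir)
W = rng.normal(scale=0.9 / np.sqrt(n_reservoir), size=(n_reservoir, n_reservoir))
x = np.zeros(n_reservoir)
states = np.zeros((n_steps, n_reservoir))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])      # nonlinear state update
    states[t] = x

# Train only a linear readout (ridge regression) on the recorded states,
# mirroring how a simple readout is fitted to the measured magnet states.
X, y = states[:-1], u[1:]
ridge = 1e-4
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
pred = X @ W_out
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```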

Co-first author of the study Dr Jack Gartside said: “We’ve been trying to crack the problem of how to input data, ask a question, and get an answer out of magnetic computing for a long time. Now we’ve proven it can be done, it paves the way for getting rid of the computer software that does the energy-intensive simulation.”

Co-first author Kilian Stenning added: “How the magnets interact gives us all the information we need; the laws of physics themselves become the computer.”

Team leader Dr Will Branford said: “It has been a long-term goal to realise computer hardware inspired by the software algorithms of Sherrington and Kirkpatrick. It was not possible using the spins on atoms in conventional magnets, but by scaling up the spins into nanopatterned arrays we have been able to achieve the necessary control and readout.”

AI is now used in a range of contexts, from voice recognition to self-driving cars. But training AI to do even relatively simple tasks can take huge amounts of energy. For example, training AI to solve a Rubik’s cube took the energy equivalent of two nuclear power stations running for an hour.

Much of the energy used to achieve this in conventional, silicon-chip computers is wasted in inefficient transport of electrons during processing and memory storage. Nanomagnets however don’t rely on the physical transport of particles like electrons, but instead process and transfer information in the form of a ‘magnon’ wave, where each magnet affects the state of neighbouring magnets. This means much less energy is lost, and that the processing and storage of information can be done together, rather than being separate processes as in conventional computers. This innovation could make nanomagnetic computing up to 100,000 times more efficient than conventional computing.

The team will next teach the system using real-world data, such as ECG signals, and hope to make it into a real computing device. Eventually, magnetic systems could be integrated into conventional computers to improve energy efficiency for intense processing tasks. Their energy efficiency also means they could feasibly be powered by renewable energy, and used to do ‘AI at the edge’ — processing the data where it is being collected, such as weather stations in Antarctica, rather than sending it back to large data centres. It also means they could be used on wearable devices to process biometric data on the body, such as predicting and regulating insulin levels for diabetic people or detecting abnormal heartbeats.

Complementary feedback control enables effective gaze stabilization in animals

by Benjamin Cellini, Wael Salem, Jean-Michel Mongeau in Proceedings of the National Academy of Sciences

Fruit flies synchronize the movements of their heads and bodies to stabilize their vision and fly effectively, according to Penn State researchers who utilized virtual-reality flight simulators. The finding appears to hold true in primates and other animals, the researchers say, indicating that animals evolved to move their eyes and bodies independently to conserve energy and improve performance. This understanding could inform the design of advanced mobile robots, according to principal investigator Jean-Michel Mongeau, assistant professor of mechanical engineering.

“We discovered that when controlling gaze, fruit flies minimize energy expenditure and increase flight performance,” Mongeau said. “And, using that coordination information, we developed a mathematical model that accurately predicts similar synchronization in [other] visually active animals.”

Researchers used high-speed cameras to record a fruit fly surrounded by LED video screens upon which the researchers projected footage of what a fly would see while in flight, creating an immersive virtual-reality experience and causing the fly to move as if freely flying.

“When a fly moves, it coordinates its head, wings and body to fly through the air, evade predators or look for food,” Mongeau said. “We were interested in studying how flies coordinate these movements, and we did so by simulating flight in virtual reality.”

Responding to both slow and fast visual motion in the virtual-reality flight simulator, the fly moved its head and body at different rates. The researchers took measurements and tracked the fly’s head movements to determine the direction of its gaze, since its eyes are fixed to its head and cannot move independently.

“We found that the fly’s head and body movements were complementary, in that the body moved most during slower visual motion, while the head moved most during faster motion,” Mongeau said. “The body and head working together helped stabilize the flight motion from very slow to very fast.”

Testing the concepts further, researchers immobilized the fly’s head and put it through the same visual stimuli. They found the fly could not respond to fast visual motion — demonstrating the advantage of complementary body and head movements.

“We found that the head and body working together is advantageous from an energy standpoint,” Mongeau said. “Since the head is smaller, it has less resistance to motion, or inertia, which means it can respond to quick movements, while the much larger body responds best to slower movement. Tuning these two components saves energy and increases performance not just for the fly, but also for other animals.”
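The division of labour Mongeau describes can be pictured as a complementary filter: the high-inertia body tracks the low-frequency part of the visual motion, and the low-inertia head takes up the fast remainder, so together they cover the whole range. The toy sketch below illustrates that split with a first-order low-pass filter; the time constant and test signals are assumed for illustration and are not taken from the paper.

```python
# Toy complementary split: a sluggish "body" (first-order low-pass) plus a
# quick "head" (the remainder) together reconstruct the visual motion.
import numpy as np

def lowpass(signal, tau, dt):
    """First-order low-pass filter, standing in for the high-inertia body."""
    alpha = dt / (tau + dt)
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

dt = 0.001
t = np.arange(0, 5.0, dt)
tau = 0.1   # assumed body time constant separating "slow" from "fast"

for name, freq in (("slow (0.5 Hz)", 0.5), ("fast (8 Hz)", 8.0)):
    stimulus = np.sin(2 * np.pi * freq * t)
    body = lowpass(stimulus, tau, dt)    # body response
    head = stimulus - body               # head takes up the remainder,
                                         # so body + head equals the stimulus
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    print(f"{name}: body amplitude ~{rms(body) / rms(stimulus):.0%} of input, "
          f"head amplitude ~{rms(head) / rms(stimulus):.0%}")
```

With these assumed numbers the body carries most of the slow motion and the head most of the fast motion, while their sum always matches the stimulus, which is the complementary structure described above.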

Using control theory, a branch of engineering that deals with designing feedback systems like autopilots, the researchers compared the findings of the fly’s movements to other animals, including a classic study of primate movements.

“Using the same model, we looked at eye, head and body inertia ratios elsewhere in the animal kingdom, including in other insects, rats and birds,” Mongeau said. “The way the flies move their head and body is very similar to the way primates move their heads and eyes, which is remarkable since they diverged hundreds of millions of years ago.”

Just as a head is lighter than a body, eyes are lighter than a head and take less energy to move. According to Mongeau, independently moving eyes and heads marked the transition from water to land in the fossil record of vertebrates.

“As vertebrate animals transitioned from water to land more than 350 million years ago, the development of mechanisms to control head and eye movements could have had substantial evolutionary benefits,” Mongeau said. “We discovered that there is a sweet spot in eye-head-body ratios, suggesting that inertia may have been an important constraint in the evolution of vision.”

The researchers’ findings could be used to improve energy efficiency and performance in robotics, according to Benjamin Cellini, a mechanical engineering doctoral candidate and first author on the paper.

“In robotics, sensors are typically fixed in location,” Cellini said. “But in the animal kingdom, sensing and movement are coupled, as many physical sensors, like eyes, move. Inspired by biology, we can design more energy-efficient robots by making vision-based sensors mobile.”

Effect of Robot-Assisted Radical Cystectomy With Intracorporeal Urinary Diversion vs Open Radical Cystectomy on 90-Day Morbidity and Mortality Among Patients With Bladder Cancer

by James W. F. Catto, Pramit Khetrapal, Federico Ricciardi, et al. in JAMA

Robot-assisted surgery used to perform bladder cancer removal and reconstruction enables patients to recover far more quickly and spend significantly (20 per cent) less time in hospital, concludes a first-of-its-kind clinical trial led by scientists at UCL and the University of Sheffield.

The study found that robotic surgery reduced the chance of readmission by half (52 per cent) and revealed a “striking” four-fold (77 per cent) reduction in the prevalence of blood clots (deep vein thrombosis and pulmonary embolism), a significant cause of health decline and morbidity, compared with patients who had open surgery. Patients’ physical activity (assessed by daily steps tracked on a wearable smart sensor), stamina and quality of life also increased.

Unlike open surgery, in which a surgeon works directly on a patient through large incisions in the skin and muscle, robot-assisted surgery allows surgeons to guide minimally invasive instruments remotely from a console, aided by a 3D view. It is currently only available in a small number of UK hospitals.

Distribution of Days Alive and Out of the Hospital Within 90 Days of Surgery According to Group.

Researchers say the findings provide the strongest evidence so far of the patient benefit of robot-assisted surgery, and are now urging the National Institute for Health and Care Excellence (NICE) to make it available as a clinical option across the UK for all major abdominal surgeries, including colorectal, gastro-intestinal and gynaecological procedures.

Co-Chief Investigator Professor John Kelly, Professor of Uro-Oncology at UCL’s Division of Surgery & Interventional Science and consultant surgeon at University College London Hospitals, said: “Despite robot-assisted surgery becoming more widely available, there has been no significant clinical evaluation of its overall benefit to patients’ recovery.

“In this study we wanted to establish if robot-assisted surgery, when compared to open surgery, reduced time spent in hospital, reduced readmissions, and led to better levels of fitness and quality of life; on all counts this was shown.

“An unexpected finding was the striking reduction in blood clots in patients receiving robotic surgery; this indicates safe surgery, with patients benefiting from far fewer complications, early mobilisation and a quicker return to normal life.”

Co-Chief Investigator Professor James Catto, Professor of Urological Surgery at the Department of Oncology and Metabolism, University of Sheffield, said: “This is an important finding. Time in hospital is reduced and recovery is faster when using this advanced surgery.

“Ultimately, this will reduce bed pressures on the NHS and allow patients to return home more quickly. We see fewer complications from the improved mobility and less time spent in bed.

“The study also points to future trends in healthcare. Soon, we may be able to monitor recovery after discharge, to find those developing problems. It is possible that tracking walking levels would highlight those who need a district nurse visit or perhaps a check-up sooner in the hospital.”

“Previous trials of robotic surgery have focused on longer term outcomes. They have shown similar cancer cure rates and similar levels of long term recovery after surgery. None have looked at differences in the immediate days and weeks after surgery.”

Secondary Outcomes After Radical Cystectomy.

Open surgery remains the NICE “gold standard” recommendation for highly complex surgeries, though the research team hope this could change.

Professor Kelly added: “In light of the positive findings, the perception of open surgery as the gold standard for major surgeries is now being challenged for the first time.

“We hope that all eligible patients needing major abdominal operations can now be offered the option of having robotic surgery.”

Rebecca Porta, CEO of The Urology Foundation said: “The Urology Foundation’s mission is simple — to save lives and reduce the suffering caused by urological cancers and diseases. We do this through investing in cutting-edge research, leading education and supporting training of health care professionals to ensure that fewer lives will be devastated.

“We are proud to have been at the heart of the step change in the treatment and care for urology patients since our inception 27 years ago, and the outcomes of this trial will improve bladder cancer patients’ treatment and care.”

Bladder cancer is where a growth of abnormal tissue, known as a tumour, develops in the bladder lining. In some cases, the tumour spreads into the bladder muscle and can lead to secondary cancer in other parts of the body. About 10,000 people are diagnosed with bladder cancer in the UK every year and over 3,000 bladder removals and reconstructions are performed. It is one of the most expensive cancers to manage.

Across nine UK hospitals, 338 patients with non-metastatic bladder cancer were randomised into two groups: 169 patients had robot-assisted radical cystectomy (bladder removal) with intracorporeal reconstruction (a new bladder is made from a section of bowel), and 169 patients had open radical cystectomy. The trial’s primary endpoint was length of hospital stay after surgery. On average, the robot-assisted group stayed eight days in hospital, compared with 10 days for the open-surgery group, a 20% reduction. Readmission to hospital within 90 days of surgery was also significantly reduced: 21% for the robot-assisted group versus 32% for open surgery.

A further 20 secondary outcomes were assessed at 90 days and at six and 12 months post-surgery. These included blood clot prevalence, wound complications, quality of life, disability, stamina, activity levels and survival. All secondary outcomes were either improved by robot-assisted surgery or almost equal to open surgery. This study, and previous studies, show that robot-assisted and open surgery are equally effective with regard to cancer recurrence and length of survival. The research team is conducting a health economic analysis to establish the quality-adjusted life years (QALYs) gained, which capture the impact on both the quantity and quality of life.

Powering a microprocessor by photosynthesis

by P. Bombelli, A. Savanth, A. Scarampi, S. J. L. Rowden, D. H. Green, A. Erbe, E. Årstøl, I. Jevremovic, M. F. Hohmann-Marriott, S. P. Trasatti, E. Ozer, C. J. Howe in Energy & Environmental Science

Researchers have used a widespread species of blue-green algae to power a microprocessor continuously for a year — and counting — using nothing but ambient light and water. Their system has potential as a reliable and renewable way to power small devices.

The system, comparable in size to an AA battery, contains a type of non-toxic algae called Synechocystis that naturally harvests energy from the sun through photosynthesis. The tiny electrical current this generates then interacts with an aluminium electrode and is used to power a microprocessor. The system is made of common, inexpensive and largely recyclable materials. This means it could easily be replicated hundreds of thousands of times to power large numbers of small devices as part of the Internet of Things. The researchers say it is likely to be most useful in off-grid situations or remote locations, where small amounts of power can be very beneficial.

“The growing Internet of Things needs an increasing amount of power, and we think this will have to come from systems that can generate energy, rather than simply store it like batteries,” said Professor Christopher Howe in the University of Cambridge’s Department of Biochemistry, joint senior author of the paper.

He added: “Our photosynthetic device doesn’t run down the way a battery does because it’s continually using light as the energy source.”

In the experiment, the device was used to power an Arm Cortex M0+, which is a microprocessor used widely in Internet of Things devices. It operated in a domestic environment and semi-outdoor conditions under natural light and associated temperature fluctuations, and after six months of continuous power production the results were submitted for publication.

“We were impressed by how consistently the system worked over a long period of time — we thought it might stop after a few weeks but it just kept going,” said Dr Paolo Bombelli in the University of Cambridge’s Department of Biochemistry, first author of the paper.

The algae does not need feeding, because it creates its own food as it photosynthesises. And despite the fact that photosynthesis requires light, the device can even continue producing power during periods of darkness. The researchers think this is because the algae processes some of its food when there’s no light, and this continues to generate an electrical current.

The Internet of Things is a vast and growing network of electronic devices — each using only a small amount of power — that collect and share real-time data via the internet. Using low-cost computer chips and wireless networks, many billions of devices are part of this network — from smartwatches to temperature sensors in power stations. This figure is expected to grow to one trillion devices by 2035, requiring a vast number of portable energy sources. The researchers say that powering trillions of Internet of Things devices using lithium-ion batteries would be impractical: it would need three times more lithium than is produced across the world annually. And traditional photovoltaic devices are made using hazardous materials that have adverse environmental effects.

The impact of digital media on children’s intelligence while controlling for genetic differences in cognition and socioeconomic background

by Bruno Sauce, Magnus Liebherr, Nicholas Judd, Torkel Klingberg in Scientific Reports

Researchers at Karolinska Institutet in Sweden have studied how the screen habits of US children correlate with how their cognitive abilities develop over time. They found that children who spent an above-average amount of time playing video games increased their intelligence more than average, while TV watching or social media had neither a positive nor a negative effect.

Children are spending more and more time in front of screens. How this affects their health, and whether it has a positive or negative impact on their cognitive abilities, is hotly debated. For the present study, researchers at Karolinska Institutet and Vrije Universiteit Amsterdam specifically studied the link between screen habits and intelligence over time.

Path diagram of a strict measurement invariant Latent Change Score model with the change in intelligence from ages 9–10 to 11–12.

Over 9,000 boys and girls in the USA participated in the study. At the age of nine or ten, the children performed a battery of psychological tests to gauge their general cognitive abilities (intelligence). The children and their parents were also asked how much time the children spent watching TV and videos, playing video games and engaging with social media. Just over 5,000 of the children were followed up after two years, at which point they were asked to repeat the psychological tests. This enabled the researchers to study how the children’s performance on the tests varied from one testing session to the other, and to control for individual differences in the first test. They also controlled for genetic differences that could affect intelligence and for differences related to the parents’ educational background and income.
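As a rough illustration of the kind of adjustment involved, the sketch below regresses synthetic follow-up scores on baseline scores, screen time and background covariates. It is a much-simplified stand-in for the latent change score model the authors actually fit; every variable name and effect size is invented.

```python
# Simplified stand-in for "control for baseline, genetics and background":
# ordinary least squares on synthetic data with a small gaming effect built in.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
baseline_iq = rng.normal(100, 15, n)
gaming_h = np.clip(rng.normal(1.0, 0.8, n), 0, None)   # hours/day gaming
tv_h = np.clip(rng.normal(2.5, 1.0, n), 0, None)       # hours/day TV and video
ses = rng.normal(0, 1, n)                               # parental education/income index
genetics = rng.normal(0, 1, n)                          # polygenic-score stand-in

# Synthetic follow-up scores: +2 IQ points per extra daily hour of gaming
followup_iq = (5 + 0.95 * baseline_iq + 2.0 * (gaming_h - gaming_h.mean())
               + 3.0 * ses + 3.0 * genetics + rng.normal(0, 8, n))

X = np.column_stack([np.ones(n), baseline_iq, gaming_h, tv_h, ses, genetics])
beta, *_ = np.linalg.lstsq(X, followup_iq, rcond=None)
for name, b in zip(["intercept", "baseline", "gaming", "tv", "ses", "genetics"], beta):
    print(f"{name:>9}: {b:+.2f}")   # the gaming coefficient should recover ~+2
```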

Density plot of time spent Gaming (raw values) between boys and girls at ages 9–10.

On average, the children spent 2.5 hours a day watching TV, half an hour on social media and 1 hour playing video games. The results showed that those who played more games than the average increased their intelligence between the two measurements by approximately 2.5 IQ points more than the average. No significant effect was observed, positive or negative, of TV-watching or social media.

“We didn’t examine the effects of screen behaviour on physical activity, sleep, wellbeing or school performance, so we can’t say anything about that,” says Torkel Klingberg, professor of cognitive neuroscience at the Department of Neuroscience, Karolinska Institutet. “But our results support the claim that screen time generally doesn’t impair children’s cognitive abilities, and that playing video games can actually help boost intelligence. This is consistent with several experimental studies of video-game playing.”

The results are also in line with recent research showing that intelligence is not a constant, but a quality that is influenced by environmental factors.

“We’ll now be studying the effects of other environmental factors and how the cognitive effects relate to childhood brain development,” says Torkel Klingberg.

One limitation of the study is that it only covered US children and did not differentiate between different types of video games, which makes the results difficult to transfer to children in other countries with other gaming habits. There was also a risk of reporting error since screen time and habits were self-rated.

A bioinspired revolving-wing drone with passive attitude stability and efficient hovering flight

by Songnan Bai et al. in Science Robotics

A trio of researchers at City University of Hong Kong has developed a tiny drone based on the maple seed pod. In their paper, Songnan Bai, Qingning He and Pakpong Chirarattananon describe how they used the maple seed pod as inspiration for increasing flight time in drones weighing under 100 grams.

Maple seed pods are well known for their helicopter-like design. As they fall from the tree, they spin like an unpowered helicopter rotor, slowing their descent so that the wind can carry them farther from the tree. In this new effort, the researchers sought to exploit the efficiency inherent in the structure of the maple seed pod to increase flight time for tiny drones. To that end, they built a tiny drone that spins like a maple seed pod to stay aloft. The resulting drone could fly for nearly twice as long as those with a traditional four-rotor design.

Most drones have spinning rotors to provide lift. This new design features two tiny rotors at the tips of the wings to make them spin — the lift comes courtesy of the spinning wings, which accounts for its improvements in efficiency. The researchers also added electronics and a battery at the center of the drone. The whole thing weighs less than 35 grams and spins at approximately 200 rpm. Testing showed it capable of hovering in the air for up to 24 minutes. The researchers note that due to the inherent stability of the design, no stabilizing microprocessor is needed. They also noted that they were able to realize position-controlled flight by manipulating the speed of the tiny rotors.

Image of the robot. Credit: Songnan Bai and Pakpong Chirarattananon

The researchers also tested the drone’s ability to carry a small payload, including a camera. Because the camera spins with the drone, they synced its frame rate to the drone’s spin rate, producing a somewhat shaky but usable video feed. They also demonstrated its ability to carry a 21.5-gram device for mapping and surveillance operations.
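The frame-rate synchronisation comes down to simple arithmetic: at roughly 200 rpm the airframe completes a revolution about every 0.3 seconds, so capturing frames at an integer multiple of that rotation frequency keeps each frame facing approximately the same heading. A quick sketch (the candidate frame counts are illustrative):

```python
# Rough arithmetic behind syncing the camera to the spin rate quoted above.
rpm = 200
rev_per_s = rpm / 60                  # ~3.33 revolutions per second
print(f"one revolution every {1 / rev_per_s:.2f} s")
for k in (1, 2, 3):                   # candidate frames captured per revolution
    print(f"{k} frame(s)/rev -> frame rate {k * rev_per_s:.2f} fps")
```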

Reinforcement learning control for the swimming motions of a beaver-like, single-legged robot based on biological inspiration

by Gang Chen et al. in Robotics and Autonomous Systems

When developing new technologies, computer scientists and roboticists often draw inspiration from animals and other living organisms. This allows them to artificially replicate complex behaviors and locomotion patterns to enhance their systems’ performance, efficiency and capabilities. Researchers at Zhejiang Sci-Tech University and University of Essex have recently developed a reinforcement learning technique that can be used to control the movements of a beaver-inspired, single-legged robot. Their method allows the robot to autonomously learn how to perform swimming motions that resemble those observed in beavers.

“In this study, we introduce a biologically inspired reinforcement learning control method to model the motion of underwater robots,” said Gang Chen, one of the researchers who carried out the study. “This method is mainly based on one of our previous works studying the motion of beavers.”

Underwater robots such as the one created by Chen and his colleagues are nonlinear systems, and their movements involve complex hydrodynamics. Accurately modeling their motion can thus be a very complex and challenging task that involves significant computing efforts.

Training controller structure of the robot.

In contrast with other models to guide the motion of underwater robots introduced in the past, the approach devised by Chen and his colleagues does not require the integration of complex motion models based on hydrodynamics. This is mainly because it relies on simplified joint angle representations that dynamically replicate the swimming motion of beavers. These joint representations make the model easier to train while also reducing the robot’s ineffective motions during training.
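To make the idea concrete, the toy sketch below runs a plain REINFORCE-style policy-gradient loop over two stroke parameters for a single paddling joint, with an invented reward standing in for forward swimming speed. It is only a sketch of the general technique; the paper’s controller, robot model and reward are more involved.

```python
# Toy policy-gradient (REINFORCE) loop: learn stroke amplitude and frequency
# for a single paddling joint under an invented swimming-speed reward.
import numpy as np

rng = np.random.default_rng(0)

def swim_speed(amplitude, frequency):
    """Stand-in reward: thrust grows with amplitude * frequency, but an
    overly aggressive stroke is penalised (the reward peaks at a * f = 1)."""
    effort = amplitude * frequency
    return effort - 0.5 * effort ** 2 + 0.01 * rng.normal()

mean = np.array([0.2, 0.2])          # policy mean over [amplitude, frequency]
std, lr, baseline = 0.1, 0.05, 0.0

for episode in range(3000):
    action = np.clip(mean + std * rng.normal(size=2), 0.01, None)
    reward = swim_speed(*action)
    baseline += 0.05 * (reward - baseline)           # running-average baseline
    grad_log_pi = (action - mean) / std ** 2         # Gaussian policy score
    mean += lr * (reward - baseline) * grad_log_pi   # REINFORCE update

print("learned [amplitude, frequency]:", mean)       # their product should approach 1
```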

“By combining reinforcement learning with the mechanisms underpinning the swimming behavior of beavers, our method implements the robot’s swimming control as quickly and operably as possible,” Chen explained. “Its most notable and unique advantage is that it can avoid building complex motion control models and quickly realize the swimming control of a beaver-like, single-legged robot.”

Chen and his colleagues evaluated their beaver-inspired reinforcement learning-based method in a series of experiments, using a single-legged robotic platform. Their results were very promising, with their approach resulting in effective beaver-like swimming motions that improved the robot’s locomotion.

Actual controller of the robot after training.

In the future, the method introduced by this team of researchers could be used to improve the performance and movements of other one-legged robots designed to operate in water. In addition, their work could inspire the development of similar approaches to control the movements of other underwater robots.

“In our future work, we plan to improve the structure and performance of the beaver-like swimming robot,” Chen added. “We would also like to investigate ways to improve the intelligence behind robotic swimming motions using reinforcement learning, not only focusing on the robot’s swimming velocity, but also on swimming stability, trajectory planning, and obstacle avoidance, all within a real underwater environment.”

Videos

  • ABB Robotics has collaborated with two world-renowned artists — 8-year-old Indian child prodigy Advait Kolarkar and Dubai-based digital-design collective Illusorr — to create the world’s first robot-painted art car. ABB’s award-winning PixelPaint technology has, without human intervention, perfectly recreated Advait’s swirling, monochromatic design as well as Illusorr’s tricolor geometrical patterns.
  • Working closely with users and therapists, EPFL spin-off Emovo Care has developed a light and easy-to-attach hand exoskeleton for people unable to grasp objects following a stroke or accident. The device has been successfully tested in several hospitals and rehabilitation centers.
  • GlobalFoundries, a global semiconductor manufacturer, has turned to Spot to further automate their data collection for condition monitoring and predictive maintenance. Manufacturing facilities are filled with thousands of inspection points, and adding fixed sensors to all these assets is not economical. With Spot bringing the sensors to their assets, the team collects valuable information about the thermal condition of pumps and motors, as well as taking analog gauge readings.
  • The Langley Aerodrome №8 (LA-8) is a distributed-electric-propulsion, vertical-takeoff-and-landing (VTOL) aircraft that is being used for wind-tunnel testing and free-flight testing at the NASA Langley Research Center. The intent of the LA-8 project is to provide a low-cost, modular test bed for technologies in the area of advanced air mobility, which includes electric urban and short regional flight.

Upcoming events

ICRA 2022: 23–27 May 2022, Philadelphia

IEEE ARSO 2022: 28–30 May 2022, Long Beach

RSS 2022: 27 June – 1 July 2022, New York

ERF 2022: 28–30 June 2022, Rotterdam, The Netherlands

ROBOCUP 2022: 11–17 July 2022, Bangkok, Thailand

IEEE CASE 2022: 20–24 August 2022, Mexico City, Mexico

CLAWAR 2022: 12–14 September 2022, Açores, Portugal

MISC

  • The May issue of Science Robotics is out.

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
