RT/ New robotic system assesses mobility after stroke

Paradigm
29 min read · Dec 1, 2023

Robotics biweekly vol. 86, 16th November – 1st December

TL;DR

  • Stroke is a leading cause of long-term disability worldwide. Each year more than 15 million people worldwide have strokes, and three-quarters of stroke survivors will experience impairment, weakness and paralysis in their arms and hands. Many stroke survivors rely on their stronger arm to complete daily tasks, from carrying groceries to combing their hair, even when the weaker arm has the potential to improve. Researchers have developed a novel robotic system for collecting precise data on how people recovering from stroke use their arms spontaneously.
  • A novel technology to manage demands on mobile networks from multiple users using Terahertz frequencies has been developed by computer scientists.
  • Researchers taught an autonomous excavator to construct dry stone walls itself using boulders weighing several tons and demolition debris.
  • Tandem solar cells based on perovskite semiconductors convert sunlight to electricity more efficiently than conventional silicon solar cells. In order to make this technology ready for the market, further improvements with regard to stability and manufacturing processes are required. Researchers have succeeded in finding a way to predict the quality of the perovskite layers and consequently that of the resulting solar cells: Assisted by Machine Learning and new methods in AI, it is possible to assess their quality from variations in light emission already during the manufacturing process.
  • Scientists have shown that placing physical constraints on an artificially-intelligent system — in much the same way that the human brain has to develop and operate within physical and biological constraints — allows it to develop features of the brains of complex organisms in order to solve tasks.
  • Treating cancer is becoming increasingly complex, but also offers more and more possibilities. After all, the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. To be able to offer patients personalized therapies tailored to their disease, laborious and time-consuming analysis and interpretation of various data is required. Researchers have now studied whether generative AI tools such as ChatGPT can help with this step.
  • For the first time, researchers have succeeded in printing a robotic hand with bones, ligaments and tendons made of different polymers using a new laser scanning technique. The new technology makes it possible to 3D print special plastics with elastic qualities in one go. This opens up completely new possibilities for the production of soft robotic structures.
  • The new work by perception researchers is the first to demonstrate that people can tell what others are trying to learn just by watching their actions. The study reveals a key yet neglected aspect of human cognition, and one with implications for artificial intelligence.
  • A new AI software is now able to decipher difficult-to-read texts on cuneiform tablets. Instead of photos, the AI system uses 3D models of the tablets, delivering significantly more reliable results than previous methods. This makes it possible to search through the contents of multiple tablets to compare them with each other. It also paves the way for entirely new research questions.
  • A research team used 125 physical markers to understand the detailed mechanics of 44 different human facial motions. The aim was to better understand how to convey emotions with artificial faces. Beyond helping with the design of robots and androids, this research can also benefit computer graphics, facial recognition, and medical diagnoses.
  • And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Research

A metric for characterizing the arm nonuse workspace in poststroke individuals using a robot arm

by Nathaniel Dennler, Amelia Cain, Erica De Guzmann, Claudia Chiu, Carolee J. Winstein, Stefanos Nikolaidis, Maja J. Matarić in Science Robotics

Stroke is a leading cause of long-term disability worldwide. Each year more than 15 million people worldwide have strokes, and three-quarters of stroke survivors will experience impairment, weakness and paralysis in their arms and hands.

Many stroke survivors rely on their stronger arm to complete daily tasks, from carrying groceries to combing their hair, even when the weaker arm has the potential to improve. Breaking this habit, known as “arm nonuse” or “learned nonuse,” can improve strength and prevent injury. But, determining how much a patient is using their weaker arm outside of the clinic is challenging. In a classic case of observer’s paradox, the measurement has to be covert for the patient to behave spontaneously.

Now, USC researchers have developed a novel robotic system for collecting precise data on how people recovering from stroke use their arms spontaneously. Using a robotic arm to track 3D spatial information, and machine learning techniques to process the data, the method generates an “arm nonuse” metric, which could help clinicians accurately assess a patient’s rehabilitation progress. A socially assistive robot (SAR) provides instructions and encouragement throughout the challenge.

“Ultimately, we are trying to assess how much someone’s performance in physical therapy transfers into real life,” said Nathan Dennler, the paper’s lead author and a computer science doctoral student.

The research involved combined efforts from researchers in USC’s Thomas Lord Department of Computer Science and the Division of Biokinesiology and Physical Therapy. “This work brings together quantitative user-performance data collected using a robot arm, while also motivating the user to provide a representative performance thanks to a socially assistive robot,” said Maja Matarić, study co-author and Chan Soon-Shiong Chair and Distinguished Professor of Computer Science, Neuroscience, and Pediatrics. “This novel combination can serve as a more accurate and more motivating process for stroke patient assessment.”

Lead author Nathan Dennler, a computer science doctoral student, with the robotic arm, which provides precise 3D spatial information, and a socially assistive robot, which gives instruction and motivation throughout the assessment. Photo/Nathan Dennler.

For the study, the research team recruited 14 participants who were right-hand dominant before the stroke. Each participant placed their hands on the device’s home position — a 3D-printed box with touch sensors. A socially assistive robot (SAR) described the system’s mechanics and provided positive feedback, while the robot arm moved a button to different target locations in front of the participant (100 locations in total). A “reaching trial” began when the button lit up and the SAR cued the participant to move.

In the first phase, the participants were directed to reach for the button using whichever hand came naturally, mirroring everyday use. In the second phase, they were instructed to use the stroke-affected arm only, mirroring performance in physiotherapy or other clinical settings. Using machine learning, the team analyzed three measurements to determine a metric for arm nonuse: arm use probability, time to reach, and successful reach. A noticeable difference in performance between the phases would suggest nonuse of the affected arm.
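
To make the comparison concrete, below is a minimal sketch of how a nonuse-style score could combine the three measurements: the affected arm is flagged as underused when it performs well in the forced phase yet is rarely chosen in the spontaneous phase. This is an illustrative formula, not the authors’ published metric, and all names and weights are assumptions.

```python
import numpy as np

def nonuse_score(spontaneous, forced):
    """Toy arm-nonuse score (hypothetical formula, for illustration only).

    Each input is a list of per-target dicts with keys:
      'used_affected' (bool), 'time_to_reach' (seconds), 'success' (bool).
    """
    # Probability the affected arm was chosen when either arm was allowed
    p_use = np.mean([t["used_affected"] for t in spontaneous])
    # Capability of the affected arm when its use was required
    forced_success = np.mean([t["success"] for t in forced])
    forced_speed = np.mean([1.0 / t["time_to_reach"] for t in forced if t["success"]])
    capability = forced_success * forced_speed
    # Nonuse is high when the arm is capable (forced phase) but rarely chosen
    return capability * (1.0 - p_use)

# Example with two targets per phase
spontaneous = [{"used_affected": False, "time_to_reach": 1.2, "success": True},
               {"used_affected": True, "time_to_reach": 1.5, "success": True}]
forced = [{"used_affected": True, "time_to_reach": 1.4, "success": True},
          {"used_affected": True, "time_to_reach": 1.8, "success": True}]
print(round(nonuse_score(spontaneous, forced), 3))
```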

“The participants have a time limit to reach the button, so even though they know they’re being tested, they still have to react quickly,” said Dennler. “This way, we’re measuring gut reaction to the light turning on — which hand will you use on the spot?”

In chronic stroke survivors, the researchers observed high variability in hand choice and in the time to reach targets in the workspace. The method was reliable across repeated sessions, and participants rated it as simple to use, with above-average user experience scores. All participants found the interaction to be safe and easy to use. Crucially, the researchers found differences in arm use between participants, which could be used by healthcare professionals to more accurately track a patient’s stroke recovery.

“For example, one participant whose right side was more affected by their stroke exhibited lower use of their right arm specifically in areas higher on their right side, but maintained a high probability of using their right arm for lower areas on the same side,” said Dennler.

“Another participant exhibited more symmetric use but also compensated with their less-affected side slightly more often for higher-up points that were close to the mid-line.”

Participants felt that the system could be improved through personalization, which the team hopes to explore in future studies, in addition to incorporating other behavioral data such as facial expressions and different types of tasks. As a physiotherapist, Cain said the technology addresses many issues encountered with traditional methods of assessment, which “require the patient not to know they’re being tested, and are based on the tester’s observation which can leave more room for error.”

“This type of technology could provide rich, objective information about a stroke survivor’s arm use to their rehabilitation therapist,” said Cain. “The therapist could then integrate this information into their clinical decision-making process and better tailor their interventions to address the patient’s areas of weakness and build upon areas of strength.”

MDD-Enabled Two-Tier Terahertz Fronthaul in Indoor Industrial Cell-Free Massive MIMO

by Bohan Li, Diego Dupleich, Guoqing Xia, Huiyu Zhou, Yue Zhang, Pei Xiao, Lie-Liang Yang in IEEE Transactions on Communications

A novel technology to manage demands on mobile networks from multiple users using Terahertz frequencies has been developed by University of Leicester computer scientists.

As we see an explosion of devices joining the ‘internet of things’, this solution could not only improve speed and power consumption for users of mobile devices, but could also help reap the benefits from the next generation of mobile technologies, 6G.

Demands on the UK’s mobile telecommunications network are growing, with Mobile UK estimating that twenty-five million devices are connected to mobile networks, a number expected to rise to thirty billion by 2030. As the ‘internet of things’ grows, more and more technology will be competing for access to those networks.

State-of-the-art telecommunication technologies have been established for current applications in 5G, but with the increasing demands of more users and devices, these systems exhibit slower connections and costly energy consumption. They also suffer from a self-interference problem that severely affects communication quality and efficiency. To deal with these challenges, a technique known as multicarrier-division duplex (MDD) has recently been proposed and studied; it allows a receiver in the network to be nearly free of self-interference in the digital domain by relying only on fast Fourier transform (FFT) processing.
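
The core idea of MDD can be illustrated with a toy simulation: the desired signal and the node’s own transmission occupy disjoint subcarrier sets, so a plain FFT at the receiver separates them and the desired subcarriers see essentially no self-interference. The sketch below is an idealised illustration under assumed parameters (no channel, cyclic prefix, or hardware impairments), not the authors’ system.

```python
import numpy as np

N = 64                                  # subcarriers in one OFDM symbol
ul_set = np.arange(0, N, 2)             # subcarriers carrying the desired (uplink) signal
dl_set = np.arange(1, N, 2)             # subcarriers used for the node's own transmission

# Frequency-domain symbols: desired signal on the UL set, a strong
# self-interfering transmission on the DL set
X = np.zeros(N, dtype=complex)
X[ul_set] = np.exp(2j * np.pi * np.random.rand(ul_set.size))
S = np.zeros(N, dtype=complex)
S[dl_set] = 10 * np.exp(2j * np.pi * np.random.rand(dl_set.size))

# Time-domain superposition at the receive antenna
rx_time = np.fft.ifft(X) + np.fft.ifft(S)

# The FFT at the receiver separates the two sets: selecting only the UL bins
# leaves the desired signal essentially untouched by self-interference
rx_freq = np.fft.fft(rx_time)
leakage = np.max(np.abs(rx_freq[ul_set] - X[ul_set]))
print(f"residual self-interference on desired subcarriers: {leakage:.2e}")
```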

(a) A practical indoor industrial scenario, where the CPU is installed on the roof, while all the APs are installed on the wall or pillars. (b) The corresponding ray-tracing environment. (c) Power delay profile measured at AP 8 receiver. (d) Performance comparison among different fronthaul schemes versus the THz bandwidth in practical indoor industrial scenario.

This project proposed a novel technology to optimise the assignment of subcarrier sets and the number of access point clusters, improving communication quality across different networks. The team tested their technology in a simulation based on a real-world industrial setting, finding that it outperformed existing technologies: compared with other state-of-the-art approaches, it achieved a 10% reduction in power consumption.

Lead Principal Investigator Professor Huiyu Zhou from the University of Leicester School of Computing and Mathematical Sciences said: “With our proposed technology, 5G/6G systems require less energy consumption, have faster device selection and less resource allocation. Users may feel their mobile communication is quicker, wider and with reduced power demands.

“The University of Leicester is leading the development of AI solutions for device selection and access point clustering. AI technologies, reinforcement learning in particular, help us to search for the best parameters used in the proposed wireless communication systems quickly and effectively. This helps to save power, resources and human labour. Without using AI technologies, we will spend much more time on rendering the best parameters for system set-up and device selection in the network.”

The team is now continuing to optimise the proposed technologies and to reduce the computational complexity of the technique. The source code of the proposed method has been published openly to promote further research.

A framework for robotic excavation and dry stone construction using on-site materials

by Ryan Luke Johns, Martin Wermelinger, Ruben Mascaro, Dominic Jud, Ilmar Hurkxkens, Lauren Vasey, Margarita Chli, Fabio Gramazio, Matthias Kohler, Marco Hutter in Science Robotics

Until now, dry stone wall construction has involved vast amounts of manual labour. A multidisciplinary team of ETH Zurich researchers developed a method of using an autonomous excavator to construct a dry-stone wall that is six metres high and sixty-five metres long. Dry stone walls are resource efficient as they use locally sourced materials, such as concrete slabs that are low in embodied energy.

ETH Zurich researchers deployed an autonomous excavator, called HEAP, to build a six metre-high and sixty-five-metre-long dry-stone wall. The wall is embedded in a digitally planned and autonomously excavated landscape and park. The team of researchers included: Gramazio Kohler Research, the Robotics Systems Lab, Vision for Robotics Lab, and the Chair of Landscape Architecture. They developed this innovative design application as part of the National Centre of Competence in Research for Digital Fabrication (NCCR dfab).

Using sensors, the excavator can autonomously draw a 3D map of the construction site and localise existing building blocks and stones for the wall’s construction. Specifically designed tools and machine vision approaches enable the excavator to scan and grab large stones in its immediate environment. It can also register their approximate weight as well as their centre of gravity.

An algorithm determines the best position for each stone, and the excavator then conducts the task itself by placing the stones in the desired location. The autonomous machine can place 20 to 30 stones in a single consignment — about as many as one delivery could supply.
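
As a rough illustration of that selection step, the toy sketch below scores candidate footprint positions on a height-map of the wall, preferring flat support and a low resulting top surface, and greedily keeps the best one. The representation, scoring terms and weights are hypothetical and far simpler than the ETH Zurich planner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy representation: a stone is a rectangular footprint (w, d) with height h;
# the wall under construction is a 2-D heightmap (arbitrary units).
wall = np.zeros((20, 60))
stone = {"w": 4, "d": 6, "h": 2.0}

def score(stone, x, y, wall):
    """Hypothetical placement score: prefer flat support (good contact)
    and a low resulting top surface."""
    patch = wall[x:x + stone["w"], y:y + stone["d"]]
    support = patch.max()                 # the stone rests on the highest point
    flatness_penalty = (support - patch).mean()
    new_top = support + stone["h"]
    return -(2.0 * flatness_penalty + 1.0 * new_top)

def best_placement(stone, wall):
    # Greedy search over all feasible footprint positions
    best, best_xy = -np.inf, None
    for x in range(wall.shape[0] - stone["w"]):
        for y in range(wall.shape[1] - stone["d"]):
            s = score(stone, x, y, wall)
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy, best

# Drop a few random stones to roughen the wall, then place the next one greedily
for _ in range(5):
    x, y = rng.integers(0, 16), rng.integers(0, 54)
    wall[x:x + 4, y:y + 6] += rng.uniform(1.0, 3.0)

print(best_placement(stone, wall))
```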

Discovering Process Dynamics for Scalable Perovskite Solar Cell Manufacturing with Explainable AI

by Lukas Klein, Sebastian Ziegler, Felix Laufer, Charlotte Debus, Markus Götz, Klaus Maier‐Hein, Ulrich W. Paetzold, Fabian Isensee, Paul F. Jäger in Advanced Materials

Tandem solar cells based on perovskite semiconductors convert sunlight to electricity more efficiently than conventional silicon solar cells. In order to make this technology ready for the market, further improvements with regard to stability and manufacturing processes are required. Researchers of Karlsruhe Institute of Technology (KIT) and of two Helmholtz platforms — Helmholtz Imaging at the German Cancer Research Center (DKFZ) and Helmholtz AI — have succeeded in finding a way to predict the quality of the perovskite layers and consequently that of the resulting solar cells: Assisted by Machine Learning and new methods in Artificial Intelligence (AI), it is possible to assess their quality from variations in light emission already during the manufacturing process.

Perovskite tandem solar cells combine a perovskite solar cell with a conventional solar cell, for example based on silicon. These cells are considered a next-generation technology: They boast an efficiency of currently more than 33 percent, which is much higher than that of conventional silicon solar cells. Moreover, they use inexpensive raw materials and are easily manufactured. To achieve this level of efficiency, an extremely thin high-grade perovskite layer, whose thickness is only a fraction of that of a human hair, has to be produced. “Manufacturing these high-grade, multi-crystalline thin layers without any deficiencies or holes using low-cost and scalable methods is one of the biggest challenges,” says tenure-track professor Ulrich W. Paetzold, who conducts research at the Institute of Microstructure Technology and the Light Technology Institute of KIT. Even under apparently perfect lab conditions, there may be unknown factors that cause variations in semiconductor layer quality: “This drawback eventually prevents a quick start of industrial-scale production of these highly efficient solar cells, which are needed so badly for the energy turnaround,” explains Paetzold.

To find the factors that influence coating, an interdisciplinary team consisting of the perovskite solar cell experts of KIT has joined forces with specialists for Machine Learning and Explainable Artificial Intelligence (XAI) of Helmholtz Imaging and Helmholtz AI at the DKFZ in Heidelberg. The researchers developed AI methods that train and analyze neural networks using a huge dataset. This dataset includes video recordings that show the photoluminescence of the thin perovskite layers during the manufacturing process. Photoluminescence refers to the radiant emission of the semiconductor layers that have been excited by an external light source. “Since even experts could not see anything particular on the thin layers, the idea was born to train an AI system for Machine Learning (Deep Learning) to detect hidden signs of good or poor coating from the millions of data items on the videos,” Lukas Klein and Sebastian Ziegler from Helmholtz Imaging at the DKFZ explain.
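
A heavily simplified sketch of that training idea is shown below: summary features extracted from (here, synthetic) photoluminescence time series are used to predict whether a film will yield a high- or low-efficiency cell. The features, labels and classifier are illustrative stand-ins, not the KIT/DKFZ deep-learning pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for the real dataset: each sample is a time series of
# mean photoluminescence intensity recorded while a perovskite film forms.
n_samples, n_frames = 400, 120
pl_curves = rng.normal(1.0, 0.05, size=(n_samples, n_frames)).cumsum(axis=1)
# Hypothetical label: films whose emission rises sharply early on end up as
# high-efficiency cells (purely illustrative, not the paper's finding)
labels = (pl_curves[:, 30] > np.median(pl_curves[:, 30])).astype(int)

# Simple summary features per video: early rise, final level, variability
features = np.stack([
    pl_curves[:, 30] - pl_curves[:, 0],
    pl_curves[:, -1],
    pl_curves.std(axis=1),
], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```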

To filter and analyze the widely scattered indications output by the Deep Learning AI system, the researchers subsequently relied on methods of Explainable Artificial Intelligence.

Assisted by AI methods, researchers are striving to improve the manufacturing processes for highly efficient perovskite solar cells (Photo: Amadeus Bramsiepe, KIT)

The researchers found out experimentally that the photoluminescence varies during production and that this phenomenon has an influence on the coating quality. “Key to our work was the targeted use of XAI methods to see which factors have to be changed to obtain a high-grade solar cell,” Klein and Ziegler say. This is not the usual approach. In most cases, XAI is only used as a kind of guardrail to avoid mistakes when building AI models. “This is a change of paradigm: Gaining highly relevant insights in materials science in such a systematic way is a totally new experience.” It was indeed the conclusion drawn from the photoluminescence variation that enabled the researchers to take the next step. After the neural networks had been trained accordingly, the AI was able to predict whether each solar cell would achieve a low or a high level of efficiency based on which variation of light emission occurred at what point in the manufacturing process.
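
As an illustration of using attribution this way, the sketch below applies Integrated Gradients (via the captum library, an assumption on my part; the authors’ exact XAI toolbox is not named here) to a stand-in model, asking which frames of a photoluminescence series most influence the predicted efficiency class.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Tiny stand-in model: maps a photoluminescence time series (120 frames)
# to a high/low-efficiency score. Untrained here; in practice this would be
# the network trained on the real PL videos.
model = nn.Sequential(nn.Linear(120, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

pl_curve = torch.randn(1, 120)            # one (synthetic) PL time series

# Integrated Gradients attributes the "high efficiency" output (class 1)
# back to individual frames, highlighting when in the process the emission
# mattered most for the prediction.
ig = IntegratedGradients(model)
attributions = ig.attribute(pl_curve, target=1)
top_frames = attributions.abs().squeeze().topk(5).indices
print("most influential frames:", sorted(top_frames.tolist()))
```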

“These are extremely exciting results,” emphasizes Ulrich W. Paetzold. “Thanks to the combined use of AI, we have a solid clue and know which parameters need to be changed in the first place to improve production. Now we are able to conduct our experiments in a more targeted way and are no longer forced to look blindfolded for the needle in a haystack. This is a blueprint for follow-up research that also applies to many other aspects of energy research and materials science.”

Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings

by Jascha Achterberg, Danyal Akarca, D. J. Strouse, John Duncan, Duncan E. Astle in Nature Machine Intelligence

Cambridge scientists have shown that placing physical constraints on an artificially-intelligent system — in much the same way that the human brain has to develop and operate within physical and biological constraints — allows it to develop features of the brains of complex organisms in order to solve tasks.

As neural systems such as the brain organise themselves and make connections, they have to balance competing demands. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time optimising the network for information processing. This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organisational solutions.

Jascha Achterberg, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge said: “Not only is the brain great at solving complex problems, it does so while using very little energy. In our new work we show that considering the brain’s problem solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look like they do.”

Co-lead author Dr Danyal Akarca, also from the MRC CBSU, added: “This stems from a broad principle, which is that biological systems commonly evolve to make the most of what energetic resources they have available to them. The solutions they come to are often very elegant and reflect the trade-offs between various forces imposed on them.”

Task structure and seRNNs.

In a study, Achterberg, Akarca and colleagues created an artificial system intended to model a very simplified version of the brain and applied physical constraints. They found that their system went on to develop certain key characteristics and tactics similar to those found in human brains. Instead of real neurons, the system used computational nodes. Neurons and nodes are similar in function, in that each takes an input, transforms it, and produces an output, and a single node or neuron might connect to multiple others, all inputting information to be computed.

In their system, however, the researchers applied a ‘physical’ constraint on the system. Each node was given a specific location in a virtual space, and the further away two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organised.

The researchers gave the system a simple task to complete — in this case a simplified version of a maze navigation task typically given to animals such as rats and macaques when studying the brain, where it has to combine multiple pieces of information to decide on the shortest route to get to the end point. One of the reasons the team chose this particular task is because to complete it, the system needs to maintain a number of elements — start location, end location and intermediate steps — and once it has learned to do the task reliably, it is possible to observe, at different moments in a trial, which nodes are important. For example, one particular cluster of nodes may encode the finish locations, while others encode the available routes, and it is possible to track which nodes are active at different stages of the task.

Functional clustering and distribution of coding in space.

Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.

With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.
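
In the spirit of the paper’s spatial embedding, one way to impose such a cost in code is to penalise each recurrent weight in proportion to the Euclidean distance between the two nodes it connects. The PyTorch sketch below shows that idea in its simplest form; the architecture, constants and penalty details are illustrative rather than the authors’ exact seRNN formulation.

```python
import torch
import torch.nn as nn

n_hidden = 100
rnn = nn.RNN(input_size=10, hidden_size=n_hidden, batch_first=True)
readout = nn.Linear(n_hidden, 4)

# Give each hidden node a fixed position in a virtual 3-D space and precompute
# the pairwise Euclidean distances between nodes.
positions = torch.rand(n_hidden, 3)
dist = torch.cdist(positions, positions)

def spatial_penalty(rnn, dist, strength=1e-3):
    """Make communication between distant nodes 'expensive': an L1-style cost
    on recurrent weights, weighted by the distance each connection spans."""
    w_rec = rnn.weight_hh_l0              # recurrent (node-to-node) weights
    return strength * (w_rec.abs() * dist).sum()

# Inside a training step, the penalty is simply added to the task loss:
x = torch.randn(8, 20, 10)                # batch of 8 sequences, 20 steps each
out, _ = rnn(x)
logits = readout(out[:, -1])
task_loss = nn.functional.cross_entropy(logits, torch.randint(0, 4, (8,)))
loss = task_loss + spatial_penalty(rnn, dist)
loss.backward()
```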

When the system was asked to perform the task under these constraints, it used some of the same tricks used by real human brains to solve the task. For example, to get around the constraints, the artificial systems started to develop hubs — highly connected nodes that act as conduits for passing information across the network.

More surprising, however, was that the response profiles of individual nodes themselves began to change: in other words, rather than having a system where each node codes for one particular property of the maze task, like the goal location or the next choice, nodes developed a flexible coding scheme. This means that at different moments in time nodes might be firing for a mix of the properties of the maze. For instance, the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations. This is another feature seen in the brains of complex organisms.

Co-author Professor Duncan Astle, from Cambridge’s Department of Psychiatry, said: “This simple constraint — it’s harder to wire nodes that are far apart — forces artificial systems to produce some quite complicated characteristics. Interestingly, they are characteristics shared by biological systems like the human brain. I think that tells us something fundamental about why our brains are organised the way they are.”

The team are hopeful that their AI system could begin to shed light on how these constraints shape differences between people’s brains, and contribute to differences seen in those who experience cognitive or mental health difficulties. Co-author Professor John Duncan from the MRC CBSU said: “These artificial brains give us a way to understand the rich and bewildering data we see when the activity of real neurons is recorded in real brains.”

Achterberg added: “Artificial ‘brains’ allow us to ask questions that it would be impossible to look at in an actual biological system. We can train the system to perform tasks and then play around experimentally with the constraints we impose, to see if it begins to look more like the brains of particular individuals.”

The findings are likely to be of interest to the AI community, too, where they could allow for the development of more efficient systems, particularly in situations where there are likely to be physical constraints.

Dr Akarca said: “AI researchers are constantly trying to work out how to make complex, neural systems that can encode and perform in a flexible way that is efficient. To achieve this, we think that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we’ve created is much lower than you would find in a typical AI system.”

Many modern AI solutions involve using architectures that only superficially resemble a brain. The researchers say their work shows that the type of problem the AI is solving will influence which architecture is the most powerful to use.

Achterberg said: “If you want to build an artificially-intelligent system that solves similar problems to humans, then ultimately the system will end up looking much closer to an actual brain than systems running on large compute clusters that specialise in very different tasks to those carried out by humans. The architecture and structure we see in our artificial ‘brain’ is there because it is beneficial for handling the specific brain-like challenges it faces.”

This means that robots that have to process a large amount of constantly changing information with finite energetic resources could benefit from having brain structures not dissimilar to ours.

Achterberg added: “Brains of robots that are deployed in the real physical world are probably going to look more like our brains because they might face the same challenges as us. They need to constantly process new information coming in through their sensors while controlling their bodies to move through space towards a goal. Many systems will need to run all their computations with a limited supply of electric energy and so, to balance these energetic constraints with the amount of information they need to process, they will probably need a brain structure similar to ours.”

Leveraging Large Language Models for Decision Support in Personalized Oncology

by Manuela Benary, Xing David Wang, Max Schmidt, Dominik Soll, Georg Hilfenhaus, Mani Nassir, Christian Sigler, Maren Knödler, Ulrich Keller, Dieter Beule, Ulrich Keilholz, Ulf Leser, Damian T. Rieke in JAMA Network Open

Treating cancer is becoming increasingly complex, but also offers more and more possibilities. After all, the better a tumor’s biology and genetic features are understood, the more treatment approaches there are. To be able to offer patients personalized therapies tailored to their disease, laborious and time-consuming analysis and interpretation of various data is required. Researchers at Charité — Universitätsmedizin Berlin and Humboldt-Universität zu Berlin have now studied whether generative artificial intelligence (AI) tools such as ChatGPT can help with this step. This is one of many projects at Charité analyzing the opportunities unlocked by AI in patient care.

If the body can no longer repair certain genetic mutations itself, cells begin to grow unchecked, producing a tumor. The crucial factor in this phenomenon is an imbalance of growth-inducing and growth-inhibiting factors, which can result from changes in oncogenes — genes with the potential to cause cancer — for example. Precision oncology, a specialized field of personalized medicine, leverages this knowledge by using specific treatments such as low-molecular-weight inhibitors and antibodies to target and disable hyperactive oncogenes.

The first step in identifying which genetic mutations are potential targets for treatment is to analyze the genetic makeup of the tumor tissue. The molecular variants of the tumor DNA that are necessary for precision diagnosis and treatment are determined. Then the doctors use this information to craft individual treatment recommendations. In especially complex cases, this requires knowledge from various fields of medicine. At Charité, this is when the “molecular tumor board” (MTB) meets: Experts from the fields of pathology, molecular pathology, oncology, human genetics, and bioinformatics work together to analyze which treatments seem most promising based on the latest studies. It is a very involved process, ultimately culminating in a personalized treatment recommendation.

Organoid model of a tumor. Unchecked cell growth and targeted treatments can be simulated in these models. © Ana Cristina Afonseca Pestana

Dr. Damian Rieke, a doctor at Charité, Prof. Ulf Leser and Xing David Wang of Humboldt-Universität zu Berlin, and Dr. Manuela Benary, a bioinformatics specialist at Charité, wondered whether artificial intelligence might be able to help at this juncture. In a study, they worked with other researchers to examine the possibilities and limitations of large language models such as ChatGPT in automatically scanning scientific literature with an eye to selecting personalized treatments.

“We prompted the models to identify personalized treatment options for fictitious cancer patients and then compared the results with the recommendations made by experts,” Rieke explains. His conclusion: “AI models were able to identify personalized treatment options in principle — but they weren’t even close to the abilities of human experts.”

The team created ten molecular tumor profiles of fictitious patients for the experiment. A human physician specialist and four large language models were then tasked with identifying a personalized treatment option. These results were presented to the members of the MTB for assessment, without them knowing which recommendation came from which source.
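
A minimal sketch of that kind of query, assuming the OpenAI Python SDK and a purely fictitious profile, might look as follows; the model name, prompt wording and profile are illustrative, and the study evaluated several different large language models.

```python
from openai import OpenAI   # assumes the OpenAI Python SDK and an API key

client = OpenAI()

# Fictitious molecular tumour profile, in the spirit of the study's test cases
profile = (
    "Fictitious patient, metastatic colorectal cancer. "
    "Molecular findings: KRAS G12C mutation, ERBB2 amplification, MSI-stable."
)

response = client.chat.completions.create(
    model="gpt-4",          # illustrative; the study compared several models
    messages=[
        {"role": "system",
         "content": "You are assisting a molecular tumor board. "
                    "Suggest personalized, biomarker-matched treatment options "
                    "with supporting evidence."},
        {"role": "user", "content": profile},
    ],
)
print(response.choices[0].message.content)
```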

“There were some surprisingly good treatment options identified by AI in isolated cases,” Benary reports. “But large language models perform much worse than human experts.” Beyond that, data protection, privacy, and reproducibility pose particular challenges in relation to the use of artificial intelligence with real-world patients, she notes.

Still, Rieke is fundamentally optimistic about the potential uses of AI in medicine: “In the study, we also showed that the performance of AI models is continuing to improve as the models advance. This could mean that AI can provide more support for even complex diagnostic and treatment processes in the future — as long as humans are the ones to check the results generated by AI and have the final say about treatment.”

Vision-controlled jetting for composite systems and robots

by Thomas J. K. Buchner, Simon Rogler, et al. in Nature

3D printing is advancing rapidly, and the range of materials that can be used has expanded considerably. While the technology was previously limited to fast-curing plastics, it has now been made suitable for slow-curing plastics as well. These have decisive advantages as they have enhanced elastic properties and are more durable and robust.

The use of such polymers is made possible by a new technology developed by researchers at ETH Zurich and a US start-up. As a result, researchers can now 3D print complex, more durable robots from a variety of high-quality materials in one go. This new technology also makes it easy to combine soft, elastic, and rigid materials. The researchers can also use it to create delicate structures and parts with cavities as desired.

Using the new technology, researchers at ETH Zurich have succeeded for the first time in printing a robotic hand with bones, ligaments and tendons made of different polymers in one go. “We wouldn’t have been able to make this hand with the fast-curing polyacrylates we’ve been using in 3D printing so far,” explains Thomas Buchner, a doctoral student in the group of ETH Zurich robotics professor Robert Katzschmann and first author of the study. “We’re now using slow-curing thiolene polymers. These have very good elastic properties and return to their original state much faster after bending than polyacrylates.” This makes thiolene polymers ideal for producing the elastic ligaments of the robotic hand.

In addition, the stiffness of thiolenes can be fine-tuned very well to meet the requirements of soft robots. “Robots made of soft materials, such as the hand we developed, have advantages over conventional robots made of metal. Because they’re soft, there is less risk of injury when they work with humans, and they are better suited to handling fragile goods,” Katzschmann explains.

Multimaterial 3D printing of soft and hard materials at a high resolution via vision-controlled jetting.

3D printers typically produce objects layer by layer: nozzles deposit a given material in viscous form at each point; a UV lamp then cures each layer immediately. Previous methods involved a device that scraped off surface irregularities after each curing step. This works only with fast-curing polyacrylates. Slow-curing polymers such as thiolenes and epoxies would gum up the scraper.

To accommodate the use of slow-curing polymers, the researchers developed 3D printing further by adding a 3D laser scanner that immediately checks each printed layer for any surface irregularities. “A feedback mechanism compensates for these irregularities when printing the next layer by calculating any necessary adjustments to the amount of material to be printed in real time and with pinpoint accuracy,” explains Wojciech Matusik, a professor at the Massachusetts Institute of Technology (MIT) in the US and co-author of the study. This means that instead of smoothing out uneven layers, the new technology simply takes the unevenness into account when printing the next layer.
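
Conceptually, the feedback loop resembles the toy height-map controller sketched below: the scanner’s measurement of what was actually deposited is compared with the target, and the shortfall or excess is folded into the command for the next layer. The numbers and noise model are made up for illustration and are not the published controller.

```python
import numpy as np

rng = np.random.default_rng(0)

layer_height = 0.03                       # nominal layer thickness (mm), illustrative
target = np.zeros((64, 64))               # accumulated target height of the part
printed = np.zeros((64, 64))              # height actually measured by the scanner

for layer in range(5):
    target += layer_height
    # Command for this layer: nominal material plus a correction for the
    # error left over from previous layers (the feedback step).
    error = target - printed - layer_height
    command = layer_height + np.clip(error, -layer_height, layer_height)
    # The jetting process deposits roughly what was commanded, with local noise
    # standing in for the surface irregularities of slow-curing polymers.
    deposited = command * rng.normal(1.0, 0.05, size=command.shape)
    printed += deposited

print("residual RMS height error (mm):",
      float(np.sqrt(((target - printed) ** 2).mean())))
```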

Seeing and understanding epistemic actions

by Sholei Croom, Hanbei Zhou, Chaz Firestone in Proceedings of the National Academy of Sciences

When researchers asked hundreds of people to watch other people shake boxes, it took just seconds for almost all of them to figure out what the shaking was for.

The deceptively simple work by Johns Hopkins University perception researchers is the first to demonstrate that people can tell what others are trying to learn just by watching their actions. The study reveals a key yet neglected aspect of human cognition, and one with implications for artificial intelligence.

“Just by looking at how someone’s body is moving, you can tell what they are trying to learn about their environment,” said author Chaz Firestone, an assistant professor of psychological and brain sciences who investigates how vision and thought interact. “We do this all the time, but there has been very little research on it.”

Recognizing another person’s actions is something we do every day, whether it’s guessing which way someone is headed or figuring out what object they’re reaching for. These are known as “pragmatic actions.” Numerous studies have shown people can quickly and accurately identify these actions just by watching them. The new Johns Hopkins work investigates a different kind of behavior: “epistemic actions,” which are performed when someone is trying to learn something.

For instance, someone might put their foot in a swimming pool because they’re going for a swim, or they might put their foot in a pool to test the water. Though the actions are similar, there are differences, and the Johns Hopkins team surmised that observers would be able to detect another person’s “epistemic goals” just by watching them.

Top: Players were filmed trying to determine the contents of a box (specifically, the number or shape of the objects inside), only by shaking it. Later experiments vary the box’s contents. Bottom: Observers watched these videos and judged which came from which round: Who was shaking for number and who was shaking for shape?

Across several experiments, researchers asked a total of 500 participants to watch two videos in which someone picks up a box full of objects and shakes it around. One shows someone shaking a box to figure out the number of objects inside it. The other shows someone shaking a box to figure out the shape of the objects inside. Almost every participant knew who was shaking for the number and who was shaking for shape.

“What is surprising to me is how intuitive this is,” said lead author Sholei Croom, a Johns Hopkins graduate student. “People really can suss out what others are trying to figure out, which shows how we can make these judgments even though what we’re looking at is very noisy and changes from person to person.”

Added Firestone, “When you think about all the mental calculations someone must make to understand what someone else is trying to learn, it’s a remarkably complicated process. But our findings show it’s something people do easily.”

The findings could also inform the development of artificial intelligence systems designed to interact with humans, such as a commercial robot assistant that can look at a customer and guess what they’re looking for.

“It’s one thing to know where someone is headed or what product they are reaching for,” Firestone said. “But it’s another thing to infer whether someone is lost or what kind of information they are seeking.”

In the future the team would like to pursue whether people can observe someone’s epistemic intent versus their pragmatic intent — what are they up to when they dip their foot in the pool. They’re also interested in when these observational skills emerge in human development and if it’s possible to build computational models to detail exactly how observed physical actions reveal epistemic intent.

R-CNN based Polygonal Wedge Detection Learned from Annotated 3D Renderings and Mapped Photographs of Open Data Cuneiform Tablets

by Stötzner E., Homburg T., Bullenkamp J.P. & Mara H in Eurographics Workshop on Graphics and Cultural Heritage

A new artificial intelligence (AI) software is now able to decipher difficult-to-read texts on cuneiform tablets. It was developed by a team from Martin Luther University Halle-Wittenberg (MLU), Johannes Gutenberg University Mainz, and Mainz University of Applied Sciences. Instead of photos, the AI system uses 3D models of the tablets, delivering significantly more reliable results than previous methods. This makes it possible to search through the contents of multiple tablets to compare them with each other. It also paves the way for entirely new research questions.

In their new approach, the researchers used 3D models of nearly 2,000 cuneiform tablets, including around 50 from a collection at MLU. According to estimates, around one million such tablets still exist worldwide. Many of them are over 5,000 years old and are thus among humankind’s oldest surviving written records. They cover an extremely wide range of topics: “Everything can be found on them: from shopping lists to court rulings. The tablets provide a glimpse into humankind’s past several millennia ago. However, they are heavily weathered and thus difficult to decipher even for trained eyes,” says Hubert Mara, an assistant professor at MLU.

This is because the cuneiform tablets are unfired chunks of clay into which writing has been pressed. To complicate matters, the writing system back then was very complex and encompassed several languages. Therefore, not only are optimal lighting conditions needed to recognise the symbols correctly, a lot of background knowledge is required as well. “Up until now it has been difficult to access the content of many cuneiform tablets at once — you sort of need to know exactly what you are looking for and where,” Mara adds.

His lab came up with the idea of developing a system of artificial intelligence which is based on 3D models. The new system deciphers characters better than previous methods. In principle, the AI system works along the same lines as OCR software (optical character recognition), which converts images of writing and text into machine-readable text. This has many advantages. Once converted into computer text, the writing can be more easily read or searched through. “OCR usually works with photographs or scans. This is no problem for ink on paper or parchment. In the case of cuneiform tablets, however, things are more difficult because the light and the viewing angle greatly influence how well certain characters can be identified,” explains Ernst Stötzner from MLU. He developed the new AI system as part of his master’s thesis under Hubert Mara.

The team trained the new AI software using three-dimensional scans and additional data. Much of this data was provided by Mainz University of Applied Sciences, which is overseeing a large edition project for 3D models of clay tablets. The AI system subsequently did succeed in reliably recognising the symbols on the tablets. “We were surprised to find that our system even works well with photographs, which are actually a poorer source material,” says Stötzner.
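
For readers curious what such a detector looks like in code, the sketch below fine-tunes a generic, COCO-pretrained Faster R-CNN from torchvision to predict wedge classes. This is a standard detection recipe rather than the authors’ polygonal-wedge pipeline, and the class count and dummy training data are illustrative.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a COCO-pretrained Faster R-CNN and replace the box head so it
# predicts cuneiform wedge classes instead of the COCO categories.
num_classes = 1 + 4   # background + hypothetical wedge types (illustrative)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Training then proceeds as for any detection task: each rendered 3D view (or
# mapped photograph) is paired with bounding boxes of annotated wedge marks.
model.train()
images = [torch.rand(3, 256, 256)]                              # dummy image
targets = [{"boxes": torch.tensor([[30.0, 40.0, 80.0, 90.0]]),  # dummy wedge box
            "labels": torch.tensor([1])}]
losses = model(images, targets)    # dict of detection losses in training mode
print({k: round(v.item(), 3) for k, v in losses.items()})
```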

Visualization and analysis of skin strain distribution in various human facial actions

by Takeru MISU, Hisashi ISHIHARA, So NAGASHIMA, Yusuke DOI, Akihiro NAKATANI in Mechanical Engineering Journal

Robots able to display human emotion have long been a mainstay of science fiction stories. Now, Japanese researchers have been studying the mechanical details of real human facial expressions to bring those stories closer to reality.

In a recent study, a multi-institutional research team led by Osaka University has begun mapping out the intricacies of human facial movements. The researchers used 125 tracking markers attached to a person’s face to closely examine 44 different, singular facial actions, such as blinking or raising the corner of the mouth.

Every facial expression comes with a variety of local deformations as muscles stretch and compress the skin. Even the simplest motions can be surprisingly complex. Our faces contain a collection of different tissues below the skin, from muscle fibers to fatty adipose, all working in concert to convey how we’re feeling. This includes everything from a big smile to a slight raise of the corner of the mouth. This level of detail is what makes facial expressions so subtle and nuanced, in turn making them challenging to replicate artificially. Until now, such work has relied on much simpler measurements of overall face shape and of the motion of selected points on the skin before and after movements.
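
A tiny example of the kind of quantity such marker data supports: from three markers tracked before and after a facial action, one can estimate the local deformation gradient and a Green-Lagrange strain tensor for the skin patch they span. The 2D sketch below is a simplification of the study’s 3D analysis, with made-up marker positions.

```python
import numpy as np

def triangle_strain(ref, cur):
    """Green-Lagrange strain of a skin patch spanned by three tracking markers.
    ref, cur: 3x2 arrays with marker positions before and after a facial action."""
    # Edge vectors of the marker triangle in the reference and current poses
    D_ref = np.column_stack((ref[1] - ref[0], ref[2] - ref[0]))
    D_cur = np.column_stack((cur[1] - cur[0], cur[2] - cur[0]))
    F = D_cur @ np.linalg.inv(D_ref)          # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(2))           # Green-Lagrange strain tensor
    return E

# Example: three markers near the mouth corner, stretched 10% horizontally
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cur = np.array([[0.0, 0.0], [1.1, 0.0], [0.0, 1.0]])
print(np.round(triangle_strain(ref, cur), 3))   # E_xx is about 0.105, shear is 0
```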

“Our faces are so familiar to us that we don’t notice the fine details,” explains Hisashi Ishihara, main author of the study. “But from an engineering perspective, they are amazing information display devices. By looking at people’s facial expressions, we can tell when a smile is hiding sadness, or whether someone’s feeling tired or nervous.”

Information gathered by this study can help researchers working with artificial faces, both created digitally on screens and, ultimately, the physical faces of android robots. Precise measurements of human faces, to understand all the tensions and compressions in facial structure, will allow these artificial expressions to appear both more accurate and natural.

“The facial structure beneath our skin is complex,” says Akihiro Nakatani, senior author. “The deformation analysis in this study could explain how sophisticated expressions, which comprise both stretched and compressed skin, can result from deceivingly simple facial actions.”

This work has applications beyond robotics as well, for example, improved facial recognition or medical diagnoses, the latter of which currently relies on doctor intuition to notice abnormalities in facial movement.

So far, this study has only examined the face of one person, but the researchers hope to use their work as a jumping off point to gain a fuller understanding of human facial motions. As well as helping robots to both recognize and convey emotion, this research could also help to improve facial movements in computer graphics, like those used in movies and video games, helping to avoid the dreaded ‘uncanny valley’ effect.

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
