RT/ Soft robot detects damage, heals itself
Robotics biweekly vol.64, 29th November — 15th December
TL;DR
- Engineers have created a soft robot capable of detecting when and where it was damaged — and then healing itself on the spot.
- Researchers have developed a synthetic system that responds to environmental changes in the same way as living organisms, using a feedback loop to maintain its internal conditions. This not only keeps the material’s conditions stable but also makes it possible to build mechanisms that react dynamically to their environment, an important trait for interactive materials and soft robotics.
- Machine learning drives self-discovery of pulses that stabilize quantum systems in the face of environmental noise.
- AI shows potential for creating literary art that rivals human work without human help. Haiku generated by AI alone were compared with haiku produced through a contrasting, human-in-the-loop method. Evaluators found it challenging to distinguish haiku penned by humans from those generated by AI, and they showed algorithm aversion by unconsciously giving lower scores to poems they felt were AI-generated.
- Imagine a team of humans and robots working together to process online orders — real-life workers strategically positioned among their automated coworkers who are moving intelligently back and forth in a warehouse space, picking items for shipping to the customer. This could become a reality sooner rather than later, thanks to researchers who are working to speed up the online delivery process by developing a software model designed to make ‘transport’ robots smarter.
- A multi-institution research team has developed an optical chip that can train machine learning hardware.
- Researchers created OmniWheg, a robotic system that can adapt its configuration while navigating its surrounding environment, seamlessly changing from a wheeled to a legged robot.
- Researchers have developed a very simple, small, soft-bodied robot based on hair-clip technology.
- Researchers used deep reinforcement learning to steer atoms into a lattice shape, with a view to building new materials or nanodevices.
- Lightweight robotic leg prosthesis replicates the biomechanics of the knee, ankle and toe joint.
- Robotics upcoming events. And more!
Robotics market
The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025.
Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars).
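For readers who want to sanity-check the forecast, the compound-growth arithmetic is a one-liner. The 2018 baseline printed below is not reported here; it is simply what the stated 26 percent CAGR and the roughly 210 billion dollar 2025 figure imply.

```python
# Rough CAGR sanity check (illustrative only; the baseline is implied, not reported).
cagr = 0.26          # ~26% compound annual growth rate
target_2025 = 210.0  # billion USD, per the forecast
years = 2025 - 2018  # span covered by the chart

implied_2018 = target_2025 / (1 + cagr) ** years
print(f"Implied 2018 market size: ~{implied_2018:.0f} billion USD")

# Forward projection from that implied baseline
for year in range(2018, 2026):
    value = implied_2018 * (1 + cagr) ** (year - 2018)
    print(year, f"{value:.0f}")
```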
Latest News & Research
Autonomous self-healing optical sensors for damage intelligent soft-bodied systems
by Hedan Bai, Young Seong Kim, Robert F. Shepherd in Science Advances
Cornell University engineers have created a soft robot capable of detecting when and where it was damaged — and then healing itself on the spot.
“Our lab is always trying to make robots more enduring and agile, so they operate longer with more capabilities,” said Rob Shepherd, associate professor of mechanical and aerospace engineering. “If you make robots operate for a long time, they’re going to accumulate damage. And so how can we allow them to repair or deal with that damage?”
Shepherd’s Organic Robotics Lab has developed stretchable fiber-optic sensors for use in soft robots and related components — from skin to wearable technology.
For self-healing to work, Shepherd says the key first step is that the robot must be able to identify that there is, in fact, something that needs to be fixed. To do this, researchers have pioneered a technique using fiber-optic sensors coupled with LED lights capable of detecting minute changes on the surface of the robot. These sensors are combined with a polyurethane urea elastomer that incorporates hydrogen bonds, for rapid healing, and disulfide exchanges, for strength. The resulting SHeaLDS — self-healing light guides for dynamic sensing — provides a damage-resistant soft robot that can self-heal from cuts at room temperature without any external intervention.
To demonstrate the technology, the researchers installed the SHeaLDS in a soft robot resembling a four-legged starfish and equipped it with feedback control. Researchers then punctured one of its legs six times, after which the robot was then able to detect the damage and self-heal each cut in about a minute. The robot could also autonomously adapt its gait based on the damage it sensed. While the material is sturdy, it is not indestructible.
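As a purely hypothetical illustration of that detect-heal-adapt loop, a controller might watch each leg's optical signal, flag a cut when the transmitted intensity drops, and keep that leg out of the gait until it has resealed. The toy simulation below assumes a simple intensity threshold and the roughly one-minute healing time mentioned above; everything else is invented for illustration and is not from the paper.

```python
import random

HEAL_TIME_STEPS = 60       # ~1 minute per cut reported in the demo; 1 step = 1 s (assumption)
DAMAGE_THRESHOLD = 0.7     # hypothetical: flag damage when intensity falls below 70% of baseline

def simulate_sensor(is_damaged):
    """Toy stand-in for a fiber-optic intensity reading (1.0 = fully intact light guide)."""
    return random.uniform(0.2, 0.5) if is_damaged else random.uniform(0.9, 1.0)

legs = ["leg1", "leg2", "leg3", "leg4"]
healing_timer = {leg: 0 for leg in legs}
damaged = {leg: False for leg in legs}
damaged["leg2"] = True     # simulate a single cut on one leg

for t in range(120):
    for leg in legs:
        if healing_timer[leg] == 0 and simulate_sensor(damaged[leg]) < DAMAGE_THRESHOLD:
            healing_timer[leg] = HEAL_TIME_STEPS      # damage detected: start healing countdown
        if healing_timer[leg] > 0:
            healing_timer[leg] -= 1
            if healing_timer[leg] == 0:
                damaged[leg] = False                  # cut has resealed
    usable = [leg for leg in legs if healing_timer[leg] == 0]
    # a gait controller would step only on `usable` legs here
```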
“They have similar properties to human flesh,” Shepherd said. “You don’t heal well from burning, or from things with acid or heat, because that will change the chemical properties. But we can do a good job of healing from cuts.”
Shepherd plans to integrate SHeaLDS with machine learning algorithms capable of recognizing tactile events to eventually create “a very enduring robot that has a self-healing skin but uses the same skin to feel its environment to be able to do more tasks.”
Feedback-controlled hydrogels with homeostatic oscillations and dissipative signal transduction
by Hang Zhang, Hao Zeng, Amanda Eklund, Hongshuang Guo, Arri Priimagi, Olli Ikkala in Nature Nanotechnology
Researchers have developed a synthetic system that responds to environmental changes in the same way as living organisms, using a feedback loop to maintain its internal conditions. This not only keeps the material’s conditions stable but also makes it possible to build mechanisms that react dynamically to their environment, an important trait for interactive materials and soft robotics.
Living systems, from individual cells up to organisms, use feedback systems to maintain their conditions. For example, we sweat to cool down when we’re too warm, and a variety of systems work to keep our blood pressure and chemistry in the right range. These homeostatic systems make living organisms robust by enabling them to cope with changes in their environment. While feedback is important in some artificial systems, such as thermostats, those systems don’t have the dynamic adaptability or robustness of homeostatic living systems.
Now, researchers at Aalto University and Tampere University have developed a system of materials that maintains its state in a manner similar to living systems. The new system consists of two side-by-side gels with different properties. Interactions between the gels make the system respond homeostatically to environmental changes, keeping its temperature within a narrow range when stimulated by a laser.
‘The tissues of living organisms are typically soft, elastic and deformable,’ says Hang Zhang, an Academy of Finland postdoctoral researcher at Aalto who was one of the lead authors of the study. ‘The gels used in our system are similar. They are soft polymers swollen in water, and they can provide a fascinating variety of responses upon environmental stimuli.’
The laser shines through the first gel and then bounces off a mirror onto the second gel, where it heats suspended gold nanoparticles. The heat moves through the second gel to the first, raising its temperature. The first gel is only transparent when it is below a specific temperature; once it gets hotter, it becomes opaque. This change stops the laser from reaching the mirror and heating the second gel. The two gels then cool down until the first becomes transparent again, at which point the laser passes through and the heating process starts again.
In other words, the arrangement of the laser, gels and mirror creates a feedback loop that keeps the gels at a specific temperature. At higher temperatures, the laser is blocked and can’t heat the gold nanoparticles; at lower temperatures, the first gel becomes transparent, so the laser shines through and heats the gold particles.
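The loop can be captured in a few lines of toy simulation: below a threshold temperature the first gel is transparent, the laser heats the gold nanoparticles, and the temperature climbs; above the threshold the laser is blocked and the gels cool back down. All constants in the sketch below are arbitrary illustrative values, not measurements from the study.

```python
# Toy model of the laser/gel/mirror feedback loop (all constants are arbitrary).
T_THRESHOLD = 33.0    # °C above which the first gel turns opaque (illustrative value)
T_AMBIENT = 22.0
HEATING_RATE = 0.8    # °C per step while the laser reaches the gold nanoparticles
COOLING_COEFF = 0.02  # fraction of the gap to ambient lost per step

temperature = T_AMBIENT
history = []
for step in range(500):
    laser_transmitted = temperature < T_THRESHOLD      # first gel transparent only when cool
    if laser_transmitted:
        temperature += HEATING_RATE                    # nanoparticles absorb the laser and heat up
    temperature -= COOLING_COEFF * (temperature - T_AMBIENT)  # passive cooling toward ambient
    history.append(temperature)

print(f"final temperature ≈ {history[-1]:.1f} °C, oscillating near the {T_THRESHOLD} °C threshold")
```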
‘Like a living system, our homeostatic system is dynamic. The temperature oscillates around the threshold, but the range of the oscillation is pretty small and is robust to outside disturbances. It’s a robust homeostatic system,’ says Hao Zeng, an Academy of Finland research fellow at Tampere University who was the other lead author of the study.
The researchers then built touch-responsive triggers on top of the feedback system. To accomplish this, they added mechanical components that respond to changes in temperature. Touching the gel system in the right way pushes it out of its steady state, and the resulting change in temperature causes the mechanical component to deform. Afterwards, everything returns to its original condition.
The team designed two systems that respond to different types of touch. In one case, a single touch triggers the response, just as a touch-me-not mimosa plant folds its leaves when stroked. The second setup only responds to repeated touches, in the same way as a Venus flytrap needs to be touched twice in 30 seconds to make it snap shut.
‘We can trigger a snapping behaviour with mechanical touches at suitable intervals, just like a Venus flytrap. Our artificial material system can discriminate between low-frequency and high-frequency touches,’ explains Professor Arri Priimägi of Tampere University.
The researchers also showed how the homeostatic system could control a dynamic colour display or even push cargo along its body. They emphasize that these demonstrations showcase only a handful of the possibilities opened up by the new material concept.
‘Life-inspired materials offer a new paradigm for dynamic and adaptive materials which will likely attract researchers for years to come,’ says Professor Olli Ikkala of Aalto University. ‘Carefully designed systems that mimic some of the basic behaviours of living systems will pave the way for truly smart materials and interactive soft robotics.’
Accelerated motional cooling with deep reinforcement learning
by Bijita Sarma, Sangkha Borah, A Kani, Jason Twamley in Physical Review Research
It’s easy to control the trajectory of a basketball: all we have to do is apply mechanical force coupled with human skill. But controlling the movement of quantum systems such as atoms and electrons is much more challenging, as these minuscule scraps of matter often fall prey to perturbations that knock them off their path in unpredictable ways. Movement within the system degrades — a process called damping — and noise from environmental effects such as temperature also disturbs its trajectory.
One way to counteract the damping and the noise is to apply stabilizing pulses of light or voltage of fluctuating intensity to the quantum system. Now researchers from the Okinawa Institute of Science and Technology (OIST) in Japan have shown that they can use artificial intelligence to discover optimized pulses that cool a micro-mechanical object toward its quantum ground state and control its motion.
Micro-mechanical objects, which are large compared to an atom or electron, behave classically when kept at a high temperature, or even at room temperature. However, if such mechanical modes can be cooled down to their lowest energy state, which physicists call the ground state, quantum behaviour can be realised in such systems. These mechanical modes can then be used as ultra-sensitive sensors for force, displacement, and gravitational acceleration, as well as for quantum information processing and computing.
“Technologies built from quantum systems offer immense possibilities,” said Dr. Bijita Sarma, the article’s lead author and a Postdoctoral Scholar at OIST Quantum Machines Unit in the lab of Professor Jason Twamley. “But to benefit from their promise for ultraprecise sensor design, high-speed quantum information processing, and quantum computing, we must learn to design ways to achieve fast cooling and control of these systems.”
The machine learning-based method that she and her colleagues designed demonstrates how artificial controllers can be used to discover non-intuitive, intelligent pulse sequences that can cool a mechanical object from high to ultracold temperatures faster than other standard methods. These control pulses are self-discovered by the machine learning agent. The work showcases the utility of artificial machine intelligence in the development of quantum technologies.
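As a rough illustration of what "self-discovered" control pulses mean, the toy sketch below searches for a pulse sequence that minimizes the residual energy of a damped, noisy oscillator. It uses simple hill-climbing as a stand-in rather than deep reinforcement learning, and the oscillator model is generic, so it only conveys the flavour of the OIST approach rather than reproducing it.

```python
import numpy as np

# Crude stand-in for the learning loop: search for a pulse sequence that minimizes
# the final energy of a toy damped, noisy oscillator. The actual work uses deep
# reinforcement learning and a proper quantum model; this only illustrates the idea
# of machine-discovered cooling pulses.
rng = np.random.default_rng(0)
N_STEPS, DT, DAMPING, NOISE = 40, 0.1, 0.05, 0.02

def rollout(pulses):
    x, v = 1.0, 0.0                       # start in an excited state
    for u in pulses:
        a = -x - DAMPING * v + u          # unit oscillator + damping + control pulse
        v += a * DT + NOISE * rng.normal()
        x += v * DT
    return 0.5 * (x**2 + v**2)            # residual "energy" to be minimized

best = rng.uniform(-1, 1, N_STEPS)
best_energy = rollout(best)
for _ in range(2000):                      # hill-climb over candidate pulse sequences
    candidate = best + 0.1 * rng.normal(size=N_STEPS)
    energy = rollout(candidate)
    if energy < best_energy:
        best, best_energy = candidate, energy

print(f"residual energy after learned pulses: {best_energy:.4f}")
```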
Quantum computing has the potential to revolutionise the world by enabling high computing speeds and transforming cryptographic techniques. That is why many research institutes and big-tech companies such as Google and IBM are investing substantial resources in developing such technologies. But to enable this, researchers must achieve complete control over the operation of such quantum systems at very high speed, so that the effects of noise and damping can be eliminated.
“In order to stabilize a quantum system, control pulses must be fast — and our artificial intelligence controllers have shown promise in achieving such a feat,” Dr Sarma said. “Thus, our proposed method of quantum control using an AI controller could provide a breakthrough in the field of high-speed quantum computing, and it might be a first step toward quantum machines that are self-driving, similar to self-driving cars. We are hopeful that such methods will attract many quantum researchers for future technological developments.”
Does human–AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry
by Jimpei Hitsuwari, Yoshiyuki Ueda, Woojin Yun, Michio Nomura in Computers in Human Behavior
Can artificial intelligence write better poetry than humans?
The gap between human creativity and artificial intelligence seems to be narrowing. Previous studies have compared AI-generated and human-written poems and asked whether people can distinguish between them. Now, a study led by Yoshiyuki Ueda at the Kyoto University Institute for the Future of Human and Society has shown AI’s potential in creating literary art such as haiku — the shortest poetic form in the world — rivaling that of humans without human help.
Ueda’s team compared AI-generated haiku without human intervention, also known as human out of the loop, or HOTL, with a contrasting method known as human in the loop, or HITL. The project involved 385 participants, each of whom evaluated 40 haiku poems — 20 each of HITL and HOTL — plus 40 composed entirely by professional haiku writers.
“It was interesting that the evaluators found it challenging to distinguish between the haiku penned by humans and those generated by AI,” remarks Ueda.
The results showed that HITL haiku received the highest ratings for their poetic qualities, whereas HOTL and human-only verses received similar scores.
“In addition, a phenomenon called algorithm aversion was observed among our evaluators. They were supposed to be unbiased but instead became influenced by a kind of reverse psychology,” explains the author. “In other words, they tended to unconsciously give lower scores to those they felt were AI-generated.”
Ueda points out that his research has put a spotlight on algorithm aversion as a new approach to AI art.
“Our results suggest that the ability of AI in the field of haiku creation has taken a leap forward, entering the realm of collaborating with humans to produce more creative works. Realizing the existence of algorithmic aversion will lead people to re-evaluate their appreciation of AI art.”
Collaborative order picking with multiple pickers and robots: Integrated approach for order batching, sequencing and picker-robot routing
by Sharan Srinivas, Shitao Yu in International Journal of Production Economics
Imagine a team of humans and robots working together to process online orders — real-life workers strategically positioned among their automated coworkers who are moving intelligently back and forth in a warehouse space, picking items for shipping to the customer. This could become a reality sooner rather than later, thanks to researchers at the University of Missouri, who are working to speed up the online delivery process by developing a software model designed to make “transport” robots smarter.
“The robotic technology already exists,” said Sharan Srinivas, an assistant professor with a joint appointment in the Department of Industrial and Manufacturing Systems Engineering and the Department of Marketing. “Our goal is to best utilize this technology through efficient planning. To do this, we’re asking questions like ‘given a list of items to pick, how do you optimize the route plan for the human pickers and robots?’ or ‘how many items should a robot pick in a given tour?’ or ‘in what order should the items be collected for a given robot tour?’ Likewise, we have a similar set of questions for the human worker. The most challenging part is optimizing the collaboration plan between the human pickers and robots.”
Currently, a lot of human effort and labor costs are involved with fulfilling online orders. To help optimize this process, robotic companies have already developed collaborative robots — also known as cobots or autonomous mobile robots (AMRs) — to work in a warehouse or distribution center. The AMRs are equipped with sensors and cameras to help them navigate around a controlled space like a warehouse. The proposed model will help create faster fulfillment of customer orders by optimizing the key decisions or questions pertaining to collaborative order picking, Srinivas said.
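The coupled decisions Srinivas describes (batching items into robot tours, sequencing them, and routing pickers and robots) can be pictured with a deliberately tiny heuristic. The sketch below greedily fills robot tours and orders each tour with a nearest-neighbour rule; the coordinates, the capacity, and the heuristic itself are illustrative assumptions, not the integrated model from the paper.

```python
import math

# Toy illustration of two of the coupled decisions: which items share one robot tour
# (batching) and in what order to visit them (routing). A greedy nearest-neighbour
# sketch only, not the authors' optimization model.
items = {"A": (2, 8), "B": (5, 1), "C": (9, 4), "D": (1, 3), "E": (7, 7)}  # toy aisle coordinates
DEPOT = (0, 0)
TOUR_CAPACITY = 3   # hypothetical items per robot tour

def route(batch):
    """Nearest-neighbour visiting order for one tour, starting and ending at the depot."""
    remaining, pos, order, length = set(batch), DEPOT, [], 0.0
    while remaining:
        nxt = min(remaining, key=lambda i: math.dist(pos, items[i]))
        length += math.dist(pos, items[nxt])
        pos = items[nxt]
        order.append(nxt)
        remaining.remove(nxt)
    return order, length + math.dist(pos, DEPOT)

# Greedy batching: fill each tour up to capacity in the order items arrive.
names = list(items)
tours = [names[i:i + TOUR_CAPACITY] for i in range(0, len(names), TOUR_CAPACITY)]
for batch in tours:
    order, length = route(batch)
    print(batch, "->", order, f"travel ≈ {length:.1f}")
```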
“The robot is intelligent, so if it’s instructed to go to a particular location, it can navigate the warehouse and not hit any workers or other obstacles along the way,” Srinivas said.
Srinivas, who specializes in data analytics and operations research, said AMRs are not designed to replace human workers, but instead can work collaboratively alongside them to help increase the efficiency of the order fulfillment process. For instance, AMRs can help fulfill multiple orders at a time from separate areas of the warehouse quicker than a person, but human workers are still needed to help pick items from shelves and place them onto the robots to be transported to a designated drop-off point inside the warehouse.
“The one drawback is these robots do not have good grasping abilities,” Srinivas said. “But humans are good at grasping items, so we are trying to leverage the strength of both resources — the human workers and the collaborative robots. So, what happens in this case is the humans are at different points in the warehouse, and instead of one worker going through the entire aisle to pick up multiple items along the way, the robot will come to the human worker, and the human worker will take an item and put it on the robot. Therefore, the human worker will not have to strain himself or herself in order to move large carts of heavy items throughout the warehouse.”
Srinivas said the software could in the future also be applied in other settings, such as grocery stores, where robots could fill orders while navigating among members of the general public. He could see this happening within the next three to five years.
Silicon photonic architecture for training deep neural networks with direct feedback alignment
by Matthew J. Filipovich, Zhimu Guo, Mohammed Al-Qadasi, Bicky A. Marquez, Hugh D. Morison, Volker J. Sorger, Paul R. Prucnal, Sudip Shekhar, Bhavin J. Shastri in Optica
A multi-institution research team has developed an optical chip that can train machine learning hardware.
Machine learning applications skyrocketed to $165B annually, according to a recent report from McKinsey. But, before a machine can perform intelligence tasks such as recognizing the details of an image, it must be trained. Training of modern-day artificial intelligence (AI) systems like Tesla’s autopilot costs several million dollars in electric power consumption and requires supercomputer-like infrastructure. This surging AI “appetite” leaves an ever-widening gap between computer hardware and demand for AI. Photonic integrated circuits, or simply optical chips, have emerged as a possible solution to deliver higher computing performance, as measured by the number of operations performed per second per watt used, or TOPS/W. However, though they’ve demonstrated improved core operations in machine intelligence used for data classification, photonic chips have yet to improve the actual front-end learning and machine training process.
Machine learning is a two-step procedure: first, data is used to train the system, and then other data is used to test its performance. Thus far, photonic chips had only demonstrated an ability to classify and infer information from data, leaving the training step to conventional hardware. In a new paper, a team of researchers from the George Washington University, Queens University, University of British Columbia and Princeton University set out to change that. After one training step, the team observed an error and reconfigured the hardware for a second training cycle, followed by additional training cycles until a sufficient AI performance was reached (e.g., the system is able to correctly label objects appearing in a movie). The result makes it possible to speed up the training step itself on a photonic chip.
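The architecture in the paper trains networks with direct feedback alignment (DFA), in which the output error is delivered to each layer through fixed random feedback matrices instead of being backpropagated through the transposed weights, an operation that maps naturally onto the matrix multiplications a photonic chip accelerates. Below is a minimal numpy sketch of DFA on a toy XOR task; it is generic illustrative code, not the photonic implementation, and the layer sizes and learning rate are arbitrary.

```python
import numpy as np

# Minimal direct feedback alignment (DFA) sketch on XOR. The hidden layer receives the
# output error through a fixed random matrix B rather than through W2.T as in backprop.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hidden, n_out, lr = 2, 16, 1, 0.5
W1 = rng.normal(0, 0.5, (n_hidden, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hidden))
B = rng.normal(0, 0.5, (n_hidden, n_out))     # fixed random feedback matrix

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    a1 = X @ W1.T                  # hidden pre-activations
    h = np.tanh(a1)
    y = sigmoid(h @ W2.T)          # network output
    e = y - T                      # output error

    delta_h = (e @ B.T) * (1 - h**2)   # DFA: project the error through B, not W2.T
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)

print(np.round(y.ravel(), 2))      # typically approaches [0, 1, 1, 0]
```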
This added AI capability is part of a larger effort around photonic tensor cores and other electronic-photonic application-specific integrated circuits (ASIC) that leverage photonic chip manufacturing for machine learning and AI applications.
“This novel hardware will speed up the training of machine learning systems and harness the best of what both photonics and electronic chips have to offer. It is a major leap forward for AI hardware acceleration. These are the kinds of advancements we need in the semiconductor industry, as underscored by the recently passed CHIPS Act,” said Volker Sorger, Professor of Electrical and Computer Engineering at the George Washington University and founder of the start-up company Optelligence.
“The training of AI systems carries a significant cost in energy and carbon footprint. For example, training a single AI transformer consumes electricity responsible for roughly five times as much CO2 as a gasoline car emits over its lifetime. Our training on photonic chips will help to reduce this overhead.”
OmniWheg: An Omnidirectional Wheel-Leg Transformable Robot
by Ruixiang Cao et al in arXiv
Researchers at Worcester Polytechnic Institute recently created OmniWheg, a robotic system that can adapt its configuration while navigating its surrounding environment, seamlessly changing from a wheeled to a legged robot. This robot, introduced in an IEEE IROS 2022 paper, is based on an updated version of the so-called “whegs,” a series of mechanisms designed to transform a robot’s wheels or wings into legs.
“Quadruped and biped robots have been growing in popularity, and the reason for that might be the search for ‘anthropomorphization’ that the general audience commonly engages in,” Prof. Andre Rosendo, one of the researchers who developed the robot, told TechXplore. “While ‘being capable of going everywhere we go’ sounds like an exciting appeal, the energetic cost of legs is very high. We humans have legs because that is what evolution gave us, but we wouldn’t dare to create a ‘legged car,’ as we know that this ride wouldn’t be as comfortable or energy efficient as a wheeled car ride.”
The key idea behind the recent work by Rosendo and his colleagues is that while legs make robots more relatable, giving them a human- or animal-like quality, they are not always the optimal solution to ensure that robots complete tasks quickly and efficiently. Instead of developing a robot with a single locomotion mechanism, the team thus set out to create a system that can switch between different mechanisms.
“Looking around our homes and workplaces we can see that our environments are 95% flat, with an eventual 5% of uneven terrain that we need to face when ‘transitioning’ between spaces,” Rosendo said. “With this in mind, why not develop a system that performs at a ‘wheel-like’ efficiency in these 95% of cases and specifically transitions to a lower efficiency in the remaining 5%?”
Rosendo and his colleagues set out to create a wheel that could change its configuration to climb stairs or circumvent other small obstacles. To accomplish this, they explored the concept of “whegs” (i.e., wheel-legs or wing-legs), which has been around for over a decade and has since received considerable attention in the field of robotics.
Several wheel-leg systems were developed and tested in the past few years. However, most of these systems did not perform particularly well, mainly due to difficulties in coordinating the right and left side of the wheel-leg system, which need to be perfectly aligned when a robot is climbing stairs.
“To solve the coordination issues commonly associated with wheel-leg mechanisms, we used an omnidirectional wheel,” explained Ruixiang Cao, the lead student on the project. “This is the last piece of the puzzle, as it enables the robot to align on-the-fly without rotating its body. Our robot can move forward, backwards, and sideways at a very low energy cost, can remain in a stable position with no energetic cost, and can swiftly climb stairs when needed.”
To operate correctly, the wheg system created by Rosendo and his colleagues requires the addition of one servo motor per wheel and a simple algorithm. Other than that, its design is basic and straightforward, so it could be easily replicated by other teams worldwide.
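A hypothetical sketch of what such a per-wheel alignment step could look like is shown below: before climbing, each servo rotates its wheg to a common phase so the left and right legs strike the step together. The three-spoke geometry, angle values and function names are all assumptions for illustration, not details from the paper.

```python
# Hypothetical per-wheel alignment before stair climbing (all values illustrative).
SPOKES = 3
SPOKE_PITCH = 360.0 / SPOKES   # one leg every 120 degrees under the assumed geometry

def nearest_phase_offset(current_angle_deg, target_phase_deg=0.0):
    """Smallest rotation that brings a wheg onto the shared target phase."""
    error = (target_phase_deg - current_angle_deg) % SPOKE_PITCH
    return error if error <= SPOKE_PITCH / 2 else error - SPOKE_PITCH

wheel_angles = {"front_left": 35.0, "front_right": 98.0,
                "rear_left": 210.0, "rear_right": 303.0}   # toy encoder readings

for wheel, angle in wheel_angles.items():
    offset = nearest_phase_offset(angle)
    print(f"{wheel}: rotate servo by {offset:+.1f} deg before climbing")
```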
“The advantages of this system are so abundant, and the drawbacks are so few that we can’t help but think that they pose a threat to the ‘legged robot hype’ seen in the robotics field,” Cao said. “Any robot application that has an eventual need to climb stairs could adopt this design, especially if paired with a robot manipulator to manipulate objects when running over the flat ground while shifting its center of gravity when climbing stairs.”
The researchers evaluated their OmniWheg system in a series of experiments focusing on a multitude of real-world indoor scenarios, such as circumventing obstacles, climbing steps of different heights and turning/moving omnidirectionally. Their results were highly promising, as their wheel-leg robot could successfully overcome all the common obstacles it was tested on, flexibly and efficiently adapting its configuration to effectively tackle individual locomotion challenges.
In the future, the system created by Rosendo and his colleagues could be integrated in both existing and new robots, to enhance their efficiency in navigating indoor environments. In addition, the team’s work could inspire the development of similar wheg systems based on omnidirectional wheels.
“Our first design iteration adopted a fairly ‘expensive’ brushless motor, and we now think that a lighter motor, paired with a gear reduction, would have been more effective,” Rosendo added. “We also plan on adding a manipulator to the base of the robot so that we can test the dynamics of ascending and descending stairs with a higher center of gravity.”
Fast Untethered Soft Robotic Crawler with Elastic Instability
by Zechen Xiong et al in arXiv
A trio of researchers at Columbia University has developed a very simple, small, soft-bodied robot based on hair-clip technology. Zechen Xiong, Yufeng Su and Hod Lipson have written a paper describing the idea behind their robot design and the two robots they built.
As scientists continually look for new ways to build small, soft-bodied robots, they often turn to existing animals or other devices that maximize simplicity and energy efficiency. In this new effort, the researchers noticed, as have many others, that a certain kind of hair clip can exist in one of two states — bent one way or the other; moving between the two states requires little energy and it happens quickly. Inspired by the simplicity of the design, they created the basis of a robot.
The basic design consisted of cutting a flat, bendable piece of plastic into the form of an exaggerated C. They then pulled the two open ends of the plastic piece close to one another and fastened them together. This was all it took to emulate a hair clip. They next attached a small servo motor to apply the pressure that fingers normally apply to a hair clip. Using a small amount of electricity, the motor could push the plastic into one or the other of its shapes, and it would snap between them just as quickly as a hair clip does.
Next, the researchers added foot-like appendages to complete their robot. Using the motor to push the frame between states drove the feet back and forth, allowing the robot to walk across a hard surface. Testing showed it was able to walk at a top pace of 313 mm/sec, which translates to roughly 1.6 body lengths per second. They also fashioned their frame into a fish-like robot and found it could swim at approximately 435 mm/sec, or roughly two body lengths per second. The researchers claim both speeds are faster than those of other, similar robots.
Precise atom manipulation through deep reinforcement learning
by I-Ju Chen et al in Nature Communications
Researchers used deep reinforcement learning to steer atoms into a lattice shape, with a view to building new materials or nanodevices.
In a very cold vacuum chamber, single atoms of silver form a star-like lattice. The precise formation is not accidental, and it wasn’t constructed directly by human hands either. Researchers used a kind of artificial intelligence called deep reinforcement learning to steer the atoms, each a fraction of a nanometer in size, into the lattice shape. The process is similar to moving marbles around a Chinese checkers board, but with very tiny tweezers grabbing and dragging each atom into place. The main application for deep reinforcement learning is in robotics, says postdoctoral researcher I-Ju Chen.
“We’re also building robotic arms with deep learning, but for moving atoms,” she explains. “Reinforcement learning is successful in things like playing chess or video games, but we’ve applied it to solve technical problems at the nanoscale.”
So why are scientists interested in precisely moving atoms? Making very small devices based on single atoms is important for nanodevices like transistors or memory. Testing how and whether these devices work at their absolute limits is one application for this kind of atomic manipulation, says Chen. Building new materials atom-by-atom, rather than through traditional chemical techniques, may also reveal interesting properties related to superconductivity or quantum states.
The silver star lattice made by Chen and colleagues at the Finnish Center for Artificial Intelligence FCAI and Aalto University is a demonstration of what deep reinforcement learning can achieve. “The precise movement of atoms is hard even for human experts,” says Chen.
“We adapted existing deep reinforcement learning for this purpose. It took the algorithm on the order of one day to learn and then about one hour to build the lattice.” The reinforcement part of this type of deep learning refers to how the AI is guided — through rewards for correct actions or outputs. “Give it a goal and it will do it. It can solve problems that humans don’t know how to solve.”
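The reward-driven loop Chen describes can be illustrated with a deliberately small example: the tabular Q-learning sketch below nudges an "atom" along a one-dimensional grid toward a target site, earning a reward only when it lands there. The real system uses deep reinforcement learning with a scanning-probe tip, so this is only a conceptual stand-in.

```python
import numpy as np

# Toy tabular Q-learning stand-in: an "atom" on a 1-D grid is nudged left or right
# until it sits on a target site. Purely illustrative, not the published method.
rng = np.random.default_rng(0)
N_SITES, TARGET = 11, 8
Q = np.zeros((N_SITES, 2))                       # states x actions (0 = left, 1 = right)
alpha, gamma, eps = 0.2, 0.95, 0.1

for episode in range(500):
    s = int(rng.integers(N_SITES))
    for _ in range(50):
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(0, min(N_SITES - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == TARGET else -0.01   # reward only for reaching the target site
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
        if s == TARGET:
            break

policy = ["L" if np.argmax(q) == 0 else "R" for q in Q]
print("learned move per site:", policy)          # every site should point toward site 8
```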
Applying this approach to the world of nanoscience materials is new. Nanotechniques can become more powerful with the injection of machine learning, says Chen, because it can accelerate the parameter selection and trial-and-error usually done by a person.
“We showed that this task can be completed perfectly through reinforcement learning,” concludes Chen.
A lightweight robotic leg prosthesis replicating the biomechanics of the knee, ankle, and toe joint
by Minh Tran et al in Science Robotics
The Utah Bionic Leg, a motorized prosthetic for lower-limb amputees developed by University of Utah mechanical engineering associate professor Tommaso Lenzi and his students in the HGN Lab, is on the cover of the newest issue of Science Robotics.
Lenzi’s Utah Bionic Leg uses motors, processors, and advanced artificial intelligence that all work together to give amputees more power to walk, stand up, sit down, and ascend and descend stairs and ramps. The extra power from the prosthesis makes these activities easier and less stressful for amputees, who normally need to overuse their upper body and intact leg to compensate for the lack of assistance from their prescribed prosthetics. The Utah Bionic Leg will help people with amputations, particularly elderly individuals, to walk much longer and attain new levels of mobility.
“If you walk faster, it will walk faster for you and give you more energy. Or it adapts automatically to the height of the steps in a staircase. Or it can help you cross over obstacles,” Lenzi says.
The Utah Bionic Leg uses custom-designed force and torque sensors as well as accelerometers and gyroscopes to help determine the leg’s position in space. Those sensors are connected to a computer processor that translates the sensor inputs into movements of the prosthetic joints.
Based on that real-time data, the leg provides power to the motors in the joints to assist in walking, standing up, walking up and down stairs, or maneuvering around obstacles. The leg’s “smart transmission system” connects the electrical motors to the robotic joints of the prosthetic. This optimized system automatically adapts the joint behaviors for each activity, like shifting gears on a bike.
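A hypothetical sketch of such a sense-then-assist loop is given below: read the sensors, classify the activity, and scale the joint torques accordingly. The sensor names, classification rules, and gains are all illustrative assumptions, not the Utah Bionic Leg's actual controller.

```python
from dataclasses import dataclass

# Hypothetical sense-then-assist loop. Sensor names, activity rules and torque gains
# are illustrative assumptions only.
@dataclass
class SensorFrame:
    knee_angle: float      # degrees, from a joint encoder
    shank_pitch: float     # degrees, from the IMU
    heel_force: float      # newtons, from a load cell

def classify_activity(f: SensorFrame) -> str:
    if f.heel_force < 20:
        return "swing"
    if f.shank_pitch > 15:
        return "stair_ascent"
    return "level_walking"

GAINS = {"swing": 0.1, "level_walking": 0.6, "stair_ascent": 1.0}  # assumed assistance levels

def joint_torques(f: SensorFrame) -> dict:
    k = GAINS[classify_activity(f)]
    return {"knee": k * 0.5 * f.heel_force,    # toy torque laws, proportional to load
            "ankle": k * 0.3 * f.heel_force,
            "toe": k * 0.1 * f.heel_force}

print(joint_torques(SensorFrame(knee_angle=20, shank_pitch=22, heel_force=400)))
```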
Finally, in addition to the robotic knee joint and robotic ankle joint, the Utah Bionic Leg has a robotic toe joint to provide more stability and comfort while walking. The sensors, processors, motors, transmission system, and robotic joints enable users to control the prosthetic intuitively and continuously, as if it were an intact biological leg.
Lenzi and the university recently forged a new partnership with the worldwide leader in the prosthetics industry, Ottobock, to license the technology behind the Utah Bionic Leg and bring it to individuals with lower-limb amputations.
Videos
- Xenoforms is an installation work that consists of 3D prints of parametric models, video animations and visualizations, posters with technical diagrams, and a 6-axis 3D printer. In this work, a series of three-dimensional forms have been automatically generated by an artificial system that attempts to identify design decisions for an efficient, sustainable, and durable structure. The work provides a speculative scenario that demonstrates how an autonomous A.I. system follows its own strategies for colonizing architectural space and, by extension, becoming a human symbiont.
- Tiny, autonomous drones that harness the power of artificial intelligence to work together. In this case, the minute robots could one day provide backup to pollinators like honey bees, potentially securing the world’s food crops as these critical insect species face challenges from fungal disease, pesticides and climate change. The project is led by doctoral student Chahat Deep Singh M.E. ’18 of the Perception and Robotics Group, led by Professor Yiannis Aloimonos and Research Scientist Cornelia Fermüller.
- The ANYexo 2.0 is the latest prototype based on around two decades of research at the Sensory-Motor Systems Lab and Robotic Systems Lab of ETH Zürich. This video shows uncommented impressions of the main features of ANYexo 2.0 and its performance in range of motion, speed, strength, haptic transparency, and human-robot attachment system.
- Take a tour through our new ABB Robotics mega factory in Shanghai, China and see how we’re bringing the physical and digital worlds together for faster, more resilient and more efficient manufacturing and research.
Upcoming events
CoRL 2022: 14–18 December 2022, Auckland, New Zealand
ICRA 2023: 29 May–2 June 2023, London, UK
RoboCup 2023: 4–10 July 2023, Bordeaux, France
RSS 2023: 10–14 July 2023, Daegu, Korea
IEEE RO-MAN 2023: 28–31 August 2023, Busan, Korea
MISC
Subscribe to Paradigm!
Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.
Main sources
Research articles