RT/ Printing sensors directly on expanding organs

Published in Paradigm · 21 min read · Jun 25, 2020

Robotics biweekly vol.7, 11th June — 25th June

TL;DR

  • Mechanical engineers and computer scientists have developed a 3D printing technique that uses motion capture technology, similar to that used in Hollywood movies, to print electronic sensors directly on organs that are expanding and contracting.
  • A new film made of gold nanoparticles changes color in response to any type of movement. Its unprecedented qualities could allow robots to mimic chameleons and octopi — among other futuristic applications.
  • Researchers have developed a technique, using artificial intelligence, to analyze opinions and draw conclusions using the brain activity of groups of people. This technique, which the researchers call “brainsourcing”, can be used to classify images or recommend content, something that has not been demonstrated before.
  • By chasing cockroaches through an obstacle course and studying their movements, the engineers that brought you the cockroach robot and the snake robot discovered that animals’ movement transitions corresponded to overcoming potential energy barriers and that they can jitter around to traverse obstacles in complex terrain.
  • Scientists at the University of Sydney have adapted techniques from autonomous vehicles and robotics to efficiently assess the performance of quantum devices, an important process to help stabilise the emerging technologies.
  • Teaching physics to neural networks enables those networks to better adapt to chaos within their environment. The work has implications for improved artificial intelligence (AI) applications ranging from medical diagnostics to automated drone piloting.
  • It’s important that self-driving cars quickly detect other cars or pedestrians sharing the road. Researchers at Carnegie Mellon University have shown that they can significantly improve detection accuracy by helping the vehicle also recognize what it doesn’t see.
  • Biocompatible cell robots powered by urea improve drug delivery through active movement.
  • Morphology-dependent immunogenicity obliges a compromise on the locomotion-focused design of medical microrobots.
  • For the next several months, visitors to the Atlanta Botanical Garden will be able to observe the testing of a new high-tech tool in the battle to save some of the world’s most endangered species. SlothBot, a slow-moving and energy-efficient robot that can linger in the trees to monitor animals, plants, and the environment below, will be tested near the Garden’s popular Canopy Walk.
  • Spot, the Boston Dynamics robot dog that has gone viral in YouTube videos, is finally for sale to the general public.
  • Can collaborative robots ramp up the production of medical ventilators?
  • Two more interviews this week of celebrity roboticists from MassRobotics: Helen Greiner and Marc Raibert.
  • Check out robotics upcoming events (mostly virtual) below. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025. It is predicted that this market will hit the 100 billion U.S. dollar mark in 2020.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

News

You Can Finally Buy the Internet’s Favorite Robot Dog (for $74,500)

  • Spot, the Boston Dynamics robot dog that has gone viral in YouTube videos, is finally for sale to the general public.
  • It’ll cost you $74,500 to take one of these good boys home.
  • Last September, Boston Dynamics made Spot available for lease, mostly for corporate use.

Boston Dynamics will finally sell you one of its famous robot dogs, called Spot. But this good boy comes with a hefty price tag: $74,500.

You’ll have to put down a $1,000 deposit and wait 6 to 8 weeks for your robot to come in the mail, but it’s probably worth it if you’re looking to set up autonomous data collection for a job site (or if you’re pursuing YouTube stardom).

“The combination of Spot’s sophisticated software and high performance mechanical design enables the robot to augment difficult or dangerous human work,” Marc Raibert, chairman and founder of Boston Dynamics, said in a release. “Now you can use Spot to increase human safety in environments and tasks where traditional automation hasn’t been successful.”

Boston Dynamics says Spot is ready to go as soon as you open the box. The robo-pup also comes with two batteries (replacement cost: $4,620), a Spot charger, a tablet controller and accompanying charger, a rugged case for storage and transport, a power case for the battery and charger equipment, a Python software package for Spot’s API, and a standard warranty.

Although you can buy a Spot robot for personal use, you should think long and hard about it, as Boston Dynamics says the robot isn’t certified for in-home use — especially near children. “Do not operate Spot in any such environment; our warranty of Spot becomes void upon any such operation,” the company notes on its FAQ page. But hey, you do you.

Back in September, Boston Dynamics made Spot available for commercial lease under its Early Adopter Program, but didn’t publicly release figures about pricing. The company says it’s sent out over 150 Spot robots under that program, and they’ve been used in “power generation facilities, decommissioned nuclear sites, factory floors, construction sites, and research laboratories.” Spot was even used in creative projects, like dancing on stage and performing in theme parks.

Publicly, we’ve seen Spot help bomb squads, work on an oil rig, serve as a social distancing patrol dog in Singapore, and even work as a telehealth assistant, helping medical workers triage possible COVID-19 patients in a safe manner.

‘SlothBot in the Garden’ demonstrates hyper-efficient conservation robot

For the next several months, visitors to the Atlanta Botanical Garden will be able to observe the testing of a new high-tech tool in the battle to save some of the world’s most endangered species. SlothBot, a slow-moving and energy-efficient robot that can linger in the trees to monitor animals, plants, and the environment below, will be tested near the Garden’s popular Canopy Walk.

Built by robotics engineers at the Georgia Institute of Technology to take advantage of the low-energy lifestyle of real sloths, SlothBot demonstrates how being slow can be ideal for certain applications. Powered by solar panels and using innovative power management technology, SlothBot moves along a cable strung between two large trees as it monitors temperature, weather, carbon dioxide levels, and other information in the Garden’s 30-acre midtown Atlanta forest.

“SlothBot embraces slowness as a design principle,” said Magnus Egerstedt, professor and Steve W. Chaddick School Chair in the Georgia Tech School of Electrical and Computer Engineering. “That’s not how robots are typically designed today, but being slow and hyper-energy efficient will allow SlothBot to linger in the environment to observe things we can only see by being present continuously for months, or even years.”

About three feet long, SlothBot’s whimsical 3D-printed shell helps protect its motors, gearing, batteries, and sensing equipment from the weather. The robot is programmed to move only when necessary, and will locate sunlight when its batteries need recharging. At the Atlanta Botanical Garden, SlothBot will operate on a single 100-foot cable, but in larger environmental applications, it will be able to switch from cable to cable to cover more territory.

“The most exciting goal we’ll demonstrate with SlothBot is the union of robotics and technology with conservation,” said Emily Coffey, vice president for conservation and research at the Garden. “We do conservation research on imperiled plants and ecosystems around the world, and SlothBot will help us find new and exciting ways to advance our research and conservation goals.”

Supported by the National Science Foundation and the Office of Naval Research, SlothBot could help scientists better understand the abiotic factors affecting critical ecosystems, providing a new tool for developing information needed to protect rare species and endangered ecosystems.

“SlothBot could do some of our research remotely and help us understand what’s happening with pollinators, interactions between plants and animals, and other phenomena that are difficult to observe otherwise,” Coffey added. “With the rapid loss of biodiversity and with more than a quarter of the world’s plants potentially heading toward extinction, SlothBot offers us another way to work toward conserving those species.”

Inspiration for the robot came from a visit Egerstedt made to a vineyard in Costa Rica where he saw two-toed sloths creeping along overhead wires in their search for food in the tree canopy. “It turns out that they were strategically slow, which is what we need if we want to deploy robots for long periods of time,” he said.

A few other robotic systems have already demonstrated the value of slowness. Among the best known are the Mars Exploration Rovers that gathered information on the red planet for more than a dozen years. “Speed wasn’t really all that important to the Mars Rovers,” Egerstedt noted. “But they learned a lot during their leisurely exploration of the planet.”

Beyond conservation, SlothBot could have applications for precision agriculture, where the robot’s camera and other sensors traveling in overhead wires could provide early detection of crop diseases, measure humidity, and watch for insect infestation. After testing in the Atlanta Botanical Garden, the researchers hope to move SlothBot to South America to observe orchid pollination or the lives of endangered frogs.

The research team, which includes Ph.D. students Gennaro Notomista and Yousef Emam, undergraduate student Amy Yao, and postdoctoral researcher Sean Wilson, considered multiple locomotion techniques for the SlothBot. Wheeled robots are common, but in the natural world they can easily be defeated by obstacles like rocks or mud. Flying robots require too much energy to linger for long. That’s why Egerstedt’s observation of the wire-crawling sloths was so important.

“It’s really fascinating to think about robots becoming part of the environment, a member of an ecosystem,” he said. “While we’re not building an anatomical replica of the living sloth, we believe our robot can be integrated to be part of the ecosystem it’s observing like a real sloth.”

The SlothBot launched in the Atlanta Botanical Garden is the second version of a system originally reported in May 2019 at the International Conference on Robotics and Automation. That robot was a much smaller laboratory prototype.

Self-driving cars that recognize free space can better detect objects

It’s important that self-driving cars quickly detect other cars or pedestrians sharing the road. Researchers at Carnegie Mellon University have shown that they can significantly improve detection accuracy by helping the vehicle also recognize what it doesn’t see.

It is obvious to people that objects in view can block their sight of whatever lies behind them. But Peiyun Hu, a Ph.D. student in CMU’s Robotics Institute, said that’s not how self-driving cars typically reason about objects around them.

Rather, they use 3D data from lidar to represent objects as a point cloud and then try to match those point clouds to a library of 3D representations of objects. The problem, Hu said, is that the 3D data from the vehicle’s lidar isn’t really 3D — the sensor can’t see the occluded parts of an object, and current algorithms don’t reason about such occlusions.

“Perception systems need to know their unknowns,” Hu observed.

Hu’s work enables a self-driving car’s perception system to consider visibility as it reasons about what its sensors are seeing. In fact, reasoning about visibility is already used when companies build digital maps.

“Map-building fundamentally reasons about what’s empty space and what’s occupied,” said Deva Ramanan, an associate professor of robotics and director of the CMU Argo AI Center for Autonomous Vehicle Research. “But that doesn’t always occur for live, on-the-fly processing of obstacles moving at traffic speeds.”

In research to be presented at the Computer Vision and Pattern Recognition (CVPR) conference, which will be held virtually June 13–19, Hu and his colleagues borrow techniques from map-making to help the system reason about visibility when trying to recognize objects.

When tested against a standard benchmark, the CMU method outperformed the previous top-performing technique, improving detection by 10.7% for cars, 5.3% for pedestrians, 7.4% for trucks, 18.4% for buses and 16.7% for trailers.

One reason previous systems may not have taken visibility into account is a concern about computation time. But Hu said his team found that was not a problem: their method takes just 24 milliseconds to run. (For comparison, each sweep of the lidar is 100 milliseconds.)
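To make the idea of “knowing your unknowns” concrete, here is a toy sketch in Python (not the CMU pipeline; every name and parameter is illustrative) of how ray-casting lidar returns through an occupancy grid separates visibly empty space from occluded space:

```python
# Toy illustration (not the CMU method): classify grid cells as FREE,
# OCCUPIED, or UNKNOWN by ray-casting 2D "lidar" returns from the sensor
# origin. Cells a ray sweeps through are visibly empty; the cell holding
# the return is occupied; everything behind it stays unknown (occluded).
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def visibility_grid(returns_xy, grid_size=100, cell=0.5, origin=(50, 50)):
    """returns_xy: (N, 2) array of lidar hit points in metres, sensor at (0, 0)."""
    grid = np.full((grid_size, grid_size), UNKNOWN, dtype=np.uint8)
    ox, oy = origin
    for x, y in returns_xy:
        dist = np.hypot(x, y)
        n_steps = max(int(dist / (0.5 * cell)), 1)
        # Sample points along the ray from the sensor towards the hit.
        for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            i = int(ox + t * x / cell)
            j = int(oy + t * y / cell)
            if 0 <= i < grid_size and 0 <= j < grid_size and grid[i, j] != OCCUPIED:
                grid[i, j] = FREE          # swept by the ray: visibly empty
        i, j = int(ox + x / cell), int(oy + y / cell)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[i, j] = OCCUPIED          # the return itself
    return grid                            # UNKNOWN cells are occluded space

# Example: two returns; cells behind them remain UNKNOWN — the "known unknowns".
g = visibility_grid(np.array([[10.0, 2.0], [-5.0, 8.0]]))
print((g == FREE).sum(), (g == OCCUPIED).sum(), (g == UNKNOWN).sum())
```

Cells the rays pass through are known to be empty, while cells behind a return stay unknown; that distinction is the extra signal a detector can exploit instead of treating occluded space as empty.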

Research articles

The new June issue of Science Robotics is out!

Presenting a mini “lab” suspended on cables that moves in tandem with its flying specimen; platelet microrobots that can self-propel to target pathogens; and more to come:

Repurposing factories with robotics in the face of COVID-19: Can collaborative robots ramp up the production of medical ventilators?

Abstract Full Text

Immune evasion by designer microrobots: Recent work is unveiling the interactions between magnetic microswimmers and cells of the immune system.

Abstract Full Text

Drones against vector-borne diseases: Uncrewed aerial vehicles can reduce the cost of preventative measures against vector-borne diseases.

Abstract Full Text

Transforming platelets into microrobots: Biocompatible cell robots powered by urea improve drug delivery through active movement.

Abstract Full Text

Elucidating the interaction dynamics between microswimmer body and immune system for medical microrobots: Morphology-dependent immunogenicity obliges a compromise on the locomotion-focused design of medical microrobots.

Abstract Full Text

Field performance of sterile male mosquitoes released from an uncrewed aerial vehicle: An automatic adult mosquito release device operated from a drone released sterile males without reducing their quality.

Abstract Full Text

3D printed deformable sensors

by Zhijie Zhu, Hyun Soo Park, Michael C. McAlpine in Science Advances

Mechanical engineers and computer scientists have developed a 3D printing technique that uses motion capture technology, similar to that used in Hollywood movies, to print electronic sensors directly on organs that are expanding and contracting.

The research is published in Science Advances, a peer-reviewed scientific journal published by the American Association for the Advancement of Science (AAAS).

The new research is the next generation of a 3D printing technique developed two years ago by members of the team, which allowed electronics to be printed directly on the skin of a hand that moved left to right or rotated. The new technique uses even more sophisticated tracking to 3D print sensors on organs such as the lungs or heart, which change shape and distort as they expand and contract.

“We are pushing the boundaries of 3D printing in new ways we never even imagined years ago,” said Michael McAlpine, a University of Minnesota mechanical engineering professor and senior researcher on the study. “3D printing on a moving object is difficult enough, but it was quite a challenge to find a way to print on a surface that was deforming as it expanded and contracted.”

The researchers started in the lab with a balloon-like surface and a specialized 3D printer. They used motion capture tracking markers, much like those used in movies to create special effects, to help the 3D printer adapt its printing path to the expansion and contraction movements on the surface. The researchers then moved on to an animal lung in the lab that was artificially inflated. They were able to successfully print a soft hydrogel-based sensor directly on the surface. McAlpine said the technique could also possibly be used in the future to 3D print sensors on a pumping heart.
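As a rough sketch of the general idea (this is not the authors’ system; the inverse-distance weighting and all values are assumptions made for illustration), the snippet below warps a nominal print path by interpolating the displacements of tracked motion-capture markers:

```python
# Minimal sketch (assumptions, not the authors' pipeline): adapt a nominal
# print path to a deforming surface by interpolating the displacement of
# motion-capture markers with inverse-distance weighting.
import numpy as np

def adapt_toolpath(path, markers_rest, markers_now, power=2.0, eps=1e-9):
    """path: (P, 3) nominal waypoints planned on the undeformed surface.
    markers_rest / markers_now: (M, 3) marker positions before and after deformation."""
    displacement = markers_now - markers_rest             # (M, 3) per-marker motion
    adapted = np.empty_like(path)
    for k, p in enumerate(path):
        d = np.linalg.norm(markers_rest - p, axis=1)      # distance to each marker
        w = 1.0 / (d ** power + eps)                      # nearer markers dominate
        w /= w.sum()
        adapted[k] = p + w @ displacement                 # weighted average displacement
    return adapted

# Example: markers on an inflating surface move outward; the path follows them.
rest = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0]], float)
now = rest + np.array([0, 0, 2.0])                        # uniform 2 mm inflation
nominal = np.array([[5, 5, 0], [6, 5, 0], [7, 5, 0]], float)
print(adapt_toolpath(nominal, rest, now))                 # each waypoint lifted by ~2 mm
```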

“The broader idea behind this research is that this is a big step forward to the goal of combining 3D printing technology with surgical robots,” said McAlpine, who holds the Kuhrmeyer Family Chair Professorship in the University of Minnesota Department of Mechanical Engineering. “In the future, 3D printing will not be just about printing but instead be part of a larger autonomous robotic system. This could be important for diseases like COVID-19 where health care providers are at risk when treating patients.”

Other members of the research team included lead author Zhijie Zhu, a University of Minnesota mechanical engineering Ph.D. candidate, and Hyun Soo Park, an assistant professor in the University of Minnesota Department of Computer Science and Engineering.

Brainsourcing: Crowdsourcing Recognition Tasks via Collaborative Brain-Computer Interfacing

by Keith M. Davis, Lauri Kangassalo, Michiel Spapé, Tuukka Ruotsalo at ACM Conference on Human Factors in Computing Systems

Researchers have developed a technique, using artificial intelligence, to analyze opinions and draw conclusions using the brain activity of groups of people. This technique, which the researchers call “brainsourcing”, can be used to classify images or recommend content, something that has not been demonstrated before.

This paper introduces brainsourcing: utilizing brain responses of a group of human contributors each performing a recognition task to determine classes of stimuli. Researchers investigate to what extent it is possible to infer reliable class labels using data collected utilizing electroencephalography (EEG) from participants given a set of common stimuli. An experiment (N=30) measuring EEG responses to visual features of faces (gender, hair color, age, smile) revealed an improved F1 score of 0.94 for a crowd of twelve participants compared to an F1 score of 0.67 derived from individual participants and a random chance of 0.50. Our results demonstrate the methodological and pragmatic feasibility of brainsourcing in labeling tasks and open avenues for more general applications using brain-computer interfacing in a crowdsourced setting.

Brainsourcing utilizes brain responses of a group of human contributors each performing a recognition task to determine the consensus label of a stimulus.
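The jump from an individual F1 of 0.67 to 0.94 for a crowd of twelve is, at its core, the statistics of pooling noisy votes. The toy simulation below (plain Python, not the authors’ EEG classification pipeline; only the per-participant accuracy is borrowed from the paper, everything else is illustrative) shows how majority voting over independent, individually weak labelers raises the crowd-level score substantially:

```python
# Toy simulation (not the authors' EEG pipeline): pooling noisy per-participant
# binary labels by majority vote improves the crowd-level F1 score — the effect
# brainsourcing exploits.
import numpy as np

rng = np.random.default_rng(0)

def f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

n_stimuli, n_participants, per_person_accuracy = 1000, 12, 0.67
y_true = rng.integers(0, 2, n_stimuli)

# Each participant's single-trial "classifier" is right with probability 0.67.
correct = rng.random((n_participants, n_stimuli)) < per_person_accuracy
votes = np.where(correct, y_true, 1 - y_true)

print("individual F1:", round(f1(y_true, votes[0]), 2))
print("crowd of 12 F1:", round(f1(y_true, (votes.mean(axis=0) > 0.5).astype(int)), 2))
```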

“We wanted to investigate whether crowdsourcing can be applied to image recognition by utilising the natural reactions of people without them having to carry out any manual tasks with a keyboard or mouse,” says Academy Research Fellow Tuukka Ruotsalo from the University of Helsinki.

“Our approach is limited by the technology available,” says Keith Davis, a student and research assistant at the University of Helsinki.

“Current methods to measure brain activity are adequate for controlled setups in a laboratory, but the technology needs to improve for everyday use. Additionally, these methods only capture a very small percentage of total brain activity. As brain imaging technologies improve, it may become possible to capture preference information directly from the brain. Instead of using conventional ratings or like buttons, you could simply listen to a song or watch a show, and your brain activity alone would be enough to determine your response to it.”

Coupling magnetic and plasmonic anisotropy in hybrid nanorods for mechanochromic responses

by Zhiwei Li, Jianbo Jin, Fan Yang, Ningning Song & Yadong Yin in Nature Communications

A new film made of gold nanoparticles changes color in response to any type of movement. Its unprecedented qualities could allow robots to mimic chameleons and octopi — among other futuristic applications.

Mechanochromic response is of great importance in designing bionic robot systems and colorimetric devices. Unfortunately, compared to mimicking motions of natural creatures, fabricating mechanochromic systems with programmable colorimetric responses remains challenging. Herein, scientists report the development of unconventional mechanochromic films based on hybrid nanorods integrated with magnetic and plasmonic anisotropy. Magnetic-plasmonic hybrid nanorods have been synthesized through a unique space-confined seed-mediated process, which represents an open platform for preparing next-generation complex nanostructures. By coupling magnetic and plasmonic anisotropy, the plasmonic excitation of the hybrid nanorods could be collectively regulated using magnetic fields. It facilitates convenient incorporation of the hybrid nanorods into polymer films with a well-controlled orientation and enables sensitive colorimetric changes in response to linear and angular motions. The combination of unique synthesis and convenient magnetic alignment provides an advanced approach for designing programmable mechanochromic devices with the desired precision, flexibility, and scalability.

Synthesis and characterization of magnetic-plasmonic hybrid nanostructures.

(a) Scheme of the confined growth towards magnetic-plasmonic hybrid nanorods; in the last step of the scheme, the Fe3O4 nanorod and RF shell are removed to clarify the concave structure of the Au nanorod. (b–e) TEM images of nanorods after SiO2 coating (b), RF coating (c), and seeded growth with 15 µL (d) and 25 µL (e) of the precursor. (f) TEM image showing hybrid nanorods in two typical configurations (left: side by side; right: overlapped). (g) HAADF and EDS mapping images of the hybrid structures. (h) Cross-sectional line profile of element distribution. (i) Real-time extinction spectra of cAuNRs with a time interval of 15 s. (j) Dependence of the surface plasmon resonance peak positions and aspect ratios of cAuNRs on the volume of the precursor. The reaction kinetics is controlled by adding different amounts of precursor, as indicated. Error bars represent standard deviations from measurements of ten hybrid nanorods in TEM images.

An energy landscape approach to locomotor transitions in complex 3D terrain

by Ratan Othayoth, George Thoms, Chen Li in Proceedings of the National Academy of Sciences

By chasing cockroaches through an obstacle course and studying their movements, the engineers that brought you the cockroach robot and the snake robot discovered that animals’ movement transitions corresponded to overcoming potential energy barriers and that they can jitter around to traverse obstacles in complex terrain.

Effective locomotion in nature happens by transitioning across multiple modes (e.g., walk, run, climb). Using laboratory experiments on a model system, scientists demonstrate that an energy landscape approach helps understand how multipathway transitions across locomotor modes in complex 3D terrain statistically emerge from physical interaction. Animals’ and robots’ locomotor modes are attracted to basins of a potential energy landscape. They can use kinetic energy fluctuation from oscillatory self-propulsion to cross potential energy barriers, escaping from one basin and reaching another to make locomotor transitions. The first-principle energy landscape approach is the beginning of a statistical physics theory of locomotor transitions in complex terrain. It will help understand and predict how animals, and how robots should, move through the real world.

Abstract: Effective locomotion in nature happens by transitioning across multiple modes (e.g., walk, run, climb). Despite this, far more mechanistic understanding of terrestrial locomotion has been on how to generate and stabilize around near–steady-state movement in a single mode. We still know little about how locomotor transitions emerge from physical interaction with complex terrain. Consequently, robots largely rely on geometric maps to avoid obstacles, not traverse them. Recent studies revealed that locomotor transitions in complex three-dimensional (3D) terrain occur probabilistically via multiple pathways. Here, scientists show that an energy landscape approach elucidates the underlying physical principles. They discovered that locomotor transitions of animals and robots self-propelled through complex 3D terrain correspond to barrier-crossing transitions on a potential energy landscape. Locomotor modes are attracted to landscape basins separated by potential energy barriers. Kinetic energy fluctuation from oscillatory self-propulsion helps the system stochastically escape from one basin and reach another to make transitions. Escape is more likely toward lower barrier direction. These principles are surprisingly similar to those of near-equilibrium, microscopic systems. Analogous to free-energy landscapes for multipathway protein folding transitions, our energy landscape approach from first principles is the beginning of a statistical physics theory of multipathway locomotor transitions in complex terrain. This will not only help understand how the organization of animal behavior emerges from multiscale interactions between their neural and mechanical systems and the physical environment, but also guide robot design, control, and planning over the large, intractable locomotor-terrain parameter space to generate robust locomotor transitions through the real world.

“Our findings will help make robots more robust and widen their range of movement in the real world,” says Chen Li, physicist, assistant professor of mechanical engineering at The Johns Hopkins University and the paper’s senior author.

“Search and rescue robots can’t operate solely by avoiding obstacles, like a vacuum robot would try to avoid a couch,” says Ratan Othayoth, a graduate student in Li’s lab and the study’s first author. “These robots have to go through rubble, and to do so, they have to use different types of movement in three dimensions.”
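A one-dimensional toy analogue captures the core mechanism (this is not the authors’ beam-obstacle experiments; the potential, damping, and noise levels are all assumptions): a self-propelled body sitting in one basin of a double-well potential uses kinetic-energy fluctuations to hop over the barrier into the other basin, and larger fluctuations produce more transitions.

```python
# Toy analogue (not the paper's obstacle experiments): a 1-D "body" in a
# double-well potential energy landscape. Oscillatory kinetic-energy
# fluctuations (the wiggle of self-propulsion) let it stochastically cross
# the barrier between basins, i.e. make a locomotor "mode transition".
import numpy as np

def barrier_crossings(fluctuation, barrier=1.0, steps=200_000, dt=1e-3, damping=2.0):
    rng = np.random.default_rng(1)
    x, v, crossings = -1.0, 0.0, 0              # start in the left basin
    for _ in range(steps):
        force = -4 * barrier * x * (x**2 - 1)   # from U(x) = barrier * (x^2 - 1)^2
        v += dt * (force - damping * v) + np.sqrt(dt) * fluctuation * rng.normal()
        x_new = x + dt * v
        if x * x_new < 0:                       # crossed the barrier at x = 0
            crossings += 1
        x = x_new
    return crossings

for fluct in (0.5, 1.5, 3.0):
    print(f"fluctuation {fluct}: {barrier_crossings(fluct)} barrier crossings")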

Adaptive characterization of spatially inhomogeneous fields and errors in qubit registers

by Riddhi Swaroop Gupta, Claire L. Edmunds, Alistair R. Milne, Cornelius Hempel, Michael J. Biercuk in npj Quantum Information

Scientists at the University of Sydney have adapted techniques from autonomous vehicles and robotics to efficiently assess the performance of quantum devices, an important process to help stabilise the emerging technologies.

New quantum computing architectures consider integrating qubits as sensors to provide actionable information useful for calibration or decoherence mitigation on neighboring data qubits, but little work has addressed how such schemes may be efficiently implemented in order to maximize information utilization. Techniques from classical estimation and dynamic control, suitably adapted to the strictures of quantum measurement, provide an opportunity to extract augmented hardware performance through automation of low-level characterization and control. In this work, we present an adaptive learning framework, Noise Mapping for Quantum Architectures (NMQA), for scheduling of sensor–qubit measurements and efficient spatial noise mapping (prior to actuation) across device architectures. Via a two-layer particle filter, NMQA receives binary measurements and determines regions within the architecture that share common noise processes; an adaptive controller then schedules future measurements to reduce map uncertainty. Numerical analysis and experiments on an array of trapped ytterbium ions demonstrate that NMQA outperforms brute-force mapping by up to 20× (3×) in simulations (experiments), calculated as a reduction in the number of measurements required to map a spatially inhomogeneous magnetic field with a target error metric. As an early adaptation of robotic control to quantum devices, this work opens up exciting new avenues in quantum computer science.

Difference between a naive brute-force strategy and the NMQA measurement strategy in reconstructing an inhomogeneous background field. A spatial arrangement of qubits (red circles) is shown with a true unknown field with finite correlations (colored regions). (a) The naive strategy measures the field across the array using a regular grid (red filled circles). (b) The NMQA strategy iteratively chooses which qubit to measure next (black arrows), and additionally stores state-estimation information in a form shared across local neighborhoods (white shaded circles), which reflects the spatial characteristics of the underlying map.
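The scheduling idea at the heart of NMQA can be sketched very simply. The real framework uses a two-layer particle filter and shares information across neighborhoods; the toy below only keeps an independent Beta posterior per qubit and always measures wherever the estimate is currently most uncertain, so it is a minimal sketch of adaptive measurement scheduling, not the published algorithm.

```python
# Minimal sketch of adaptive measurement scheduling (NMQA itself uses a
# two-layer particle filter with neighborhood sharing; this only shows the
# scheduling idea). Each qubit senses an unknown local field as a biased coin;
# we keep a Beta posterior per qubit and measure the most uncertain one next.
import numpy as np

rng = np.random.default_rng(2)
n_qubits = 16
true_field = np.sin(np.linspace(0, np.pi, n_qubits)) * 0.4 + 0.3  # unknown p per qubit

alpha = np.ones(n_qubits)   # Beta(1, 1) prior on each qubit's outcome probability
beta = np.ones(n_qubits)

for _ in range(200):                                   # measurement budget
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    q = int(np.argmax(var))                            # most uncertain qubit next
    outcome = rng.random() < true_field[q]             # one binary measurement
    alpha[q] += outcome
    beta[q] += 1 - outcome

estimate = alpha / (alpha + beta)
print("max abs error:", np.abs(estimate - true_field).max().round(3))
```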

Physics-enhanced neural networks learn order and chaos

by Anshul Choudhary, John F. Lindner, Elliott G. Holliday, Scott T. Miller, Sudeshna Sinha, William L. Ditto in Physical Review E

Teaching physics to neural networks enables those networks to better adapt to chaos within their environment. The work has implications for improved artificial intelligence (AI) applications ranging from medical diagnostics to automated drone piloting.

Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks mimic this behavior by adjusting numerical weights and biases during training sessions to minimize the difference between their actual and desired outputs. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.

The drawback to this neural network training is something called “chaos blindness” — an inability to predict or respond to chaos in a system. Conventional AI is chaos blind. But researchers from NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) have found that incorporating a Hamiltonian function into neural networks better enables them to “see” chaos within a system and adapt accordingly.

Simply put, the Hamiltonian embodies the complete information about a dynamic physical system — the total amount of all the energies present, kinetic and potential. Picture a swinging pendulum, moving back and forth in space over time. Now look at a snapshot of that pendulum. The snapshot cannot tell you where that pendulum is in its arc or where it is going next. Conventional neural networks operate from a snapshot of the pendulum. Neural networks familiar with the Hamiltonian flow understand the entirety of the pendulum’s movement — where it is, where it will or could be, and the energies involved in its movement.

In a proof-of-concept project, the NAIL team incorporated Hamiltonian structure into neural networks, then applied them to a known model of stellar and molecular dynamics called the Hénon-Heiles model. The Hamiltonian neural network accurately predicted the dynamics of the system, even as it moved between order and chaos.
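A minimal sketch of the idea, written in the spirit of Hamiltonian neural networks but not the authors’ exact architecture: a small network learns a scalar H(q, p), and the predicted time derivatives come from Hamilton’s equations via automatic differentiation, so the learned vector field respects the energy structure by construction. The dummy batch only shows the mechanics; real training data would be (state, time-derivative) pairs sampled from trajectories such as the Hénon-Heiles system.

```python
# Sketch of a Hamiltonian neural network (not the authors' exact architecture):
# an MLP learns a scalar H(q, p); predicted dynamics follow Hamilton's equations
# (dq/dt, dp/dt) = (dH/dp, -dH/dq), computed with autograd.
import torch
import torch.nn as nn

class HamiltonianNN(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, p):
        """Return (dq/dt, dp/dt) for a batch of states (q, p)."""
        qp = torch.cat([q, p], dim=-1).requires_grad_(True)
        H = self.net(qp).sum()
        grad = torch.autograd.grad(H, qp, create_graph=True)[0]
        dHdq, dHdp = grad.chunk(2, dim=-1)
        return dHdp, -dHdq

model = HamiltonianNN(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch: states and target time derivatives (stand-ins for data
# sampled from trajectories of a system like Henon-Heiles).
q, p = torch.randn(32, 2), torch.randn(32, 2)
dq_true, dp_true = torch.randn(32, 2), torch.randn(32, 2)

dq_pred, dp_pred = model(q, p)
loss = ((dq_pred - dq_true) ** 2 + (dp_pred - dp_true) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
print("one training step, loss:", float(loss))
```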

“The Hamiltonian is really the ‘special sauce’ that gives neural networks the ability to learn order and chaos,” says John Lindner, visiting researcher at NAIL, professor of physics at The College of Wooster and corresponding author of a paper describing the work. “With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems.”

The work appears in Physical Review E and is supported in part by the Office of Naval Research (grant N00014–16–1–3066). NC State postdoctoral researcher Anshul Choudhary is first author. Bill Ditto, professor of physics at NC State, is director of NAIL. Visiting researcher Scott Miller; Sudeshna Sinha, from the Indian Institute of Science Education and Research Mohali; and NC State graduate student Elliott Holliday also contributed to the work.

Hamiltonian flow. (a) Hénon-Heiles orbit wrapped many times around the hypertorus appears to intersect at the creases in this 3D projection. (b) Different colors indicating the fourth dimension show that the apparent intersections are actually separated in 4D phase space.

Videos

Some impressive work here from IHMC and IIT getting Atlas to take steps upward in a way that’s much more human-like than robot-like, which ends up reducing maximum torque requirements by 20 percent.

GITAI’s G1 is a general-purpose robot designed for space. The G1 will enable the automation of various tasks inside and outside space stations and for lunar base development.

The Korea Atomic Energy Research Institute is not messing around with ARMstrong, their robot for nuclear and radiation emergency response.

Harmony’s Preprogrammed Exercises promote functional treatment through patient-specific movements that can enable an increased number of repetitions per session without placing a larger physical burden on therapists or their resources. As the only rehabilitation exoskeleton with Bilateral Sync Therapy (BST), Harmony enables intent-based therapy by registering healthy arm movements and synchronizing that motion onto the stroke-affected side to help reestablish neural pathways.

Two more interviews this week of celebrity roboticists from MassRobotics: Helen Greiner and Marc Raibert.

Upcoming events

CLAWAR 2020 — August 24–26, 2020 — Moscow, Russia
ICUAS 2020 — September 1–4, 2020 — Athens, Greece
ICRES 2020 — September 28–29, 2020 — Taipei, Taiwan
ICSR 2020 — November 14–16, 2020 — Golden, Colorado

MISC

Subscribe to Paradigm’s detailed company updates!

Medium. Twitter. Telegram. Reddit.

Main sources

Research articles

Science Robotics

Science Daily
