RT/ A biomimetic eye with a hemispherical perovskite nanowire array retina

Paradigm
Published in Paradigm
17 min read · May 28, 2020

Robotics biweekly vol. 5, 14th May — 28th May

TL;DR

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025. It is predicted that this market will hit the 100 billion U.S. dollar mark in 2020.
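For readers who want to sanity-check figures like these, the CAGR arithmetic is straightforward; the sketch below backs out the 2018 base value implied by the quoted 26 percent rate and 2025 endpoint (the base itself is derived here for illustration, not taken from Statista):

```python
# Compound annual growth rate: value_end = value_start * (1 + rate) ** years.
# Back out the 2018 base implied by a 26% CAGR reaching ~210B USD in 2025.
def implied_start(value_end: float, rate: float, years: int) -> float:
    return value_end / (1 + rate) ** years

base_2018 = implied_start(210.0, 0.26, 7)  # 2018 -> 2025 is 7 growth steps
print(f"Implied 2018 market size: {base_2018:.1f}B USD")
```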

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Research articles

A biomimetic eye with a hemispherical perovskite nanowire array retina

by Leilei Gu, Swapnadeep Poddar, Yuanjing Lin, Zhenghao Long, Daquan Zhang, Qianpeng Zhang, Lei Shu, Xiao Qiu, Matthew Kam, Ali Javey & Zhiyong Fan in Nature

A team of researchers has built an artificial eye that uses perovskite nanowires, with capabilities that come close to those of the human eye. In their paper, the group describes how the eye was built and how its performance compares to that of its human counterpart.

Human eyes possess exceptional image-sensing characteristics such as an extremely wide field of view, high resolution and sensitivity with low aberration. Biomimetic eyes with such characteristics are highly desirable, especially in robotics and visual prostheses. However, the spherical shape and the retina of the biological eye pose an enormous fabrication challenge for biomimetic devices. Researchers present an electrochemical eye with a hemispherical retina made of a high-density array of nanowires mimicking the photoreceptors on a human retina. The device design has a high degree of structural similarity to a human eye with the potential to achieve high imaging resolution when individual nanowires are electrically addressed. Additionally, they demonstrate the image-sensing function of their biomimetic device by reconstructing the optical patterns projected onto the device. This work may lead to biomimetic photosensing devices that could find use in a wide spectrum of technological applications.
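As a rough illustration of the pattern-reconstruction demonstration described above (a toy model, not the actual device readout or its parameters):

```python
import numpy as np

# Toy model of image reconstruction from a pixel array: each "nanowire
# pixel" is assumed to produce a photocurrent roughly proportional to the
# local light intensity, plus readout noise. Thresholding the currents
# recovers the projected binary pattern.
rng = np.random.default_rng(0)
pattern = np.zeros((10, 10))
pattern[2:8, 4:6] = 1.0             # projected optical pattern (a bar)

responsivity = 50.0                 # hypothetical current per unit intensity
photocurrent = responsivity * pattern + rng.normal(0, 2.0, pattern.shape)

reconstruction = (photocurrent > responsivity / 2).astype(int)
print("Recovered pixels match:", np.array_equal(reconstruction, pattern.astype(int)))
```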

Micro-rocket robot with all-optic actuating and tracking in blood

by Dengfeng Li, Chao Liu, Yuanyuan Yang, Lidai Wang & Yajing Shen in Light: Science & Applications volume

Micro/nanorobots have long been expected to reach all parts of the human body through blood vessels for medical treatment or surgery. However, in the current stage, it is still challenging to drive a microrobot in viscous media at high speed and difficult to observe the shape and position of a single microrobot once it enters the bloodstream. Scientists propose a new micro-rocket robot and an all-optic driving and imaging system that can actuate and track it in blood with microscale resolution. To achieve a high driving force, they engineered the microrobot to have a rocket-like triple-tube structure. Owing to the interface design, the 3D-printed micro-rocket can reach a moving speed of 2.8 mm/s (62 body lengths per second) under near-infrared light actuation in a blood-mimicking viscous glycerol solution. They also show that the micro-rocket robot is successfully tracked at a 3.2-µm resolution with an optical-resolution photoacoustic microscope in blood. This work paves the way for microrobot design, actuation, and tracking in the blood environment, which may broaden the scope of microrobotic applications in the biomedical field.
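The two quoted speed figures together imply the robot's body length; a quick check:

```python
# 2.8 mm/s at 62 body lengths per second implies a body length of ~45 um.
speed_m_per_s = 2.8e-3          # absolute speed, 2.8 mm/s
relative_speed = 62.0           # body lengths per second
body_length_um = speed_m_per_s / relative_speed * 1e6
print(f"Implied body length: {body_length_um:.0f} um")
```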

Inflatable soft jumper inspired by shell snapping

by Benjamin Gorissen, David Melancon, Nikolaos Vasios, Mehdi Torbati & Katia Bertoldi in Science Robotics

Fluidic soft actuators are enlarging the robotics toolbox by providing flexible elements that can display highly complex deformations. Although these actuators are adaptable and inherently safe, their actuation speed is typically slow because the influx of fluid is limited by viscous forces. To overcome this limitation and realize soft actuators capable of rapid movements, scientists focused on spherical caps that exhibit isochoric snapping when pressurized under volume-controlled conditions. First, they noted that this snap-through instability leads to both a sudden release of energy and a fast cap displacement. Inspired by these findings, they investigated the response of actuators that comprise such spherical caps as building blocks and observed the same isochoric snapping mechanism upon inflation. Last, researchers demonstrated that this instability can be exploited to make these actuators jump even when inflated at a slow rate. The study provides the foundation for the design of an emerging class of fluidic soft devices that can convert a slow input signal into a fast output deformation.
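The snap-through mechanism can be pictured with a toy non-monotonic pressure-volume curve (the cubic below is illustrative only, not the paper's shell model): between the local pressure maximum and minimum the curve has dP/dV < 0, and a volume-controlled inflation forces the system through this falling branch, releasing stored elastic energy abruptly.

```python
import numpy as np

# Toy non-monotonic pressure-volume curve for a bistable spherical cap
# (an illustrative cubic, not the paper's model). The falling branch
# (dP/dV < 0) is where snap-through occurs under volume control.
V = np.linspace(0.0, 2.0, 2001)
P = V**3 - 3.3 * V**2 + 3.0 * V

dPdV = np.gradient(P, V)
snap_branch = V[dPdV < 0]           # unstable falling branch
print(f"Falling branch: V = {snap_branch.min():.2f} to {snap_branch.max():.2f}")
```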

“Soft robots have enormous potential for a wide spectrum of applications, ranging from minimally invasive surgical tools and exoskeletons to warehouse grippers and video game add-ons,” said Benjamin Gorissen, a postdoctoral fellow at SEAS and co-first author of the paper. “But applications for today’s soft actuators are limited by their speed.”

“In this work, we showed that we can harness elastic instabilities to overcome this restriction, enabling us to decouple the slow input from the output and make a fast-jumping fluidic soft actuator,” said David Melancon, a graduate student at SEAS and co-first author of the paper.

“This actuator is a building block that could be integrated into a fully soft robotic system to give soft robots that can already crawl, walk and swim the ability to jump,” said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the study. “By incorporating our jumper into these designs, these robots could navigate safely through uncharted landscapes.”

An objective Bayesian analysis of life’s early start and our late arrival

by David Kipping in PNAS

Does life’s early emergence mean that it would reappear quickly if we were to rerun Earth’s clock? If the timescale for the evolution of intelligence is very slow, then a quick start to life is actually necessary for our existence, and thus does not necessarily mean abiogenesis is a generally quick process. Employing objective Bayesianism and a uniform-rate process assumption, the author uses just the chronology of life’s appearance in the fossil record, that of ourselves, and Earth’s habitability window to infer the true underlying rates, accounting for this subtle selection effect. The results give betting odds of >3:1 that abiogenesis is indeed a rapid process versus a slow and rare scenario, but 3:2 odds that intelligence may be rare.

Abstract: Life emerged on Earth within the first quintile of its habitable window, but a technological civilization did not blossom until its last. Efforts to infer the rate of abiogenesis, based on its early emergence, are frustrated by the selection effect that if the evolution of intelligence is a slow process, then life’s early start may simply be a prerequisite to our existence, rather than useful evidence for optimism. In this work, researchers interpret the chronology of these two events in a Bayesian framework, extending upon previous work by considering that the evolutionary timescale is itself an unknown that needs to be jointly inferred, rather than fiducially set. They further adopt an objective Bayesian approach, such that the results would be agreed upon even by those using wildly different priors for the rates of abiogenesis and evolution — common points of contention for this problem. It is then shown that the earliest microfossil evidence for life indicates that the rate of abiogenesis is at least 2.8 times more likely to be a typically rapid process, rather than a slow one. This modest limiting Bayes factor rises to 8.7 if the more disputed evidence of 13C-depleted zircon deposits is accepted. For intelligence evolution, it is found that a rare-intelligence scenario is slightly favored at 3:2 betting odds. Thus, if we reran Earth’s clock, one should statistically favor life to frequently reemerge, but intelligence may not be as inevitable.

(A) Bayes factor for a model where life emerges rapidly (λ_L ≫ 1/t′_L) versus slowly (λ_L ≪ 1/t′_L) on Earth. A quick start is favored by at least a factor of 3, conditioned upon early microfossil evidence, independent of assumptions regarding the evolutionary timescale of intelligent observers and priors on the abiogenesis rate. (B) Bayes factor of a scenario where intelligent observers typically emerge on a much longer timescale than occurred on Earth, versus the ensemble of possibilities. There is a weak preference for a rare-intelligence scenario.
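The core selection-effect argument can be sketched with a simple conditioned Poisson model (the window length and the two rates below are illustrative assumptions, not the paper's objective-Bayesian machinery). The microfossils only tell us life had appeared *by* some early time, and our existence conditions on life appearing at all within the habitable window; with these toy numbers the likelihood ratio happens to land near the paper's >3:1 figure.

```python
import math

# P(life first appears by time t | it appears at all within window T),
# for a Poisson abiogenesis process with rate lam.
def p_life_by(t: float, lam: float, T: float) -> float:
    return (1 - math.exp(-lam * t)) / (1 - math.exp(-lam * T))

T, t_L = 5.0, 1.0    # illustrative: 5 Gyr window, life by the first quintile

fast = p_life_by(t_L, lam=3.0, T=T)   # typically rapid abiogenesis
slow = p_life_by(t_L, lam=0.1, T=T)   # typically slow, rare abiogenesis
print(f"Likelihood ratio (fast : slow) = {fast / slow:.1f} : 1")
```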

Toward neuroprosthetic real-time communication from in silico to biological neuronal network via patterned optogenetic stimulation

by Yossi Mosbacher, Farad Khoyratee, Miri Goldin, Sivan Kanner, Yenehaetra Malakai, Moises Silva, Filippo Grassia, Yoav Ben Simon, Jesus Cortes, Ari Barzilai, Timothée Levi & Paolo Bonifazi in Scientific Reports

Researchers have created a way for artificial neuronal networks to communicate with biological neuronal networks. The new system converts artificial electrical spiking signals to a visual pattern that is then used to entrain the real neurons via optogenetic stimulation of the network. This advance will be important for future neuroprosthetic devices that replace damaged neurons with artificial neuronal circuitry.

Restoration of the communication between brain circuitry is a crucial step in the recovery of brain damage induced by traumatic injuries or neurological insults. In this work researchers present a study of real-time unidirectional communication between a spiking neuronal network (SNN) implemented on a digital platform and an in-vitro biological neuronal network (BNN), generating similar spontaneous patterns of activity in both space and time. The communication between the networks was established using patterned optogenetic stimulation via a modified digital light projector (DLP) receiving real-time input dictated by the spiking neurons’ state. Each stimulation consisted of a binary image composed of 8 × 8 squares, representing the state of 64 excitatory neurons. The spontaneous and evoked activity of the biological neuronal network was recorded using a multi-electrode array in conjunction with calcium imaging. The image was projected onto a sub-portion of the cultured network covered by a subset of all the electrodes. The unidirectional information transmission (SNN to BNN) was estimated using the similarity matrix of the input stimuli and output firing. Information transmission was studied in relation to the distribution of stimulus frequency and stimulus intensity, both regulated by the spontaneous dynamics of the SNN, and to the entrainment of the biological networks. They demonstrate that high information transfer from SNN to BNN is possible and identify a set of conditions under which such transfer can occur, namely when the spiking network synchronizations drive the biological synchronizations (entrainment) and in a linear regime of response to the stimuli. This research provides further evidence of the possible application of miniaturized SNNs in future neuroprosthetic devices for local replacement of injured micro-circuitries capable of communicating within larger brain networks.
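The stimulus encoding described above is easy to sketch (the row-major layout of neurons onto squares is an assumption for illustration):

```python
import numpy as np

# Sketch of the stimulus encoding: the binary state of 64 excitatory
# spiking neurons is reshaped into an 8x8 image, which the projector
# then uses as the optogenetic stimulation pattern for the culture.
rng = np.random.default_rng(1)
neuron_spiking = rng.integers(0, 2, size=64)   # 1 = neuron fired this frame

stim_image = neuron_spiking.reshape(8, 8)      # one square per neuron
print(f"{stim_image.sum()} of 64 squares lit this frame")
```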

Scheme of the experimental set-up. (A) The experimental set-up is composed of the FPGA board on which the spiking neural network runs, the video-projecting system (a Digital Micromirror Device projector (DMD)) organized around an upright microscope (right image), and the biological neuronal network (culture) grown on a multi-electrode array. Neural activity is monitored by electrical recordings and calcium imaging. A planned extension of this set-up is to add real-time feature extraction from the biological neuronal network to stimulate the spiking neural network, in order to achieve real-time bi-directional communication between the two networks. (B) Neuromorphic board schematic block. The FPGA board includes different VHDL modules: SNN core (neuron, synapse, plasticity, axonal delay and synaptic noise), UART, VGA and switch communication, and a burst detector module. All parameters are stored in RAM and are updated every 1 ms. The burst detector module sends a triggering signal to the STG stimulator and an enable signal for the 8 × 8 matrix image. The SNN core module sends raster plots updated at a defined interval via UART, and neuron activity for the VGA image module. (C, left) Scheme of the optical path and TTL control of the video-projection. The FPGA board (P) sends two simultaneous outputs: the VGA video signal (F) to the video-projector (E, modified Sharp Notevision XR-10X DMD), coding for the image representing the 8 × 8 binary matrix shaped by the SNN activity, and the TTL signal triggering the stimulator (A). The stimulator generates the signal (B) with 5 pulses of 5 V lasting 30 ms with an interval of 40 ms, which controls the custom-made power supply of the Luminus PT-120-B high-power blue LED (D). The image generated by the video-projector (G) is at the focal length (250 mm) of the coupling lens (H) located in front of the cube (I) with the long-pass dichroic mirror and the emission filter (located above the cube). The other cube (J) contains the dichroic mirror and excitation notch filter (dsRed) for red calcium imaging. The image (G) is focused at the adjustable stage (L) of the microscope through a 10× objective (K). The sample image located at the stage (L) is recorded by the camera mounted on the microscope (N) after being focused by the tube lens (M). A PC (O) records the camera images. (C, right) Picture of the optical set-up, including the DMD projector, which projects into an upright epifluorescence microscope through an additional optical pathway obtained between the camera and the excitation/dichroic cube placed above the neuron culture, by splitting the camera pathway orthogonally with a dichroic mirror.

Combined muscles and sensors made from soft materials allow for flexible robots

Materials provided by University of Tokyo

Robots can be made from soft materials, but the flexibility of such robots is limited by the inclusion of rigid sensors necessary for their control. Researchers created embedded sensors that replace rigid ones, offering the same functionality while affording the robot greater flexibility. Soft robots can be more adaptable and resilient than more traditional rigid designs. The team used cutting-edge machine learning techniques to create their design.

Automation is an increasingly important subject, and core to this concept are the often paired fields of robotics and machine learning. The relationship between machine learning and robotics is not just limited to the behavioral control of robots, but is also important for their design and core functions. A robot which operates in the real world needs to understand its environment and itself in order to navigate and perform tasks.

If the world were entirely predictable, then a robot would be fine moving around without the need to learn anything new about its environment. But reality is unpredictable and ever changing, so machine learning helps robots adapt to unfamiliar situations. Although this is theoretically true for all robots, it is especially important for soft-bodied robots, as the physical properties of these are intrinsically less predictable than those of their rigid counterparts.

“Take for example a robot with pneumatic artificial muscles (PAM), rubber and fiber-based fluid-driven systems which expand and contract to move,” said Associate Professor Kohei Nakajima from the Graduate School of Information Science and Technology. “PAMs inherently suffer random mechanical noise and hysteresis, which is essentially material stress over time. Accurate laser-based monitors help maintain control through feedback, but these rigid sensors restrict a robot’s movement, so we came up with something new.”

Nakajima and his team thought if they could model a PAM in real time, then they could maintain good control of it. However, given the ever-changing nature of PAMs, this is not realistic with traditional methods of mechanical modeling. So the team turned to a powerful and established machine learning technique called reservoir computing. This is where information about a system, in this case the PAM, is fed into a special artificial neural network in real time, so the model is ever changing and thus adapts to the environment.

“We found the electrical resistance of PAM material changes depending on its shape during a contraction. So we pass this data to the network so it can accurately report on the state of the PAM,” said Nakajima. “Ordinary rubber is an insulator, so we incorporated carbon into our material to more easily read its varying resistance. We found the system emulated the existing laser-displacement sensor with equally high accuracy in a range of test conditions.”
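A minimal echo state network conveys the reservoir computing idea described above (all sizes and signals below are illustrative, not the study's actual set-up): a fixed random recurrent network is driven by a noisy sensor-like input, and only a linear readout is trained to recover the underlying state.

```python
import numpy as np

# Minimal echo state network sketch: the reservoir weights stay fixed;
# a ridge-regression readout learns to map reservoir states to a target.
rng = np.random.default_rng(0)
n_res, n_steps = 100, 1000

t = np.linspace(0, 20 * np.pi, n_steps)
target = np.sin(t)                          # "true" actuator state (toy signal)
inp = target + rng.normal(0, 0.1, n_steps)  # noisy resistance-like reading

W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state property

states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for k in range(n_steps):
    x = np.tanh(W @ x + W_in * inp[k])      # drive reservoir with the input
    states[k] = x

# Train the readout on the first half, evaluate on the second half.
half = n_steps // 2
A = states[:half]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ target[:half])
pred = states[half:] @ w_out
rmse = np.sqrt(np.mean((pred - target[half:]) ** 2))
print(f"Test RMSE: {rmse:.3f}")
```

Note the design choice that makes this attractive for embedded sensing: training touches only the linear readout, so the model can be updated cheaply in real time while the "reservoir" itself can even be a physical system, such as the carbon-doped rubber of the PAM.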

Thanks to this method, a new generation of soft robotic technology may be possible. This could include robots that work with humans, for example wearable rehabilitation devices or biomedical robots, as the extra soft touch means interactions with them are gentle and safe.

“Our study suggests reservoir computing could be used in applications besides robotics. Remote-sensing applications, which need real-time information processed in a decentralized manner, could greatly benefit,” said Nakajima. “And other researchers who study neuromorphic computing — intelligent computer systems — might also be able to incorporate our ideas into their own work to improve the performance of their systems.”

Dynamic simulation of articulated soft robots

by Weicheng Huang, Xiaonan Huang, Carmel Majidi & M. Khalid Jawed in Nature Communications

Scientists have adapted sophisticated computer graphics technology, used to create hair and fabric in animated films, to simulate the movements of soft, limbed robots for the first time. The advance is a major step toward making such robots autonomous.

Soft robots are primarily composed of soft materials that can allow for mechanically robust maneuvers that are not typically possible with conventional rigid robotic systems. However, owing to the current limitations in simulation, design and control of soft robots often involve painstaking trial and error. With the ultimate goal of a computational framework for soft robotic engineering, here researchers introduce a numerical simulation tool for limbed soft robots that draws inspiration from discrete differential geometry based simulation of slender structures. The simulation incorporates an implicit treatment of the elasticity of the limbs, inelastic collision between a soft body and rigid surface, and unilateral contact and Coulombic friction with an uneven surface. The computational efficiency of the numerical method enables it to run faster than real-time on a desktop processor. Their experiments and simulations show quantitative agreement and indicate the potential role of predictive simulations for soft robot design.

(a) Geometric discretization of the soft rolling robot. (b) The bending curvature at the i-th node is κ_i = 1/R_i = 2 tan(ϕ_i/2)/Δl. (c) Coulomb law for frictional contact.
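The caption's discrete curvature formula can be checked directly; this sketch samples nodes from a circle of radius R, where the discrete value should approach the continuous curvature 1/R:

```python
import numpy as np

# Discrete bending curvature kappa_i = 2*tan(phi_i/2)/dl, where phi_i is
# the turning angle at node i of a polyline with uniform edge length dl
# (the formula from the figure caption, applied to a sampled circle).
def discrete_curvature(nodes: np.ndarray) -> np.ndarray:
    e = np.diff(nodes, axis=0)                     # edge vectors
    dl = np.linalg.norm(e, axis=1)
    cos_phi = np.einsum('ij,ij->i', e[:-1], e[1:]) / (dl[:-1] * dl[1:])
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))   # turning angles
    return 2.0 * np.tan(phi / 2.0) / dl[1:]

R, n = 2.0, 200
theta = np.linspace(0, np.pi, n)
circle = R * np.column_stack([np.cos(theta), np.sin(theta)])
kappa = discrete_curvature(circle)
print(f"max |kappa - 1/R| = {np.abs(kappa - 1/R).max():.2e}")
```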

Comedians in Cafes Getting Data: Evaluating Timing and Adaptivity in Real-World Robot Comedy Performance

by John Vilk, Naomi T Fitter in ACM/IEEE International Conference on Human-Robot Interaction

Standup comedian Jon the Robot likes to tell his audiences that he does lots of auditions but has a hard time getting bookings.

Social robots and autonomous social agents are becoming more ingrained in our everyday lives. Interactive agents from Siri to Anki’s Cozmo robot include the ability to tell jokes to engage users. This ability will grow in importance as in-home social agents take on more intimate roles, so it is important to gain a greater understanding of how robots can best use humor. Stand-up comedy provides a naturally structured experimental context for initial studies of robot humor. In this preliminary work, researchers aimed to compare audience responses to a robotic stand-up comedian over multiple performances that varied robot timing and adaptivity. The first study of 22 performances in the wild showed that a robot with good timing was significantly funnier. A second study of 10 performances found that an adaptive performance was not necessarily funnier, although adaptations almost always improved audience perception of individual jokes. The end result of this research provides key clues for how social robots can best engage people with humor.

More from Science Robotics:

20 MAY 2020 VOL 5, ISSUE 42

Multifunctional surface microrollers for targeted cargo delivery in physiological blood flow

BY YUNUS ALAPAN, UGUR BOZUYUK, PELIN ERKOC, ALP CAN KARACAKOL, METIN SITTI

Leukocyte-inspired microrollers enable upstream propulsion, controlled navigation, and targeted active drug delivery in blood flow.

Abstract Full Text PDF

Inflatable soft jumper inspired by shell snapping

BY BENJAMIN GORISSEN, DAVID MELANCON, NIKOLAOS VASIOS, MEHDI TORBATI, KATIA BERTOLDI

Isochoric snapping of elastomeric spherical caps enables fast actuation in fluidic soft robots.

Abstract Full Text PDF

Material remodeling and unconventional gaits facilitate locomotion of a robophysical rover over granular terrain

BY SIDDHARTH SHRIVASTAVA, ANDRAS KARSAI, YASEMIN OZKAN AYDIN, ROSS PETTINGER, WILLIAM BLUETHMANN, ROBERT O. AMBROSE, DANIEL I. GOLDMAN

A laboratory model of the NASA Resource Prospector climbs loose sandy slopes via dynamic terrain remodeling.

Abstract Full Text PDF

Bioinspired underwater legged robot for seabed exploration with low environmental disturbance

BY G. PICARDI, M. CHELLAPURATH, S. IACOPONI, S. STEFANNI, C. LASCHI, M. CALISTI

An underwater legged robot, inspired by benthic animals, opens new perspectives for seabed exploration.

Abstract Full Text PDF

Videos

Rocos is a robotics company based in New Zealand, and they have a Spot that’s trying to solve the biggest crisis that country has: sheep.

The new video shows a robot teddy bear that uses an improved Bayesian Interaction Primitives approach to learn how to hug a person. Given a small number of examples, the robot can learn to generalize to different interaction partners in space and time.

Upcoming events

ICRA 2020 — June 01, 2020 — [Virtual Conference]
RSS 2020 — July 12–16, 2020 — [Virtual Conference]
CLAWAR 2020 — August 24–26, 2020 — Moscow, Russia
ICUAS 2020 — September 1–4, 2020 — Athens, Greece
ICRES 2020 — September 28–29, 2020 — Taipei, Taiwan
ICSR 2020 — November 14–16, 2020 — Golden, Colorado

Subscribe to Paradigm’s detailed company updates!

Medium. Twitter. Telegram. Reddit.

Main sources

Research articles

Science Robotics

Science Daily
