RT/ Nanotech scientists create world’s smallest origami bird

Robotics biweekly vol.25, 11th February — 25th March

TL;DR

  • Researchers have created micron-sized shape memory actuators that enable atomically thin two-dimensional materials to fold themselves into 3D configurations. All they require is a quick jolt of voltage. And once the material is bent, it holds its shape — even after the voltage is removed.
  • Engineers have designed a system of self-oscillating flexible materials that display a distinctive mode of dynamic self-organization. In addition to exhibiting the swarmalator behavior, the component materials mutually adapt their overall shapes as they interact in a fluid-filled chamber. These systems can pave the way for fabricating collaborative, self-regulating soft robotic systems.
  • An AI tool offers new opportunities for analyzing images taken with microscopes. A study shows that the tool, which has already received international recognition, can fundamentally change microscopy and pave the way for new discoveries and areas of use within both research and industry.
  • Robotics researchers are developing exoskeletons and prosthetic legs capable of thinking and moving on their own using sophisticated artificial intelligence technology.
  • Artificial intelligence is part of our modern life. A crucial question for practical applications is how fast such intelligent machines can learn. An experiment has answered this question, showing that quantum technology enables a speed-up in the learning process. The physicists achieved this result by using a photonic quantum processor, which operates on single photons, as the learning robot.
  • How do you turn ‘dumb’ headphones into smart ones? Engineers have invented a cheap and easy way by transforming headphones into sensors that can be plugged into smartphones, identify their users, monitor their heart rates and perform other services. Their invention, called HeadFi, is based on a small plug-in headphone adapter that turns a regular headphone into a sensing device.
  • Researchers have developed a new tissue-section analysis system for diagnosing breast cancer based on artificial intelligence. For the first time, morphological, molecular and histological data are integrated in a single analysis. Furthermore, the system provides a clarification of the AI decision process in the form of heatmaps.
  • Man-Machine Synergy Effectors, Inc. is a Japanese company working on an absolutely massive “human machine synergistic effect device,” which is a huge robot controlled by a nearby human using a haptic rig.
  • DARPA is making progress on its AI dogfighting program, with physical flight tests expected this year.
  • A couple of retro robotics videos, one showing teleoperated humanoids from 2000, and the other showing a robotic guide dog from 1976.
  • Check out robotics upcoming events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025. The market was projected to pass the 100 billion U.S. dollar mark in 2020.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
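
As a quick aside on the arithmetic behind such projections, compound annual growth is straightforward to reproduce. The sketch below shows how a CAGR figure is applied and how it can be inferred from two endpoints; the numeric values are placeholders for illustration, not Statista figures.

```python
def project(base_value_bn: float, cagr: float, years: int) -> float:
    """Compound a market size forward: value * (1 + CAGR)^years."""
    return base_value_bn * (1.0 + cagr) ** years

def implied_cagr(start_bn: float, end_bn: float, years: int) -> float:
    """Infer the CAGR that turns start_bn into end_bn over the given number of years."""
    return (end_bn / start_bn) ** (1.0 / years) - 1.0

# Illustrative placeholder values, not figures taken from the chart above.
print(project(100.0, 0.26, 3))        # 100 bn USD growing at 26% for 3 years
print(implied_cagr(100.0, 210.0, 5))  # CAGR implied by 100 -> 210 bn USD over 5 years
```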

Latest News & Researches

Micrometer-sized electrically programmable shape-memory actuators for low-power microrobotics

by Qingkun Liu, Wei Wang, Michael F. Reynolds, Michael C. Cao, Marc Z. Miskin, Tomas A. Arias, David A. Muller, Paul L. McEuen, Itai Cohen in Science Robotics

If you want to build a fully functional nanosized robot, you need to incorporate a host of capabilities, from complicated electronic circuits and photovoltaics to sensors and antennas.

But just as importantly, if you want your robot to move, you need it to be able to bend.

Cornell researchers have created micron-sized shape memory actuators that enable atomically thin two-dimensional materials to fold themselves into 3D configurations. All they require is a quick jolt of voltage. And once the material is bent, it holds its shape — even after the voltage is removed.

As a demonstration, the team created what is potentially the world’s smallest self-folding origami bird. And it’s not a lark.

The paper’s lead author is postdoctoral researcher Qingkun Liu. The project is led by Itai Cohen, professor of physics, and Paul McEuen, the John A. Newman Professor of Physical Science. McEuen and Cohen’s ongoing collaboration has so far generated a throng of nanoscale machines and components, each seemingly faster, smarter and more elegant than the last.

“We want to have robots that are microscopic but have brains on board. So that means you need to have appendages that are driven by complementary metal-oxide-semiconductor (CMOS) transistors, basically a computer chip on a robot that’s 100 microns on a side,” Cohen said.

Imagine a million fabricated microscopic robots released from a wafer, folding themselves into shape, crawling free and going about their tasks, even assembling into more complicated structures. That’s the vision.

“The hard part is making the materials that respond to the CMOS circuits,” Cohen said. “And this is what Qingkun and his colleagues have done with this shape memory actuator that you can drive with voltage and make it hold a bent shape.”

The machines fold themselves fast, within 100 milliseconds. They can also flatten and refold themselves thousands of times. And they only need a single volt to be powered to life.

The team has already been recognized by Guinness World Records for creating the smallest walking robot. Now, they hope to capture another record with a new self-folding origami bird that is only 60 microns wide.

The team is currently working to integrate their shape memory actuators with circuits to make walking robots with foldable legs as well as sheet-like robots that move by undulating forward. These innovations may someday lead to nano-Roomba-type robots that can clean bacterial infection from human tissue, micro-factories that can transform manufacturing, and robotic surgical instruments that are ten times smaller than current devices, according to Cohen.

Shape-memory SEAs: Composition, structure, and basic operation.

(A) Literature survey of the performance of voltage-driven bendable actuators. The regions with black borders show shape-memory actuators, and the regions without black borders show shape-change actuators without memory. (B) A false-color TEM image of a SEA cross section: 7 nm of platinum (red) capped on one side by a 2-nm TiO2 film (green) grown on the silicon wafer (magenta). (C) A false-color SEM image of a SEA microgripper that has been set to the reduced state and then removed from solution via critical point drying. In the reduced state, SEAs are bent away from the inert layer. (D) A false-color SEM image of a SEA microgripper that has been set to the oxidized state and then removed from solution via critical point drying. The platinum oxide causes the microgripper to flatten. The insets of (C) and (D) show a cross-sectional schematic of the SEA hinge in the reduced and oxidized states, respectively. The SEA hinge holds each of these states even when the voltage is removed from the device. (E) Schematic of the experiment. SEAs are patterned with panels to define hinges. Researchers apply a voltage to the actuator with a platinum/iridium probe versus a distant Ag/AgCl electrode, causing the actuator to bend. (F to J) Optical micrographs showing a SEA microgripper in action. (F) The SEA microgripper starts in the oxidized state. (G) They then apply −0.5 V versus Ag/AgCl to reduce it, causing the panels to curve up. (H) When they remove the probe, the SEA hinge remains in the reduced curved state under the OCP. (I) They oxidize the SEA hinge again by applying 1.1 V versus Ag/AgCl, causing the SEA hinge to flatten. (J) The SEA hinge remains in the oxidized, flat state when researchers remove the probe.
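
To make the latching behaviour in the caption concrete, the toy Python sketch below treats the SEA hinge as a two-state electrochemical latch. The voltage thresholds come from the caption above; everything else (the class name, the state labels, the logic) is an illustrative simplification, not the authors’ device model.

```python
class ShapeMemoryActuatorModel:
    """Toy model of the SEA hinge described above: a two-state electrochemical
    latch that holds its last set state when the voltage is removed.
    Voltage thresholds follow the figure caption (volts vs. an Ag/AgCl
    reference); everything else is a simplification for illustration only."""

    REDUCE_V = -0.5   # reduces the platinum surface -> hinge bends
    OXIDIZE_V = 1.1   # oxidizes the platinum surface -> hinge flattens

    def __init__(self):
        self.state = "oxidized"   # start flat, as in panel (F)

    def apply_voltage(self, volts: float) -> None:
        if volts <= self.REDUCE_V:
            self.state = "reduced"    # bent
        elif volts >= self.OXIDIZE_V:
            self.state = "oxidized"   # flat
        # intermediate voltages leave the latched state unchanged

    def remove_probe(self) -> str:
        # No change: the hinge holds its shape at open-circuit potential.
        return self.state

hinge = ShapeMemoryActuatorModel()
hinge.apply_voltage(-0.5)        # bend
print(hinge.remove_probe())      # -> "reduced": stays bent with no power applied
hinge.apply_voltage(1.1)         # flatten
print(hinge.remove_probe())      # -> "oxidized": stays flat
```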

Chemical pumps and flexible sheets spontaneously form self-regulating oscillators in solution

by Raj Kumar Manna, Oleg E. Shklyaev, Anna C. Balazs in Proceedings of the National Academy of Sciences

During the swarming of birds or fish, each entity coordinates its location relative to the others, so that the swarm moves as one larger, coherent unit. Fireflies on the other hand coordinate their temporal behavior: within a group, they eventually all flash on and off at the same time and thus act as synchronized oscillators.

Few entities, however, coordinate both their spatial movements and inherent time clocks; the limited examples are termed “swarmalators”, which simultaneously swarm in space and oscillate in time. Japanese tree frogs are exemplar swarmalators: each frog changes both its location and rate of croaking relative to all the other frogs in a group.

Moreover, the frogs change shape when they croak: the air sac below their mouth inflates and deflates to make the sound. This coordinated behavior plays an important role during mating and hence, is vital to the frogs’ survival. In the synthetic realm there are hardly any materials systems where individual units simultaneously synchronize their spatial assembly, temporal oscillations and morphological changes. Such highly self-organizing materials are important for creating self-propelled soft robots that come together and cooperatively alter their form to accomplish a regular, repeated function.
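
For readers unfamiliar with the term, the generic swarmalator model from the physics literature (O’Keeffe, Hong and Strogatz) couples each agent’s position to its internal phase. The minimal numpy sketch below illustrates that coupling; it is a textbook toy with arbitrary parameter values, not the sheet system studied in this paper.

```python
import numpy as np

# Generic swarmalator toy model: agents that both move in space and carry a
# phase, with position and phase mutually coupled. Parameters are arbitrary.
N, J, K = 50, 1.0, 0.5        # J couples phase to attraction, K couples phases
dt, steps = 0.05, 2000
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, (N, 2))        # positions in the plane
theta = rng.uniform(0.0, 2 * np.pi, N)    # internal phases

for _ in range(steps):
    dx = x[None, :, :] - x[:, None, :]                 # x_j - x_i
    dist = np.linalg.norm(dx, axis=-1) + np.eye(N)     # pad diagonal, avoid 0/0
    dphase = theta[None, :] - theta[:, None]           # theta_j - theta_i
    attract = (1.0 + J * np.cos(dphase))[..., None] * dx / dist[..., None]
    repel = dx / (dist ** 2)[..., None]
    x += dt * (attract - repel).sum(axis=1) / N
    theta += dt * K * (np.sin(dphase) / dist).sum(axis=1) / N

# Phase-synchrony order parameter: 1 means all oscillators flash in unison.
print(abs(np.exp(1j * theta).mean()))
```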

Chemical engineers at the University of Pittsburgh Swanson School of Engineering have now designed a system of self-oscillating flexible materials that display a distinctive mode of dynamic self-organization. In addition to exhibiting the swarmalator behavior, the component materials mutually adapt their overall shapes as they interact in a fluid-filled chamber. These systems can pave the way for fabricating collaborative, self-regulating soft robotic systems.

Principal investigator is Anna C. Balazs, Distinguished Professor of Chemical and Petroleum Engineering and the John A. Swanson Chair of Engineering. Lead author is Raj Kumar Manna and co-author is Oleg E. Shklyaev, both post-doctoral associates.

“Self-oscillating materials convert a non-periodic signal into the material’s periodic motion,” Balazs explained. “Using our computer models, we first designed micron and millimeter sized flexible sheets in solution that respond to a non-periodic input of chemical reactants by spontaneously undergoing oscillatory changes in location, motion and shape. For example, an initially flat, single sheet morphs into a three-dimensional shape resembling an undulating fish tail, which simultaneously oscillates back and forth across the microchamber.”

The self-oscillations of the flexible sheets are powered by catalytic reactions in a fluidic chamber. The reactions on the surfaces of the sheet and chamber initiate a complex feedback loop: chemical energy from the reaction is converted into fluid flow, which transports and deforms the flexible sheets. The structurally evolving sheets in turn affect the motion of the fluid, which continues to deform the sheets.

“What is really intriguing is that when we introduce a second sheet, we uncover novel forms of self-organization between vibrating structures,” Manna adds. In particular, the two sheets form coupled oscillators that communicate through the fluid to coordinate not only their location and temporal pulsations, but also synchronize their mutual shape changes. This behavior is analogous to that of the tree frog swarmalators that coordinate their relative spatial location, and time of croaking, which also involves a periodic change in the frog’s shape (with an inflated or deflated throat).

“Complex dynamic behavior is a critical feature of biological systems,” Shklyaev says. “Stuff does not just come together and stop moving. Analogously, these sheets assemble in the proper time and space to form a larger, composite dynamic system. Moreover, this structure is self-regulating and can perform functions that a single sheet alone cannot carry out.”

“For two or more sheets, the collective temporal oscillations and spatial behavior can be controlled by varying the size of the different sheets or the pattern of catalyst coating on the sheet,” says Balazs. These variations permit control over the relative phase of the oscillations, e.g., the oscillators can move in-phase or anti-phase.
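
The in-phase versus anti-phase distinction can be illustrated with two generically coupled phase oscillators, where the sign of the coupling selects which locked state is stable. This is a standard toy model, not the hydrodynamic coupling in the paper.

```python
import numpy as np

def locked_phase_difference(coupling: float, steps: int = 20000, dt: float = 0.01) -> float:
    """Integrate two identical phase oscillators coupled through sin() and
    return the phase difference they settle into (0 = in-phase, pi = anti-phase).
    A generic illustration; the sheets in the paper couple through the fluid."""
    phi1, phi2 = 0.3, 2.0          # arbitrary initial phases
    for _ in range(steps):
        d1 = coupling * np.sin(phi2 - phi1)
        d2 = coupling * np.sin(phi1 - phi2)
        phi1 += dt * d1
        phi2 += dt * d2
    return abs((phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi)

print(locked_phase_difference(+1.0))   # -> ~0   (positive coupling: in-phase)
print(locked_phase_difference(-1.0))   # -> ~pi  (negative coupling: anti-phase)
```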

“These are very exciting results because the 2D sheets self-morph into 3D objects, which spontaneously translate a non-oscillating signal into ‘instructions’ for forming a larger aggregate whose shape and periodic motion are regulated by each of its moving parts,” she notes. “Our research could eventually lead to forms of bio-inspired computation — just as coupled oscillators are used to transmit information in electronics — but with self-sustained, self-regulating behavior.”

Self-oscillations of a passive sheet.

Quantitative digital microscopy with deep learning

by Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe in Applied Physics Reviews

An AI tool developed at the University of Gothenburg offers new opportunities for analysing images taken with microscopes. A study shows that the tool, which has already received international recognition, can fundamentally change microscopy and pave the way for new discoveries and areas of use within both research and industry.

The focus of the study is deep learning, a type of artificial intelligence (AI) and machine learning that we all interact with daily, often without thinking about it. Examples include Spotify suggesting a new song similar to ones we have previously listened to, or a mobile phone camera automatically finding the best settings and correcting the colours in a photo.

“Deep learning has taken the world by storm and has had a huge impact on many industries, sectors and scientific fields. We have now developed a tool that makes it possible to utilise the incredible potential of deep learning, with focus on images taken with microscopes,” says Benjamin Midtvedt, a doctoral student in physics and the main author of the study.

Deep learning can be described as a mathematical model used to solve problems that are difficult to tackle using traditional algorithmic methods. In microscopy, the great challenge is to retrieve as much information as possible from the data-packed images, and this is where deep learning has proven to be very effective.

The tool that Midtvedt and his research colleagues have developed involves neural networks learning to retrieve exactly the information that a researcher wants from an image by looking through a huge number of images, known as training data. The tool simplifies the process of producing training data compared with having to do so manually, so that tens of thousands of images can be generated in an hour instead of a hundred in a month.
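
The core idea is that images and their ground-truth labels are simulated together, so training data never has to be annotated by hand. The short sketch below illustrates that idea with a generic, library-free particle simulator; the real tool (DeepTrack 2.0) models optics and noise far more faithfully, and every parameter here is an arbitrary illustration.

```python
import numpy as np

def simulate_particle_image(size=64, noise=0.05, rng=np.random.default_rng()):
    """Render one synthetic 'microscope' image of a single particle and return
    the image together with its ground-truth position (the training label).
    Purely illustrative; DeepTrack 2.0 simulates the imaging physics properly."""
    y, x = rng.uniform(10, size - 10, 2)        # true particle position
    sigma = rng.uniform(1.5, 3.0)               # apparent particle radius
    yy, xx = np.mgrid[0:size, 0:size]
    image = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
    image += rng.normal(0.0, noise, image.shape)  # camera noise
    return image.astype(np.float32), np.array([y, x], dtype=np.float32)

# Thousands of labelled examples can be produced in minutes this way.
images, labels = zip(*(simulate_particle_image() for _ in range(10000)))
print(len(images), labels[0])
```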

“This makes it possible to quickly extract more details from microscope images without needing to create a complicated analysis with traditional methods. In addition, the results are reproducible, and customised, specific information can be retrieved for a specific purpose.”

For example, the tool allows the user to determine the size and material characteristics of very small particles and to easily count and classify cells. The researchers have already demonstrated that the tool can be used by industries that need to purify their emissions, since they can see in real time whether all unwanted particles have been filtered out.

The researchers are hopeful that in the future the tool can be used to follow infections in a cell and map cellular defence mechanisms, which would open up huge possibilities for new medicines and treatments.

“We have already seen major international interest in the tool. Regardless of the microscopic challenges, researchers can now more easily conduct analyses, make new discoveries, implement ideas and break new ground within their fields.”

Brief history of quantitative microscopy and particle tracking. (a)–(b) 1910–1950: the manual-analysis era. (a) Manually tracked trajectories of colloids in suspension from Perrin’s experiment, which convinced the world of the existence of atoms; the time resolution is 30 s. (b) Kappler manually tracked the rotational Brownian motion of a suspended micromirror to determine the Avogadro number. (c)–(e) 1951–2015: the digital-microscopy era. (c) Causley and Young developed a computerized microscope to count particles and cells using a flying-spot microscope and an analog analysis circuit. (d) Geerts et al. developed an automated method to track single gold nanoparticles on the membranes of living cells. (e) Crocker and Grier kickstarted modern particle tracking, achieving high accuracy with a largely setup-agnostic approach. (f)–(i) 2015–2020: the deep-learning-enhanced microscopy era. (f) Ronneberger et al. developed the U-Net, a convolutional-neural-network variant particularly suited to image segmentation that has been very successful in biomedical applications. (g) Helgadottir et al. developed software to track particles using convolutional neural networks (DeepTrack 1.0) and demonstrated that it can achieve higher tracking accuracy than traditional algorithmic approaches. (j) This article presents DeepTrack 2.0, which provides an integrated environment to design, train and validate deep-learning solutions for quantitative digital microscopy. Panels reprinted with permission from Kappler, Ann. Phys. 403, 233–256 (1931); Causley and Young, Nature 176, 453–454 (1955); Geerts et al., Biophys. J. 52, 775–782 (1987); Crocker and Grier, J. Colloid Interface Sci. 179, 298–310 (1996); and Ronneberger et al., Int. Conf. Med. Image Comput. Comput. Assist. Interv. 234–241 (2015).

Simulation of Stand-to-Sit Biomechanics for Robotic Exoskeletons and Prostheses with Energy Regeneration

by Brokoslaw Laschowski, Reza Sharif Razavian, John McPhee in IEEE Transactions on Medical Robotics and Bionics

Robotics researchers are developing exoskeletons and prosthetic legs capable of thinking and making control decisions on their own using sophisticated artificial intelligence (AI) technology.

The system combines computer vision and deep-learning AI to mimic how able-bodied people walk by seeing their surroundings and adjusting their movements.

“We’re giving robotic exoskeletons vision so they can control themselves,” said Brokoslaw Laschowski, a PhD candidate in systems design engineering who leads a University of Waterloo research project called ExoNet.

Exoskeleton legs operated by motors already exist, but users must manually control them via smartphone applications or joysticks.

“That can be inconvenient and cognitively demanding,” said Laschowski, also a student member of the Waterloo Artificial Intelligence Institute (Waterloo.ai). “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”

To address that limitation, the researchers fitted exoskeleton users with wearable cameras and are now optimizing AI computer software to process the video feed to accurately recognize stairs, doors and other features of the surrounding environment.

The next phase of the ExoNet research project will involve sending instructions to motors so that robotic exoskeletons can climb stairs, avoid obstacles or take other appropriate actions based on analysis of the user’s current movement and the upcoming terrain.
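
A hypothetical sketch of that vision-to-control loop, assuming an off-the-shelf CNN backbone and made-up class and mode names (this is not the ExoNet code), could look like the following.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative sketch only: classify each camera frame into an environment
# class, then map that class to a locomotion mode. Class names, modes and the
# backbone are assumptions, not the ExoNet implementation.
CLASSES = ["level_ground", "stairs_up", "stairs_down", "door"]
MODES = {"level_ground": "walk", "stairs_up": "stair_ascent",
         "stairs_down": "stair_descent", "door": "stop_and_wait"}

backbone = models.mobilenet_v2()                          # any small CNN works
backbone.classifier[1] = nn.Linear(backbone.last_channel, len(CLASSES))
backbone.eval()

@torch.no_grad()
def select_locomotion_mode(frame_bchw: torch.Tensor) -> str:
    """frame_bchw: one normalized RGB camera frame, shape (1, 3, 224, 224)."""
    logits = backbone(frame_bchw)
    environment = CLASSES[int(logits.argmax(dim=1))]
    return MODES[environment]

print(select_locomotion_mode(torch.randn(1, 3, 224, 224)))
```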

“Our control approach wouldn’t necessarily require human thought,” said Laschowski, who is supervised by engineering professor John McPhee, the Canada Research Chair in Biomechatronic System Dynamics. “Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons and prosthetic legs that walk for themselves.”

The researchers are also working to improve the energy efficiency of motors for robotic exoskeletons and prostheses by using human motion to self-charge the batteries.

Experimental quantum speed-up in reinforcement learning agents

by V. Saggio, B. E. Asenbeck, A. Hamann, T. Strömberg, P. Schiansky, V. Dunjko, N. Friis, N. C. Harris, M. Hochberg, D. Englund, S. Wölk, H. J. Briegel, P. Walther in Nature

Artificial intelligence is part of our modern life. A crucial question for practical applications is how fast such intelligent machines can learn. An experiment has answered this question, showing that quantum technology enables a speed-up in the learning process. The physicists achieved this result by using a photonic quantum processor, which operates on single photons, as the learning robot.

Robots solving computer games, recognizing human voices, or helping to find optimal medical treatments: those are only a few astonishing examples of what the field of artificial intelligence has produced in recent years. The ongoing race for better machines has led to the question of how and with what means improvements can be achieved. In parallel, huge recent progress in quantum technologies has confirmed the power of quantum physics, not only for its often peculiar and puzzling theories, but also for real-life applications. Hence the idea of merging the two fields: on one hand, artificial intelligence with its autonomous machines; on the other hand, quantum physics with its powerful algorithms.

Over the past few years, many scientists have started to investigate how to bridge these two worlds, and to study in what ways quantum mechanics can prove beneficial for learning robots, or vice versa. Several fascinating results have shown, for example, robots deciding faster on their next move, or the design of new quantum experiments using specific learning techniques. Yet an actual speed-up in a robot’s learning time had not been demonstrated, a key requirement for developing increasingly complex autonomous machines.

In an international collaboration led by Philip Walther, a team of experimental physicists from the University of Vienna, together with theoreticians from the University of Innsbruck, the Austrian Academy of Sciences, Leiden University and the German Aerospace Center, has for the first time experimentally demonstrated a speed-up in a robot’s actual learning time. The team made use of single photons, the fundamental particles of light, coupled into an integrated photonic quantum processor designed at the Massachusetts Institute of Technology. This processor served as the robot and implemented the learning tasks: the robot had to learn to route the single photons in a predefined direction. “The experiment could show that the learning time is significantly reduced compared to the case where no quantum physics is used,” says Valeria Saggio, first author of the publication.

In a nutshell, the experiment can be understood by imagining a robot standing at a crossroad, tasked with learning to always take the left turn. The robot learns by obtaining a reward when it makes the correct move. If the robot is placed in our usual classical world, it will try either a left or a right turn, and will be rewarded only if the left turn is chosen. In contrast, when the robot exploits quantum technology, the bizarre aspects of quantum physics come into play. The robot can now make use of one of its most famous and peculiar features, the so-called superposition principle. This can be intuitively understood by imagining the robot taking the two turns, left and right, at the same time. “This key feature enables the implementation of a quantum search algorithm that reduces the number of trials for learning the correct path. As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” says Hans Briegel, who developed the theoretical ideas on quantum learning agents with his group at the University of Innsbruck.
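
The scaling argument behind this speed-up can be sketched numerically: with N possible actions and a single rewarded one, a memoryless classical agent needs on the order of N trials, while Grover-style amplitude amplification needs on the order of the square root of N queries. The snippet below shows only this generic back-of-the-envelope comparison, not the photonic experiment itself.

```python
import numpy as np

def classical_expected_trials(n_actions: int) -> int:
    # Uniform random guessing without memory: the mean of a geometric
    # distribution with success probability 1/n_actions is n_actions.
    return n_actions

def grover_iterations(n_actions: int) -> int:
    # Number of amplitude-amplification rounds needed to reach near-certain
    # success when exactly one of n_actions is rewarded: ~ (pi/4) * sqrt(N).
    theta = np.arcsin(1.0 / np.sqrt(n_actions))
    return int(np.floor(np.pi / (4 * theta)))

for n in (4, 16, 64, 256):
    print(n, classical_expected_trials(n), grover_iterations(n))
```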

This experimental demonstration that machine learning can be enhanced by using quantum computing shows promising advantages when combining these two technologies. “We are just at the beginning of understanding the possibilities of quantum artificial intelligence” says Philip Walther, “and thus every new experimental result contributes to the development of this field, which is currently seen as one of the most fertile areas for quantum computing.”

HeadFi: Bringing Intelligence to All Headphones

by Xiaoran Fan, Longfei Shangguan, Siddharth Rupavatharam, Yanyong Zhang, Jie Xiong, Yunfei Ma, Richard Howard in ACM MobiCom ’21 Conference

How do you turn “dumb” headphones into smart ones? Rutgers engineers have invented a cheap and easy way by transforming headphones into sensors that can be plugged into smartphones, identify their users, monitor their heart rates and perform other services.

Their invention, called HeadFi, is based on a small plug-in headphone adapter that turns a regular headphone into a sensing device. Unlike smart headphones, regular headphones lack sensors. HeadFi would allow users to avoid having to buy a new pair of smart headphones with embedded sensors to enjoy sensing features.

“HeadFi could turn hundreds of millions of existing, regular headphones worldwide into intelligent ones with a simple upgrade,” said Xiaoran Fan, a HeadFi primary inventor. He is a recent Rutgers doctoral graduate who completed the research during his final year at the university and now works at Samsung Artificial Intelligence Center.

A peer-reviewed Rutgers-led paper on the invention, which results in “earable intelligence,” will be formally published in October at MobiCom 2021, the top international conference on mobile computing and mobile and wireless networking.

Headphones are among the most popular wearable devices worldwide and they continue to become more intelligent as new functions appear, such as touch-based gesture control, the paper notes. Such functions usually rely on auxiliary sensors, such as accelerometers, gyroscopes and microphones that are available on many smart headphones.

HeadFi turns the two drivers already inside all headphones into a versatile sensor, and it works by connecting headphones to a pairing device, such as a smartphone. It does not require adding auxiliary sensors and avoids changes to headphone hardware or the need to customize headphones, both of which may increase their weight and bulk. By plugging into HeadFi, a converted headphone can perform sensing tasks and play music at the same time.
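
As an illustration of one of the listed services, the generic pipeline below estimates a heart rate from any periodic physiological trace picked up by such a sensor, using standard band-pass filtering and peak detection. The filter cutoffs and the demo signal are illustrative assumptions; HeadFi’s own sensing circuit and algorithms are described in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(signal, fs):
    """Estimate beats per minute from a 1-D sensor trace sampled at fs Hz.
    Generic band-pass + peak-picking pipeline, not HeadFi's actual method;
    the cutoff frequencies simply bracket plausible human heart rates."""
    b, a = butter(2, [0.7, 3.5], btype="bandpass", fs=fs)   # ~42-210 bpm band
    filtered = filtfilt(b, a, signal)
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs)) # beats >= 0.3 s apart
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min

# Synthetic demo: a 1.2 Hz (72 bpm) pulse buried in noise.
fs = 200
t = np.arange(0, 30, 1 / fs)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)
print(round(estimate_heart_rate(trace, fs)))   # approximately 72
```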

The engineers conducted experiments with 53 volunteers using 54 pairs of headphones with estimated prices ranging from $2.99 to $15,000. HeadFi can achieve 97.2 percent to 99.5 percent accuracy on user identification, 96.8 percent to 99.2 percent on heart rate monitoring and 97.7 percent to 99.3 percent on gesture recognition.

Morphological and molecular breast cancer profiling through explainable machine learning

by Alexander Binder, Michael Bockmayr, Miriam Hägele, Stephan Wienert, Daniel Heim, Katharina Hellweg, Masaru Ishii, Albrecht Stenzinger, Andreas Hocke, Carsten Denkert, Klaus-Robert Müller, Frederick Klauschen in Nature Machine Intelligence

Researchers at Charité — Universitätsmedizin Berlin and TU Berlin as well as the University of Oslo have developed a new tissue-section analysis system for diagnosing breast cancer based on artificial intelligence (AI). Two further developments make this system unique: For the first time, morphological, molecular and histological data are integrated in a single analysis. Secondly, the system provides a clarification of the AI decision process in the form of heatmaps. Pixel by pixel, these heatmaps show which visual information influenced the AI decision process and to what extent, thus enabling doctors to understand and assess the plausibility of the results of the AI analysis. This represents a decisive and essential step forward for the future regular use of AI systems in hospitals.

Cancer treatment is increasingly concerned with the molecular characterization of tumor tissue samples. Studies are conducted to determine whether and/or how the DNA has changed in the tumor tissue as well as the gene and protein expression in the tissue sample. At the same time, researchers are becoming increasingly aware that cancer progression is closely related to intercellular cross-talk and the interaction of neoplastic cells with the surrounding tissue — including the immune system.

Although microscopic techniques enable biological processes to be studied with high spatial detail, they permit only a limited measurement of molecular markers. These markers are instead usually determined from proteins or DNA extracted from the tissue, so spatial detail is lost and the relationship between the markers and the microscopic structures is typically unclear. “We know that in the case of breast cancer, the number of immune cells that have migrated into the tumor tissue, known as lymphocytes, has an influence on the patient’s prognosis. There are also discussions as to whether this number has a predictive value — in other words, whether it enables us to say how effective a particular therapy is,” says Prof. Dr. Frederick Klauschen of Charité’s Institute of Pathology.

“The problem we have is the following: We have good and reliable molecular data and we have good histological data with high spatial detail. What we don’t have as yet is the decisive link between imaging data and high-dimensional molecular data,” adds Prof. Dr. Klaus-Robert Müller, professor of machine learning at TU Berlin. Both researchers have been working together for a number of years now at the national AI center of excellence the Berlin Institute for the Foundations of Learning and Data (BIFOLD) located at TU Berlin.

It is precisely this symbiosis which the newly published approach makes possible. “Our system facilitates the detection of pathological alterations in microscopic images. Parallel to this, we are able to provide precise heatmap visualizations showing which pixels in the microscopic image contributed to the diagnostic decision, and to what extent,” explains Prof. Müller. The research team has also succeeded in significantly further developing this process: “Our analysis system has been trained using machine learning processes so that it can also predict various molecular characteristics, including the condition of the DNA, the gene expression as well as the protein expression in specific areas of the tissue, on the basis of the histological images.”
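
The published system uses its own dedicated explanation method to produce these heatmaps. As a stand-in to show what pixel-wise attribution looks like in code, the sketch below uses simple gradient-times-input saliency with a placeholder model; it is not the authors’ pipeline.

```python
import torch
import torch.nn as nn

# Minimal pixel-attribution sketch: which input pixels most influence a
# classifier's output? Plain gradient-times-input saliency is used here as a
# simple stand-in for the explanation method in the published system; the
# model, input size and class index are placeholders.
model = nn.Sequential(                      # placeholder tissue classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

def saliency_heatmap(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """image: (1, 3, H, W). Returns an (H, W) map of per-pixel relevance."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return (image.grad * image).abs().sum(dim=1).squeeze(0)  # sum over channels

heatmap = saliency_heatmap(torch.rand(1, 3, 64, 64), target_class=1)
print(heatmap.shape)   # torch.Size([64, 64])
```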

Next on the agenda are certification and further clinical validations — including tests in tumor routine diagnostics. However, Prof. Klauschen is already convinced of the value of the research: “The methods we have developed will make it possible in the future to make histopathological tumor diagnostics more precise, more standardized and qualitatively better.”

Top row: morphological feature training database containing manually annotated cell types (breast carcinoma cells, stromal cells, lymphocytes, normal glands and so on) in different data modalities (brightfield and fluorescence). Subsequent classification not only allows for the prediction of cell types but also their precise spatial localization (heatmaps under ‘prediction’ indicate pixel-wise scores showing the likelihood of the presence of the respective cell types increasing from blue to red: red spots indicate cancer cells or lymphocytes). Bottom row: histological cancer images from TCGA in combination with molecular profiling data can be used to train and predict molecular features from morphological image data and, moreover, to identify spatial regions (cancer cell, stroma, TiLs) associated with the prediction of the molecular features. Integration/interpretation: merging spatial predictions allows for a correlation of molecular and morphological features.

Videos

Man-Machine Synergy Effectors, Inc. is a Japanese company working on an absolutely massive “human machine synergistic effect device,” which is a huge robot controlled by a nearby human using a haptic rig.

DARPA is making progress on its AI dogfighting program, with physical flight tests expected this year.

Ayato Kanada at Kyushu University wrote in to share this clever “dislocatable joint,” a way of combining continuum and rigid robots.

The DodgeDrone challenge revisits the popular dodgeball game in the context of autonomous drones. Specifically, participants will have to code navigation policies to fly drones between waypoints while avoiding dynamic obstacles. Drones are fast but fragile systems: as soon as something hits them, they will crash! Since objects will move towards the drone with different speeds and acceleration, smart algorithms are required to avoid them.

Here are a couple of retro robotics videos, one showing teleoperated humanoids from 2000, and the other showing a robotic guide dog from 1976.

Upcoming events

RoboSoft 2021 — April 12–16, 2021 — [Online Conference]
ICRA 2021 — May 30 – June 5, 2021 — Xi’an, China
DARPA SubT Finals — September 21–23, 2021 — Louisville, KY, USA
WeRobot 2021 — September 23–25, 2021 — Coral Gables, FL, USA

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum
