RT/ Robots sense human touch using camera and shadows

Robotics biweekly vol. 24, 28th January — 11th February

TL;DR

  • Researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot’s skin and classifies them with machine-learning software.
  • A novel artificial intelligence (AI) approach based on wireless signals could help to reveal our inner emotions, according to new research from Queen Mary University of London.
  • Sidewinders’ bellies are studded with tiny pits and have few, if any, of the tiny spikes found on the bellies of other snakes. The discovery includes a mathematical model linking these distinct structures to function.
  • Researchers have discovered how to make materials that snap and reset themselves, only relying upon energy flow from their environment. The discovery may prove useful for various industries that want to source movement sustainably, from toys to robotics, and is expected to further inform our understanding of how the natural world fuels some types of movement.
  • Researchers have developed the first compact 3D LiDAR imaging system that can match and exceed the performance and accuracy of the most advanced mechanical systems currently in use.
  • Researchers have developed and demonstrated for the first time a photonic digital-to-analog converter that operates without leaving the optical domain. Such novel converters can advance next-generation data-processing hardware with high relevance for data centers, 6G networks, artificial intelligence and more.
  • Boston Dynamics’ Spot robot is now armed.
  • PHASA-35, a 35m-wingspan solar-electric aircraft, successfully completed its maiden flight in Australia in February 2020. Designed to operate unmanned in the stratosphere, above the weather and conventional air traffic, PHASA-35 offers a persistent and affordable alternative to satellites combined with the flexibility of an aircraft, which could be used for a range of valuable applications including forest fire detection and maritime surveillance.
  • The Army Research Lab’s (ARL) Robotics Collaborative Technology Alliance (RCTA) is developing new planning and control algorithms for quadrupedal robots.
  • Engineered Arts’ latest Mesmer entertainment robot is Cleo. It sings, gesticulates, and even does impressions.
  • This week’s UPenn GRASP On Robotics seminar is by Maria Chiara Carrozza from Scuola Superiore Sant’Anna, on “Biorobotics for Personal Assistance — Translational Research and Opportunities for Human-Centered Developments.”
  • Check out upcoming robotics events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025. The market was forecast to pass the 100 billion U.S. dollar mark in 2020.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista

Latest News & Researches

ShadowSense

by Yuhan Hu, Sara Maria Bejarano, Guy Hoffman in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Soft robots may not be in touch with human feelings, but they are getting better at feeling human touch

Cornell University researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot’s skin and classifies them with machine-learning software.

The group’s paper, “ShadowSense: Detecting Human Touch in a Social Robot Using Shadow Image Classification,” was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. The paper’s lead author is doctoral student Yuhan Hu.

The new ShadowSense technology is the latest project from the Human-Robot Collaboration and Companionship Lab, led by the paper’s senior author, Guy Hoffman, associate professor in the Sibley School of Mechanical and Aerospace Engineering.

The technology originated as part of an effort to develop inflatable robots that could guide people to safety during emergency evacuations. Such a robot would need to be able to communicate with humans in extreme conditions and environments. Imagine a robot physically leading someone down a noisy, smoke-filled corridor by detecting the pressure of the person’s hand.

Rather than installing a large number of contact sensors — which would add weight and complex wiring to the robot, and would be difficult to embed in a deforming skin — the team took a counterintuitive approach. In order to gauge touch, they looked to sight.

“By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images,” Hu said. “We think there is interesting potential there, because there are lots of social robots that are not able to detect touch gestures.”

The prototype robot consists of a soft inflatable bladder of nylon skin stretched around a cylindrical skeleton, roughly four feet in height, that is mounted on a mobile base. Under the robot’s skin is a USB camera, which connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish between six touch gestures — touching with a palm, punching, touching with two hands, hugging, pointing and not touching at all — with an accuracy of 87.5 to 96%, depending on the lighting.
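
For readers who want a concrete picture of the classification step, here is a minimal sketch of a shadow-gesture classifier in Python/PyTorch. It is illustrative only: the network architecture, input resolution and preprocessing are assumptions, not the model described in the paper; only the six gesture labels come from the text above.

```python
# Minimal illustrative sketch of a ShadowSense-style classifier (not the authors' code).
# Assumed: 64x64 grayscale shadow frames and a small CNN; only the six labels come from the paper.
import torch
import torch.nn as nn

GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

class ShadowNet(nn.Module):
    def __init__(self, num_classes: int = len(GESTURES)):
        super().__init__()
        # Small convolutional stack over a single-channel shadow image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Classify one frame captured by the internal USB camera (random tensor as a stand-in).
model = ShadowNet()
frame = torch.rand(1, 1, 64, 64)
print(GESTURES[int(model(frame).argmax())])
```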

The robot can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. And the robot’s skin has the potential to be turned into an interactive screen.

By collecting enough data, a robot could be trained to recognize an even wider vocabulary of interactions, custom-tailored to fit the robot’s task, Hu said.

The robot doesn’t even have to be a robot. ShadowSense technology can be incorporated into other materials, such as balloons, turning them into touch-sensitive devices.

In addition to providing a simple solution to a complicated technical challenge, and making robots more user-friendly to boot, ShadowSense offers a comfort that is increasingly rare in these high-tech times: privacy.

“If the robot can only see you in the form of your shadow, it can detect what you’re doing without taking high fidelity images of your appearance,” Hu said. “That gives you a physical filter and protection, and provides psychological comfort.”

Deep learning framework for subject-independent emotion detection using wireless signals

by Ahsan Noor Khan, Achintha Avin Ihalage, Yihan Ma, Baiyang Liu, Yujie Liu, Yang Hao in PLOS ONE

A novel artificial intelligence (AI) approach based on wireless signals could help to reveal our inner emotions, according to new research from Queen Mary University of London

The study demonstrates the use of radio waves to measure heart rate and breathing signals and to predict how someone is feeling, even in the absence of visual cues such as facial expressions.

Participants were initially asked to watch a video selected by researchers for its ability to evoke one of four basic emotion types: anger, sadness, joy and pleasure. Whilst the individual was watching the video, the researchers emitted harmless radio signals, like those transmitted from any wireless system including radar or WiFi, towards the individual and measured the signals that bounced back off them. By analysing changes to these signals caused by slight body movements, the researchers were able to reveal ‘hidden’ information about an individual’s heart and breathing rates.
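
To make the signal-analysis step concrete, the sketch below band-pass filters a reflected-signal trace into breathing and heartbeat bands and reads off the dominant frequency in each. It is a simplified stand-in for the paper’s processing: the sampling rate, band edges and synthetic chest-motion signal are all assumed values for illustration.

```python
# Illustrative only: recover breathing and heart rates from a wireless reflection trace.
# The sampling rate, frequency bands, and synthetic signal are assumptions, not the study's values.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                                    # Hz, assumed sampling rate of the demodulated signal
t = np.arange(0, 60, 1 / fs)                 # one minute of data
# Synthetic chest-motion signal: 0.25 Hz breathing + 1.2 Hz heartbeat + noise.
signal = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
signal += 0.05 * np.random.randn(t.size)

def dominant_rate(x, low, high):
    """Band-pass x to [low, high] Hz and return the dominant frequency in cycles per minute."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, x)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
    band = (freqs >= low) & (freqs <= high)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

print("breathing rate ~", dominant_rate(signal, 0.1, 0.5), "per minute")
print("heart rate     ~", dominant_rate(signal, 0.8, 2.0), "per minute")
```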

Emotion detection process in which each participant is asked to watch emotion-evoking videos on the monitor while being exposed to radio waves.

Previous research has used similar non-invasive or wireless methods of emotion detection, however in these studies data analysis has depended on the use of classical machine learning approaches, whereby an algorithm is used to identify and classify emotional states within the data. For this study the scientists instead employed deep learning techniques, where an artificial neural network learns its own features from time-dependent raw data, and showed that this approach could detect emotions more accurately than traditional machine learning methods.
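
As a rough illustration of the deep-learning approach, a network that learns its own features from a raw, time-dependent signal might look like the sketch below. The layer sizes, input length and classification head are assumptions; only the four emotion classes come from the study.

```python
# Minimal sketch: a 1D convolutional classifier over raw time-dependent signals.
# Architecture and sequence length are illustrative assumptions, not the network in PLOS ONE.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "sadness", "joy", "pleasure"]     # the four classes used in the study

class EmotionNet(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.encoder = nn.Sequential(                  # learns features directly from the raw signal
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):                              # x: (batch, 1, samples)
        return self.head(self.encoder(x).squeeze(-1))

# "Subject-independent" means train/test splits are made by participant, so the model
# is scored only on people it has never seen during training.
model = EmotionNet()
trace = torch.randn(1, 1, 3000)                        # stand-in for one demodulated recording
print(EMOTIONS[int(model(trace).argmax())])
```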

Achintha Avin Ihalage, a PhD student at Queen Mary, said: “Deep learning allows us to assess data in a similar way to how a human brain would work looking at different layers of information and making connections between them. Most of the published literature that uses machine learning measures emotions in a subject-dependent way, recording a signal from a specific individual and using this to predict their emotion at a later stage.

“With deep learning we’ve shown we can accurately measure emotions in a subject-independent way, where we can look at a whole collection of signals from different individuals and learn from this data and use it to predict the emotion of people outside of our training database.”

Traditionally, emotion detection has relied on the assessment of visible signals such as facial expressions, speech, body gestures or eye movements. However, these methods can be unreliable as they do not effectively capture an individual’s internal emotions, and researchers are increasingly looking towards ‘invisible’ signals, such as the ECG (electrocardiogram), to understand emotions.

ECG signals detect electrical activity in the heart, providing a link between the nervous system and heart rhythm. To date, the measurement of these signals has largely been performed using sensors that are placed on the body, but recently researchers have been looking towards non-invasive approaches that use radio waves to detect these signals.

Methods to detect human emotions are often used by researchers involved in psychological or neuroscientific studies but it is thought that these approaches could also have wider implications for the management of health and wellbeing.

In the future, the research team plan to work with healthcare professionals and social scientists on public acceptance and ethical concerns around the use of this technology.

Ahsan Noor Khan, a PhD student at Queen Mary and first author of the study, said: “Being able to detect emotions using wireless systems is a topic of increasing interest for researchers as it offers an alternative to bulky sensors and could be directly applicable in future ‘smart’ home and building environments. In this study, we’ve built on existing work using radio waves to detect emotions and show that the use of deep learning techniques can improve the accuracy of our results.”

“We’re now looking to investigate how we could use low-cost existing systems, such as WiFi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment. This type of approach would enable us to classify emotions of people on an individual basis while performing routine activities. Moreover, we aim to improve the accuracy of emotion detection in a work environment using advanced deep learning techniques.”

Professor Yang Hao, the project lead added: “This research opens up many opportunities for practical applications, especially in areas such as human/robot interaction and healthcare and emotional wellbeing, which has become increasingly important during the current Covid-19 pandemic.”

Proposed deep neural network architecture for emotion classification

Mechanical diffraction reveals the role of passive dynamics in a slithering snake

by Perrin E. Schiebel, Jennifer M. Rieser, Alex M. Hubbard, Lillian Chen, D. Zeb Rocklin, Daniel I. Goldman in Proceedings of the National Academy of Sciences

Most snakes get from A to B by bending their bodies into S-shapes and slithering forward headfirst. A few species, however — found in the deserts of North America, Africa and the Middle East — have an odder way of getting around. Known as “sidewinders,” these snakes lead with their mid-sections instead of their heads, slinking sideways across loose sand.

Scientists took a microscopic look at the skin of sidewinders to see if it plays a role in their unique method of movement. They discovered that sidewinders’ bellies are studded with tiny pits and have few, if any, of the tiny spikes found on the bellies of other snakes.

“The specialized locomotion of sidewinders evolved independently in different species in different parts of the world, suggesting that sidewinding is a good solution to a problem,” says Jennifer Rieser, assistant professor of physics at Emory University and a first author of the study. “Understanding how and why this example of convergent evolution works may allow us to adapt it for our own needs, such as building robots that can move in challenging environments.”

Co-authors of the paper include Joseph Mendelson, a herpetologist and the director of research at Zoo Atlanta; evolutionary biologist Jessica Tingle (University of California, Riverside); and physicists Daniel Goldman (Georgia Tech) and co-first author Tai-De Li (City University of New York).

Rieser’s research interests bring together the physics of soft matter — flowable materials like sand — and organismal biology. She studies how animals’ surfaces interact with the flowable materials in their environments to get around. Insights from her research may lead to improvements in human technology.

Snakes, and other limbless locomotors, are particularly interesting to Rieser. “Even though snakes have a relatively simple body plan, they are able to navigate a variety of habitats successfully,” she says. Their long, flexible bodies are inspiring work on “snake” robots for everything from surgical procedures to search-and-rescue missions in collapsed buildings, she adds.

In a previous paper, Rieser and colleagues found that designing robots to move in serpentine ways may help them to avoid catastrophe when they collide with objects in their path.

Sidewinders offered her a chance to dig further into how nature has evolved ways to move across loose sand and other soft matter.

Most snakes tend to keep their bellies largely in contact with the ground as they slide forward, bending their bodies from their heads to their tails. A sidewinder, however, lifts its midsection off the ground, shifting it in a sideways direction.

Previous studies have hypothesized that sidewinding may allow a snake to move better on sandy slopes. “The thought is that sidewinders spread out the forces that their bodies impart to the ground as they move so that they don’t cause a sand dune to avalanche as they move across it,” Rieser explains.

For the current paper, Rieser and her colleagues investigated whether sidewinders’ skin might also play a role in their unique movement style.

They focused on three species of sidewinders, all of them vipers, in residence at zoos: The sidewinder rattlesnake (Crotalus cerastes), found in the deserts of the Southwestern United States and northern Mexico; and the Saharan horned viper (Cerastes cerastes) and the Saharan sand viper (Cerastes vipera), both from the deserts of north Africa.

Skins shed from the sidewinders were collected and scanned with atomic force microscopy, a technique that provides resolution at the atomic level, on the order of fractions of a nanometer. For comparison, they also scanned snake skins shed from non-sidewinders.

As expected, the microscopy revealed tiny, head-to-tail pointing spikes on the skin of the non-sidewinders. Previous research had identified these micro spikes on a variety of other slithering snakes.

The current study, however, found that the skin of sidewinders is different. The two African sidewinders had micro pits on their bellies and no spikes. The skin of the sidewinder rattlesnake was also studded with tiny pits, along with a few, much smaller, spikes — although far fewer spikes than those of the slithering snakes.

The researchers created a mathematical model to test how these different structures affect frictional interactions with a surface. The model showed that head-to-tail pointing spikes enhance the speed and distance of forward undulation but are detrimental to sidewinding.

“You can think about it like the ridges on corduroy material,” Rieser says. “When you run your fingers along corduroy in the same direction as the ridges there is less friction than when you slide your fingers across the ridges.”

The model also showed that the uniform, non-directional structure of the round pits enhanced sidewinding, but was not as efficient as spikes for forward undulation.
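
As a caricature of the distinction the model draws, the toy calculation below compares a direction-dependent friction law (spiked belly) with a direction-independent one (pitted belly). The coefficients and the cos²/sin² form are invented for illustration and are not the frictional model fitted in the paper.

```python
# Toy anisotropic-friction comparison; all coefficients are made up for illustration.
# It only encodes the qualitative distinction above: spiked skin feels different drag
# in different sliding directions, while pitted skin feels the same drag in every direction.
import numpy as np

def spiked_friction(theta):
    """Direction-dependent friction for a spiked belly.
    theta is the sliding direction relative to the head-to-tail axis (radians)."""
    mu_along, mu_across = 0.2, 0.5
    return mu_along * np.cos(theta) ** 2 + mu_across * np.sin(theta) ** 2

def pitted_friction(theta):
    """Isotropic friction for a pitted belly: the same in every direction."""
    return 0.35 + 0.0 * theta

forward = 0.0                 # forward undulation slides roughly along the body axis
sideways = np.pi / 2          # sidewinding pushes the belly mostly across the body axis

print("forward slither: spiked =", spiked_friction(forward), " pitted =", pitted_friction(forward))
print("sidewinding:     spiked =", spiked_friction(sideways), " pitted =", pitted_friction(sideways))
```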

The research provides snapshots at different points in time of convergent evolution — when different species independently evolve similar traits as a result of having to adapt to similar environments.

Rieser notes that American sandy deserts are much younger than those in Africa. The Mojave of North America accumulated sand about 20,000 years ago while sandy conditions appeared in the Sahara region at least seven million years ago.

“That may explain why the sidewinder rattlesnake still has a few micro spikes left on its belly,” she says. “It has not had as much time to evolve specialized locomotion for a sandy environment as the two African species, which have already lost all of their spikes.”

Engineers may also want to adapt their robot designs accordingly, Rieser adds. “Depending on what type of surface you need a robot to move on,” she says, “you may want to consider designing its surface to have a particular texture to enhance its movement.”

Stereotyped waveform of a desert snake.

Autonomous snapping and jumping polymer gels

by Yongjin Kim, Jay van den Berg, Alfred J. Crosby in Nature Materials

Imagine a rubber band that was capable of snapping itself many times over, or a small robot that could jump up a set of stairs propelled by nothing more than its own energy. Researchers at the University of Massachusetts Amherst have discovered how to make materials that snap and reset themselves, only relying upon energy flow from their environment. The discovery may prove useful for various industries that want to source movement sustainably, from toys to robotics, and is expected to further inform our understanding of how the natural world fuels some types of movement.

Al Crosby, a professor of polymer science and engineering in the College of Natural Sciences at UMass Amherst, and Yongjin Kim, a graduate student in Crosby’s group, along with visiting student researcher Jay Van den Berg from Delft University of Technology in the Netherlands, uncovered the physics during a mundane experiment that involved watching a gel strip dry. The researchers observed that when the long, elastic gel strip lost internal liquid due to evaporation, the strip moved. Most movements were slow, but every so often, they sped up. These faster movements were snap instabilities that continued to occur as the liquid evaporated further. Additional studies revealed that the shape of the material mattered and that the strips could reset themselves to continue their movements.

“Many plants and animals, especially small ones, use special parts that act like springs and latches to help them move really fast, much faster than animals with muscles alone,” says Crosby when explaining the study. “Plants like the Venus flytrap are good examples of this kind of movement, as are grasshoppers and trap-jaw ants in the animal world. Snap instabilities are one way that nature combines a spring and a latch and are increasingly used to create fast movements in small robots and other devices, as well as toys like rubber poppers. However, most of these snapping devices need a motor or a human hand to keep moving. With this discovery, there could be various applications that won’t require batteries or motors to fuel movement.”
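
The spring-and-latch picture can be illustrated with a generic snap-through toy model: a slowly varying drive (standing in here for evaporation) tilts a double-well energy landscape until the occupied well disappears and the state jumps abruptly to the other side. The potential and parameters below are textbook placeholders, not the gel mechanics analysed in the paper.

```python
# Generic snap-through toy model (illustrative only, not the paper's gel mechanics).
# Slowly tilting the double well E(x) = x**4/4 - x**2/2 + h*x eventually destroys the
# occupied minimum, and the state jumps: a "snap" driven by a slow external change.
import numpy as np

def equilibria(h):
    """Real roots of dE/dx = x**3 - x + h = 0."""
    roots = np.roots([1.0, 0.0, -1.0, h])
    return np.array([r.real for r in roots if abs(r.imag) < 1e-7])

x = 1.2                                   # start in the right-hand well
for h in np.linspace(-0.5, 0.5, 21):      # slow drive, standing in for evaporation
    roots = equilibria(h)
    nearest = roots[np.argmin(np.abs(roots - x))]
    if abs(nearest - x) > 0.5:            # the occupied well vanished: the state snaps
        print(f"snap at h = {h:+.2f}: x jumps from {x:+.2f} to {nearest:+.2f}")
    x = nearest
```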

Kim explains that after learning the essential physics from the drying strips, the team experimented with different shapes to find the ones most likely to react in expected ways and that would move repeatedly without any motors or hands resetting them. The team even showed that the reshaped strips could do work, such as climb a set of stairs on their own.

Crosby continues, “These lessons demonstrate how materials can generate powerful movement by harnessing interactions with their environment, such as through evaporation, and they are important for designing new robots, especially at small sizes where it’s difficult to have motors, batteries, or other energy sources.”

These latest results from Crosby and his group are part of a larger multidisciplinary university research initiative funded by the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and led by Sheila Patek, professor of biology at Duke University, that aims to uncover many similar mechanisms from fast-moving biological organisms and translate them into new engineered devices.

“This work is part of a larger multidisciplinary effort that seeks to understand biological and engineered impulsive systems that will lay the foundations for scalable methods for generating forces for mechanical action and energy storing structures and materials,” says Ralph Anthenien, branch chief, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “The work will have myriad possible future applications in actuation and motive systems for the Army and DoD.”

A universal 3D imaging sensor on a silicon photonics platform

by Christopher Rogers, Alexander Y. Piggott, David J. Thomson, Robert F. Wiser, Ion E. Opris, Steven A. Fortune, Andrew J. Compston, Alexander Gondarenko, Fanfan Meng, Xia Chen, Graham T. Reed, Remus Nicolaescu in Nature

Researchers in Southampton and San Francisco have developed the first compact 3D LiDAR imaging system that can match and exceed the performance and accuracy of the most advanced mechanical systems currently in use

3D LiDAR can provide accurate imaging and mapping for many applications; it is the “eyes” for autonomous cars and is used in facial recognition software and by autonomous robots and drones. Accurate imaging is essential for machines to map and interact with the physical world, but the size and cost of the technology currently needed have limited LiDAR’s use in commercial applications.

Now a team of researchers from Pointcloud Inc in San Francisco and the University of Southampton’s Optoelectronics Research Centre (ORC) have developed a new, integrated system which uses silicon photonic components and CMOS electronic circuits in the same microchip. The prototype they have developed is a low-cost solution that could pave the way to large-volume production of compact, high-performance 3D imaging cameras for use in robotics, autonomous navigation systems, mapping of building sites to increase safety, and in healthcare.

Graham Reed, Professor of Silicon Photonics within the ORC said, “LIDAR has been promising a lot but has not always delivered on its potential in recent years because, although experts have recognised that integrated versions can scale down costs, the necessary performance has not been there. Until now.

“The silicon photonics system we have developed provides much higher accuracy at distance compared to other chip-based LIDAR systems to date, and most mechanical versions, showing that the much sought-after integrated system for LIDAR is viable.”

Remus Nicolaescu, CEO of Pointcloud Inc, added: “The combination of high performance and low-cost manufacturing will accelerate existing applications in autonomy and augmented reality, as well as open new directions, such as industrial and consumer digital twin applications requiring high depth accuracy, or preventive healthcare through remote behavioural and vital signs monitoring requiring high velocity accuracy.

“The collaboration with the world class team at the ORC has been instrumental, and greatly accelerated the technology development.”

The latest tests of the prototype, published in the journal Nature, show that it has an accuracy of 3.1 millimetres at a distance of 75 metres.

Among the problems faced by previous integrated systems is the difficulty of providing a dense array of pixels that can be easily addressed; this has restricted them to fewer than 20 pixels, whereas the new system is the first large-scale 2D coherent detector array, consisting of 512 pixels. The research teams are now working to extend the pixel arrays and the beam-steering technology to make the system even better suited to real-world applications and to further improve performance.
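
For context on how a coherent (rather than pulsed, time-of-flight) detector measures distance, the sketch below assumes frequency-modulated continuous-wave (FMCW) ranging, in which range is proportional to the beat frequency between the outgoing chirp and its delayed echo. The chirp bandwidth and duration here are invented example values, not parameters reported in the Nature paper.

```python
# Illustrative FMCW range estimate from a beat-frequency measurement.
# Chirp bandwidth/duration and the echo delay are made-up example values.
c = 3e8                       # speed of light, m/s
B = 1e9                       # chirp bandwidth, Hz (assumed)
T = 10e-6                     # chirp duration, s (assumed)
slope = B / T                 # chirp rate, Hz/s

target_range = 75.0           # metres, matching the distance quoted in the article
delay = 2 * target_range / c  # round-trip time of flight
f_beat = slope * delay        # beat frequency seen by the coherent detector

# Invert the relationship: range = c * f_beat / (2 * slope)
estimated_range = c * f_beat / (2 * slope)
print(f"beat frequency: {f_beat / 1e6:.2f} MHz -> range: {estimated_range:.2f} m")
```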

Electronic Bottleneck Suppression in Next-Generation Networks with Integrated Photonic Digital-to-Analog Converters

by Jiawei Meng, Mario Miscuglio, Jonathan K. George, Aydin Babakhani, Volker J. Sorger in Advanced Photonics Research

Researchers at the George Washington University and the University of California, Los Angeles, have developed and demonstrated for the first time a photonic digital-to-analog converter that operates without leaving the optical domain. Such novel converters can advance next-generation data-processing hardware with high relevance for data centers, 6G networks, artificial intelligence and more.

Current optical networks, through which most of the world’s data is transmitted, as well as many sensors, require digital-to-analog conversion to link digital systems to analog components.

Using a silicon photonic chip platform, Volker J. Sorger, an associate professor of electrical and computer engineering at GW, and his colleagues have created a digital-to-analog converter that does not require the signal to be converted in the electrical domain, thus showing the potential to satisfy the demand for high data-processing capabilities while acting on optical data, interfacing to digital systems, and performing in a compact footprint, with both short signal delay and low power consumption.

“We found a way to seamlessly bridge the gap that exists between these two worlds, analog and digital,” Sorger said. “This device is a key stepping stone for next-generation data processing hardware.”
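
The arithmetic any DAC performs, photonic or electronic, is a binary-weighted sum of its input bits. The plain-Python sketch below shows that arithmetic only; how the weighting and summation are realised in the optical domain is the substance of the paper and is not captured here.

```python
# Binary-weighted digital-to-analog conversion: the arithmetic a DAC implements.
# (In the photonic version, the weighting and summation happen in the optical domain.)
def dac(bits, full_scale=1.0):
    """Convert a list of bits (MSB first) to an analog level between 0 and full_scale."""
    n = len(bits)
    code = sum(bit << (n - 1 - i) for i, bit in enumerate(bits))
    return full_scale * code / (2 ** n - 1)

print(dac([1, 0, 1, 1]))   # 4-bit code 1011 -> 11/15 of full scale ~ 0.733
```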

A schematic representation of the impact and potential uses of a photonic DAC in a 5G network. The photonic DAC would be used at the interface between the electronic and photonic platforms in both the information “fog” and “cloud” layers, such as in optical information processing, intelligent routing, label-data processing or sensor preprocessing at the edge of the network, and within servers in the cloud.

Boston Dynamics’ Spot Robot Is Now Armed

Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment.

As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step — the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.

Now, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

Although Boston Dynamics’ focus has been on dynamic mobility and legged robots, the company has been working on manipulation for a very long time. We first saw an arm prototype on an early iteration of Spot in 2016, where it demonstrated some impressive functionality, including loading a dishwasher and fetching a beer in a way that only resulted in a minor catastrophe. But we’re guessing that Spot’s arm can trace its history back to BigDog’s crazy powerful hydraulic face-arm, which was causing mayhem with cinder blocks back in 2013:

Spot’s arm is not quite that powerful (it has to drag cinder blocks along the ground rather than fling them into space), but you can certainly see the resemblance. Here’s the video that Boston Dynamics posted yesterday to introduce Spot’s new arm:

Videos

PHASA-35, a 35m-wingspan solar-electric aircraft, successfully completed its maiden flight in Australia in February 2020. Designed to operate unmanned in the stratosphere, above the weather and conventional air traffic, PHASA-35 offers a persistent and affordable alternative to satellites combined with the flexibility of an aircraft, which could be used for a range of valuable applications including forest fire detection and maritime surveillance.

Engineered Arts’ latest Mesmer entertainment robot is Cleo. It sings, gesticulates, and even does impressions.

The Army Research Lab’s (ARL) Robotics Collaborative Technology Alliance (RCTA) is developing new planning and control algorithms for quadrupedal robots.

Karen Liu: How robots perceive the physical world. A specialist in computer animation expounds upon her rapidly evolving specialty, known as physics-based simulation, and how it is helping robots become more physically aware of the world around them.

This week’s UPenn GRASP On Robotics seminar is by Maria Chiara Carrozza from Scuola Superiore Sant’Anna, on “Biorobotics for Personal Assistance — Translational Research and Opportunities for Human-Centered Developments.”

Upcoming events

HRI 2021 — March 8–11, 2021 — [Online Conference]
RoboSoft 2021 — April 12–16, 2021 — [Online Conference]
ICRA 2021 — May 30 – June 5, 2021 — Xi’an, China

MISC

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles

Science Robotics

Science Daily

IEEE Spectrum


Paradigm is an ecosystem that incorporates a venture fund, a research agency and an accelerator focused on crypto, DLT, neuroscience, space technologies, robotics, and biometrics — technologies that combined together will alter how we perceive reality.
