Do you know your heart’s favorite color?
AI does. It can now detect your nervous system’s signals as you use your device screen.
New developments in artificial intelligence, running on existing hardware and software, can fundamentally alter our use of screen-based technologies. It is now possible to detect signals from a screen user's nervous system using an ordinary webcam in selfie position and machine learning. The next aim is to adapt screen interactions to the activity of the user's autonomic nervous system.
Screens and stress
Some recent studies raise alarms about the effects of screen time on developing brains. Further, 81 percent of employees check their screens after hours, during overtime, and on weekends; 59 percent of Americans report feeling stressed; and more than one in ten employees call in sick because of stress. Employee stress has real effects on businesses' bottom lines. For example, Starbucks spends more each year on employee health benefits than on coffee beans. Additionally, high job demands account for an estimated $46 billion in annual excess health-care costs. The Centers for Disease Control and Prevention reported in 2016 that stress is the leading workplace health problem.
Computer work and on-screen visuals affect mental stress levels, performance, emotion, and biomarkers such as blood pressure and heart rate variability. Reducing or limiting screen use does not seem like a sustainable solution, as devices become ever more woven into daily life through foldable phones and hands-free augmented reality. Perhaps these new technologies can instead help alleviate the economic and mental burden stress places on the workforce.
Unchanged design development
One thing has not changed: the screen display choices themselves. The design of screen experiences remains constant. Apps, websites, and screen designs still follow a top-down approach from developers to users, without taking the users' well-being into consideration. This is where physiological computing and its applications have the potential to transform the way we interact with technological devices.
Physiological computing is a term used to describe the detection of a human's nervous-system activity and its integration into a technological interface. For instance, one application would be detecting how fast a screen user's heart is beating via a computational device and displaying it back to the user on screen.
It goes beyond biometrics, the technical term for body measurements and calculations.
For a few years now, physiological computing has been able to analyze how you are breathing and how your heart is beating, using subtle changes in the screen user's skin tone as the source of the physiological data.
Physiological computing can use facial recognition as well. Facial recognition technologies have lately been criticized across the political spectrum as inaccurate, invasive, and potentially chilling to Americans' privacy and free-expression rights. They have also been in the news for racial bias, since the algorithms are often trained on data resembling their developers, who are mostly white men. Physiological computing, though, focuses on an underlying similarity: all human blood is red. It uses facial recognition only as one step in the process, to narrow down which part of the image is a human face. After detecting the face, it analyzes the skin tone to read the nervous system of the screen user.
The activity of the nervous system is captured via web and smartphone cameras and processed with machine learning.
The video stream from the camera is split into frames with values of red, green, and blue. The green channel contains the most valuable information about how fast or slow the blood pulsates underneath the human skin. These fluctuations are invisible to the human eye, but machine learning can compare the green channel across frames to infer the heartbeat and extrapolate the heart rate. Cardiolens, a recent project by alumni of the MIT Media Lab, demonstrates the process using a mixed-reality approach.
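As a minimal, self-contained sketch of this green-channel idea (not any product's actual pipeline; the function name and the synthetic signal are illustrative), the heart rate can be read off as the dominant frequency of the averaged green values over a face region:

```python
import numpy as np

def estimate_heart_rate(green_signal, fps, min_bpm=40, max_bpm=180):
    """Estimate heart rate (bpm) from mean green-channel intensity of a
    face region over time, via the strongest frequency in a plausible band."""
    # Remove the slow baseline (lighting, average skin tone) so only the
    # pulse-induced fluctuation remains.
    signal = green_signal - np.mean(green_signal)
    # Frequency spectrum of the time series.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    # Restrict to physiologically plausible heart rates.
    band = (freqs >= min_bpm / 60.0) & (freqs <= max_bpm / 60.0)
    peak_freq = freqs[band][np.argmax(power[band])]
    return peak_freq * 60.0  # Hz -> beats per minute

# Synthetic example: a 72 bpm pulse riding on a steady skin tone,
# sampled at 30 frames per second for 20 seconds of video.
fps = 30
t = np.arange(0, 20, 1.0 / fps)
green = 120 + 0.5 * np.sin(2 * np.pi * (72 / 60.0) * t)
print(round(estimate_heart_rate(green, fps)))  # → 72
```

Real pipelines add face detection, per-frame spatial averaging, and noise filtering, but the core signal is this tiny periodic change in the green channel.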
The use of video streams offers a more accurate method than, for instance, applications using wireless signals, such as MIT CSAIL's device that also tries to understand emotions, or radar-sensor methods such as XeThru, which was just funded with a $15M round. Wireless and radar signals are active 'signals': they require actively sending a signal to the user and receiving it back for analysis. The back and forth takes time, and as the user moves, the analysis can become less accurate.
Video streams are passive 'signals.' The user can move, yet the signal for analysis, the skin tone, is continuously streamed via the camera to the computational device for processing. Milliseconds and movement matter, and the passive-signal method seems most promising for accurate, near-real-time analysis of the user's vital signs.
Transparency is key
Civil rights campaigners already label facial recognition as "perhaps the most dangerous surveillance technology ever developed" because of its apparent algorithmic biases, excessive use by law enforcement agencies, and invasion of privacy through opaque collaborations between Amazon and government agencies.
Amid such ongoing outrage, how will physiological computing be received publicly?
It is now up to the people researching and developing physiological computing to provide transparency about methods and motives.
Physiological computing can easily be used for far worse invasions of user privacy than what occurred with Facebook and Cambridge Analytica. Private data of Facebook users were sold for political purposes to target groups of undecided voters with political ads.
Physiological computing goes beyond private data and groups of people. It detects information on an individual level, unbeknownst to the users themselves: how their hearts beat and lungs breathe, and how those beats and breaths change while they perceive specific screen content.
Individually designed ads could adapt to each screen user and learn how to influence the heartbeat or breathing in whatever way is most favorable to the ad campaign.
How do you feel about AI knowing more about your heart’s beating than you?
Ads have long delivered their messages through visual concepts. Coca-Cola was once described as "a valuable Brain Tonic, and a cure for all nervous affections," and ads for high-sugar drinks such as Gatorade regularly feature cooling images of water. The concept is to use water imagery or descriptions of a cure to stimulate the visual cortex and activate the user's parasympathetic nervous system, also called the "rest and digest" system. It slows the heart rate and is stimulated, for instance, by water, a calming visual for most humans throughout our history. For most.
With current advertisements, consumers might go shopping and the memory of the ads might steer them toward buying a high-sugar drink, consciously or unconsciously assuming the product calms.
Physiological computing enables the targeting of ads on a personal level. It determines with much higher accuracy which visuals calm or stress a user's nervous system: whether the user is actually calmed by visuals of water, of fire, or of something else entirely.
More and more humans grow up in urban environments, with fewer natural environments embodied as memories. The loading wheel on a screen has been reported to cause more stress than a horror movie or intense traffic.
Pioneering physiological computing
At the forefront of the research and development are a few scientists and companies focusing on stress detection and mental well-being. Some of these groups use existing hardware and software for contactless sensing. Because RGB camera-based photoplethysmography requires facial recognition, it raises privacy concerns; it also suffers from illumination issues, as environmental light affects the analysis.
Thermography is much less affected by those constraints, and UCL researchers Nadia Bianchi-Berthouze and Youngjun Cho have been pioneering physiological computing with thermal imaging cameras in smartphones to detect breathing patterns and recognize mental stress.
Smartphones and/or RGB camera-based detection are used by multiple companies.
USU researchers Dr. Jake Gunther and Nate Ruben decided to patent their heart rate estimation tool and to start the company Photorithm, Inc. They launched the product “My Smart Beat,” a video baby monitor with breath detection “to identify more than 16 million shades of color to detect movement too small to be seen by the human eye” for the cost of $249.50.
The company Conscious has assembled a team to fundraise for its “platform to elevate human consciousness through AI-driven meditation, therapeutic techniques, and contactless biofeedback” using the smartphone to track breathing and HRV. Breathing.ai is also fundraising with early prototypes and offers patent-based “Adaptive Interfaces” to personalize screen experiences.
Imagine these fonts and colors changing and adapting to your breathing and your heart's beating.
The next step of physiological computing is to use machine learning to adapt the screen designs to change your breathing and heart rate.
The future is the personalization of screen experiences to the user's autonomic nervous system, so that screen designs can affect the attention, mood, performance, stress levels, and health of users.
And the personalization could be used for profit of companies, or as a win-win for companies and clients with the integration of mindful biofeedback for stress-reduction and to improve attention.
Serious concerns will be raised, and transparency from companies and researchers about their methods is essential. Hopefully, the applications that help users feel better and improve their attention through calming interfaces will be the ones that last.
Imagine the slowing of heart rates and stress reduction with calming interfaces.
Or imagine the increase of stress levels with interfaces designed to appeal to users' desires through personalized ads for a company's profit.
This is the future of technology being created.
It cannot be an either-or.
The use of this technology affects the unconscious nervous system of screen users.
The ethical and practical implications need to be addressed early — and consciously as a collective.
Humans breathe about 23,000 times per day, at 12 to 20 breaths per minute, and roughly 10,000 of those breaths happen in front of a screen given 11 hours of daily screen time.
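The arithmetic behind these figures checks out with a back-of-the-envelope calculation (16 breaths per minute is an illustrative midpoint of the 12-20 range):

```python
# Rough check of the breath counts, using average figures.
breaths_per_minute = 16                           # midpoint of the 12-20 range
daily_breaths = breaths_per_minute * 24 * 60      # breaths in a full day
screen_breaths = breaths_per_minute * 11 * 60     # breaths during 11 screen hours

print(daily_breaths)   # → 23040, about 23,000 per day
print(screen_breaths)  # → 10560, roughly 10,000 in front of a screen
```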
Each breath counts.
This is a vast opportunity to calm our collective breathing with calming screens, or to stress it further with desire-driven screens.
Will the future of technology be breathtakingly calming for our hearts?