Sorry, FER! Why AI May Not Be the Best Bet for Mental Health Diagnostics Yet, but a Designerly Way Might Be

Divya Pandey
4 min read · Apr 28, 2023


Photo by Joshua Fuller on Unsplash

So, here’s the scoop: I was super into mental health, like really digging into it. But then, I took a step back and started looking at it from a new perspective — that of human-computer interaction. And let me tell you, it was an eye-opener! Suddenly, I was seeing mental health in a whole new light, with a ton more empathy. And guess what? Technology could totally be the key to cracking this nut!

But are we there yet?

This article explores how future HCI research on AI-based diagnostic technologies can be informed by viewing FER models through the lenses of Mentalism and Interactionism.

The impact of mental health on an individual’s overall well-being, productivity, and satisfaction with life cannot be overstated. Numerous studies have shown that people with mental disorders display distinct physiological and behavioral markers, including differences in brain activity, eye contact, vocalizations, and facial expressions. These markers vary widely among individuals with different mental health issues, making it difficult to diagnose and treat these disorders using traditional psychological scales and assessments alone. However, advances in machine learning and artificial intelligence offer new opportunities to diagnose and understand the multidimensional psychological symptoms of mental disorders.

Facial Expression Recognition (FER) is one such technique that shows great promise for diagnosing and assessing mental health disorders.

FER is a technology that detects emotions from photos and videos by analyzing facial expressions. It combines biology and technology: rather than simply taking a picture of the face, it maps the face mathematically into a set of features, sometimes called a faceprint. The software then uses deep learning to classify the expression conveyed by a live capture or facial image against the expression categories it was trained on. Simply put, FER is like a face ID that reads your emotions rather than your identity!
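
To make that pipeline concrete, here is a minimal sketch of how such a system is commonly assembled: detect the face, crop and normalize it, and let a pretrained network assign an emotion label. The model file emotion_cnn.h5 is hypothetical; any small CNN trained on a public dataset such as FER-2013 could play that role.

```python
# A minimal FER-style sketch: detect a face, crop it, and classify the
# expression with a pretrained CNN. The model file "emotion_cnn.h5" is
# hypothetical; any 48x48 grayscale classifier trained on a dataset such
# as FER-2013 would slot in here.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
model = load_model("emotion_cnn.h5")  # hypothetical pretrained weights

def predict_expression(image_path: str) -> str:
    """Return the most likely expression label for the first detected face."""
    frame = cv2.imread(image_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face detected"
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]

print(predict_expression("portrait.jpg"))
```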

This article delves into the potential biases that could arise from facial recognition technology and how they could be prevented through design thinking.

While new technology can empower individuals by increasing their awareness of themselves, differences in facial features and expressions could produce biased readings from these un-empathetic machines.

To prevent this from becoming a widespread issue, the article suggests dissecting the underlying systems of facial recognition technology through a designer’s approach. By doing so, we can identify potential future problems and find solutions to prevent the negative consequences that often accompany the use of technology and artificial intelligence. The theory of Mentalism draws parallels between the human mind and modern AI machinery, suggesting that both use symbols and mental structures to comprehend meaning. By applying the theories of Mentalism to deconstruct how Artificial Intelligence algorithms work, we can develop more intuitive and powerful AI in the future.

Evaluating a person’s mental state by analyzing their facial expressions is a challenging task, not only for humans but also for artificial intelligence (AI) algorithms. The reason is that AI is susceptible to the biases already present in the world, and reproduces them at a much larger scale.

The data that algorithms employ is selected by humans, and how those algorithms’ results are applied is also determined by humans. Unfortunately, unconscious biases can easily creep into machine learning models when testing is inadequate and teams are not diverse enough. Consequently, AI systems perpetuate these biased models through automation. In the context of Facial Expression Recognition (FER) systems, the algorithms used can be feature-based, appearance-based, knowledge-based, or template-based. Each of these methods has its advantages and disadvantages. Broadly, they map facial landmarks (the “dots”) and the distances between them to distinguish one expression from another, and report results based on those measurements.
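
To illustrate the “dots and distances” idea, here is a small sketch of the geometric features a feature-based FER classifier might derive from facial landmarks. The 68-point landmark layout is the common one produced by tools like dlib; the landmark array itself is a placeholder here, and the downstream classifier is omitted.

```python
# Sketch of the "dots and distances" idea: given facial landmark
# coordinates (a hypothetical 68-point array, as produced by tools such
# as dlib), derive geometric features a feature-based classifier might use.
import numpy as np

def geometric_features(landmarks: np.ndarray) -> dict:
    """landmarks: (68, 2) array of (x, y) points in the common 68-point layout."""
    face_width = np.linalg.norm(landmarks[16] - landmarks[0])   # jaw corner to corner
    mouth_width = np.linalg.norm(landmarks[54] - landmarks[48])  # mouth corner to corner
    mouth_open = np.linalg.norm(landmarks[66] - landmarks[62])   # inner lip gap
    eye_open = np.linalg.norm(landmarks[41] - landmarks[37])     # left eye vertical gap
    brow_raise = np.linalg.norm(landmarks[19] - landmarks[37])   # brow to upper eyelid
    # Normalising by face width makes the features comparable across image scales.
    return {
        "mouth_width": mouth_width / face_width,
        "mouth_open": mouth_open / face_width,
        "eye_open": eye_open / face_width,
        "brow_raise": brow_raise / face_width,
    }

# A wide-open mouth and raised brows would push a downstream classifier
# toward "surprise"; the classifier itself (SVM, random forest, etc.) is omitted.
features = geometric_features(np.random.rand(68, 2) * 100)  # placeholder points
print(features)
```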

An inconsistent standard of equity

Machine learning may lead to inequitable results when it draws inaccurate conclusions. Biased training data, such as datasets that over-represent white individuals, can result in facial recognition algorithms that favor one group over another. This can lead to discrimination and oppression of minority groups. The challenge lies in detecting inadvertent biases before they become encoded in the software; a simple audit of the kind sketched below is one starting point.
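
One lightweight way to surface this kind of skew is to compare how often the model is right for each demographic group in a labelled test set. The column names ("group", "true_emotion", "predicted_emotion") and the toy data are hypothetical; the point is the per-group comparison, not the numbers.

```python
# Audit sketch: per-group accuracy on a labelled test set reveals whether
# the model performs noticeably worse for some groups than others.
import pandas as pd

results = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "true_emotion": ["happy", "sad", "neutral", "happy", "sad", "neutral"],
    "predicted_emotion": ["happy", "sad", "neutral", "happy", "neutral", "sad"],
})

results["correct"] = results["true_emotion"] == results["predicted_emotion"]
per_group_accuracy = results.groupby("group")["correct"].mean()
print(per_group_accuracy)
# A large gap between groups (here 1.00 vs 0.33) is the signal to revisit
# the training data before the bias is shipped inside the product.
```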

Incorrect interpretation of expressions

Research suggests that the mouth and eyes are crucial to how facial expressions are perceived. However, differences in these areas, such as puffiness or frown lines, can lead to incorrect readings by FER systems. Presenting individuals with incorrect results can itself harm their mental health, leading them to believe they have a serious condition when they do not. According to recent research, factors like illumination, pose, alignment, and occlusions also create variability in facial images, making expression recognition challenging.
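
Illumination is the variability source with the simplest common mitigation: normalize the face crop before it ever reaches the classifier. The sketch below shows one widely used preprocessing step (grayscale conversion plus histogram equalization); it is an illustration of that general practice, not a fix for pose or occlusion, and the input file name is hypothetical.

```python
# One common, simple mitigation for illumination variability: convert the
# face crop to grayscale, equalise its histogram, and resize it to the
# input size the classifier expects.
import cv2

def normalise_face(face_bgr, size=(48, 48)):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    equalised = cv2.equalizeHist(gray)          # spread pixel intensities
    return cv2.resize(equalised, size) / 255.0  # scale to [0, 1] for the model

crop = cv2.imread("face_crop.jpg")  # hypothetical pre-cropped face image
model_input = normalise_face(crop)
```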

In conclusion

To comprehend FER systems, one must consider their context, contributing factors, and the diversity they encompass. AI-based interventions need to be trusted both by the people who contribute their data and by the clinicians and policymakers who use AI to inform decisions. Establishing that trust and addressing algorithmic bias requires several actions. While completely redesigning these algorithms can be costly, HCI practitioners can intervene by modifying policies and language, which can have a significant impact on a broader scale.

Future advancements in FER technology have the potential to provide individuals with mental health support in their own homes, for example through mood-detection mirrors powered by technologies like Raspberry Pi.
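
As a rough idea of what such a mirror might run, here is a sketch of a capture loop on a Raspberry Pi with a camera exposed through OpenCV. The classify_frame() helper is hypothetical; in practice it would wrap an FER pipeline like the one sketched earlier in the article.

```python
# Sketch of the capture loop a "mood mirror" might run on a Raspberry Pi.
# classify_frame() is a placeholder for the actual FER model call.
import time
import cv2

def classify_frame(frame) -> str:
    """Placeholder for the FER model call; returns a label like 'happy'."""
    return "neutral"

camera = cv2.VideoCapture(0)  # default camera device on the Pi
try:
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        mood = classify_frame(frame)
        print(f"Current mood estimate: {mood}")  # a real mirror would render this on its display
        time.sleep(5)  # sample every few seconds rather than every frame
finally:
    camera.release()
```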

Precise and automated detection systems can facilitate prompt and accurate diagnosis and point people toward the best resources for assistance. Additionally, examining FER systems through the lens of Mentalism highlights the impact of societal biases on algorithms and the interconnection between AI and the human brain’s responses to the external world.


Written by Divya Pandey

Designer with a multifaceted background in interactive media, graphics, and textiles, starting to venture into the world of UX.