Artificial Intelligence on Human Testing: Possible Future?

During the past week, I stumbled upon an article that caught my interest. It was about a game called “Go”.

Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent. It is an extremely complex game that demands deep strategy, since the number of possible games is vast (around 10¹⁷⁰, compared with an estimated 10¹²⁰ in chess).

In the article, an artificial intelligence called “AlphaGo” played Go against Lee Se-dol, a South Korean professional and one of the best Go players in the world. AlphaGo is a computer program developed by Google DeepMind to play the board game Go. Its algorithm uses a combination of neural networks, machine learning, and Monte Carlo tree search (MCTS), together with extensive training from human and computer play. In the five-game match, AlphaGo won an outstanding four games and became the first computer Go program to beat a professional human Go player without handicaps.
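To give a feel for the search technique mentioned above, here is a minimal sketch of Monte Carlo tree search. It plays single-pile Nim (take 1 or 2 stones; whoever takes the last stone wins) instead of Go, and uses purely random rollouts instead of a learned value network, so it illustrates only the selection/expansion/simulation/backpropagation loop, not AlphaGo's actual implementation.

```python
import math
import random

class Node:
    """One state in the search tree: `stones` left, current player to move."""
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones
        self.parent = parent
        self.move = move              # move (1 or 2) that produced this state
        self.children = []
        self.visits = 0
        self.wins = 0.0               # wins for the player who just moved

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.stones and m not in tried]

    def best_child(self, c=1.4):
        # UCB1: balance exploitation (win rate) against exploration
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout_wins(stones):
    """Random playout; True if the player to move at `stones` takes the last stone."""
    to_move_wins = True
    while stones > 0:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        if stones == 0:
            return to_move_wins
        to_move_wins = not to_move_wins

def mcts_best_move(stones, iterations=3000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes via UCB1
        while not node.untried_moves() and node.children:
            node = node.best_child()
        # 2. Expansion: add one unexplored child, if any
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: score from the viewpoint of the player who just moved
        if node.stones == 0:
            result = 1.0              # they took the last stone and won
        else:
            result = 0.0 if rollout_wins(node.stones) else 1.0
        # 4. Backpropagation: flip the perspective at every level
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    # The most-visited child of the root is the recommended move
    return max(root.children, key=lambda ch: ch.visits).move
```

Even this bare version finds the optimal Nim strategy (always leave a multiple of three stones); AlphaGo's leap was replacing the random rollouts and uniform move choice with neural networks trained on human and self-play games.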

I was astonished by the evolution of computer technology and wondered if this meant we were a step closer to developing an artificial intelligence that can diagnose psychological disorders. This idea made me think of a Disney movie called “Big Hero 6”. It follows the adventures of Hiro, the 14-year-old protagonist, and Baymax, a robot built as a personal health care companion. In the movie, Baymax can diagnose and treat physical illnesses, whether internal or external, by asking the patient numerous questions and measuring their hormone levels. That made me wonder: what if such a robot could assess human behaviors, or even diagnose psychological disorders?

However, this is not as easy as with physical illnesses, since psychological disorders do not always have explicit measures on which to base a conclusion. Thus, a categorical approach is often used to aid diagnosis.

The DSM (Diagnostic and Statistical Manual of Mental Disorders) is a categorical system that helps classify psychopathological disorders. For example, to be diagnosed with antisocial personality disorder (APD) under DSM-5, a patient must show a pervasive pattern of disregard for and violation of the rights of others since age 15, indicated by at least three of seven listed behaviors (such as deceitfulness, impulsivity, and lack of remorse), and must be at least 18 years old, with evidence of conduct disorder before age 15.

As shown above, there is a list of specific symptoms one must meet in order to be diagnosed with the disorder. Since it is very difficult for clinicians to synthesize the overwhelming amount of information they gather from their patients, these criteria (provided by DSM-5) help them organize the information and narrow down the symptoms. Moreover, because the DSM is widely used among clinicians, it eases communication between them when discussing possible treatments for patients.
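To make the categorical idea concrete, here is a toy sketch of how such a checklist could be encoded in software. The symptom names and the threshold of three loosely paraphrase DSM-5's Criterion A for APD, but the code is only an illustration of rule-based diagnosis, not a clinical tool.

```python
# A toy illustration of categorical diagnosis: the DSM-style checklist
# reduces to a rule -- at least `threshold` of the listed symptoms, plus
# an age requirement. Symptom names loosely paraphrase DSM-5's Criterion A
# for antisocial personality disorder; this is NOT a clinical tool.
APD_SYMPTOMS = {
    "failure_to_conform_to_norms",
    "deceitfulness",
    "impulsivity",
    "irritability_or_aggressiveness",
    "reckless_disregard_for_safety",
    "consistent_irresponsibility",
    "lack_of_remorse",
}

def meets_apd_criteria(observed_symptoms, age, threshold=3):
    """Return True iff the categorical rule is satisfied."""
    matched = APD_SYMPTOMS & set(observed_symptoms)
    return age >= 18 and len(matched) >= threshold

# The rigidity is visible immediately: two matching symptoms means no
# diagnosis, three means diagnosis, with nothing in between.
print(meets_apd_criteria({"deceitfulness", "impulsivity"}, age=30))
print(meets_apd_criteria({"deceitfulness", "impulsivity",
                          "lack_of_remorse"}, age=30))
```

The hard cutoff in `meets_apd_criteria` is exactly the trade-off categorical systems make: it keeps diagnosis consistent and communicable between clinicians, at the cost of discarding everything about the patient that the checklist does not ask about.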

However, the criteria needed to diagnose a patient change continually as scientists come up with new findings every day. For example, the diagnostic criteria for APD in DSM-IV are very different from those in DSM-5.

Due to this difference in criteria, patients diagnosed under different versions of the DSM may receive different diagnoses. Furthermore, because the DSM is composed of a strict and narrow range of criteria, a patient’s specific information can be lost. Even worse, a patient may fail to be diagnosed with a disorder at all because of a human error, when a doctor concludes that the patient did not sufficiently meet the criteria.

So what if artificial intelligence could diagnose psychological disorders? As long as it is carefully programmed, it would not make mistakes or be biased the way humans are. It would also be able to store and process enormous amounts of information that humans cannot. Thus, the issues with the DSM pointed out earlier would not arise. And if everyone were assisted by a robot built as a personal health care companion, like Baymax, problems such as non-adherence to treatment could also be prevented.

However, we should not forget the major flaw: artificial intelligence cannot process human emotions. As we learned from Alan Turing’s test of a machine’s ability to exhibit intelligent behavior equivalent to a human’s, these machines only do what they are programmed to do. An artificial intelligence may be intelligent, but that does not mean it has a mind, consciousness, or intentionality; simply put, that it can “think”. This raises ethical questions. Much more research and effort is needed before we can put such systems to use in our daily lives.

Written by JinHyuk Yang.