How Do We Test For Strong AI?

Isaac
SERRI Technologies
3 min read · Jul 3, 2018


AI is often portrayed as purely logical, lacking the sometimes unnecessary emotions that most humans experience. However, many scientists argue that for an AI to be truly intelligent, it would need to understand human emotions in order to function effectively in human society. There is no doubt that intelligence, emotions, and consciousness all play a role in what it means to be human. After all, these are the attributes that researchers are trying to emulate in strong AI. If a strong AI has a concept of self-identity, it would mean we have succeeded in designing an AI capable of independent thought roughly equivalent to that of a human being. But how do we test for that?

A Simple Thought Experiment

Suppose we have two AI subjects that are identical to each other (Subject A and Subject B), and we place them in separate, isolated environments.

Both AI subjects respond to the various stimuli present in their respective environments and learn and adapt accordingly. After a suitable amount of time, the subjects are removed from their environments and asked a series of questions about the stimuli they encountered. The first set of questions is basic and logical, covering things like the sound, light, and temperature; these are followed by less logical, more opinion-based questions such as 'Did you like being in a room by yourself?' This allows us to probe deeper into the identity, or lack thereof, of each AI subject.

After all the questions are answered, the memories of both subjects are wiped clean, returning them to the state they were in before the experiment started. Now Subject A and Subject B switch environments.

After a suitable time, the subjects are asked the same set of questions as before. Two possible scenarios can arise as a result (a minimal simulation of the full protocol is sketched after the list below).

  1. Subject A's and Subject B's responses match: being exposed to the same environment and stimuli leads to the same responses.
  2. Subject A's and Subject B's responses do not match: each subject had a different experience, at which point the implications are profound.
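
To make the protocol concrete, here is a minimal simulation sketch in Python. The Agent class, its experience/answer/wipe methods, and the sample environments are illustrative assumptions rather than a description of any real strong-AI system; the point is only to show the expose, question, wipe, swap, re-question, and compare loop.

```python
# Hypothetical sketch of the two-subject protocol described above.
# The Agent class is a toy stand-in, not an implementation of a strong AI.
import random
from copy import deepcopy


class Agent:
    """Toy subject; identical seeds mean identical initial state."""

    def __init__(self, seed: int):
        self.rng = random.Random(seed)
        self.memory = []                     # traces left by the environment

    def experience(self, environment: dict) -> None:
        """Learn and adapt from the stimuli in the environment."""
        for stimulus, value in environment.items():
            self.memory.append((stimulus, value))

    def answer(self, question: str) -> str:
        """Factual questions are answered from memory; opinion questions fall back on internal state."""
        for stimulus, value in self.memory:
            if stimulus in question:
                return str(value)
        return self.rng.choice(["yes", "no"])  # placeholder for an opinion-level response

    def wipe(self, snapshot: "Agent") -> None:
        """Memory wipe: restore the pre-experiment state."""
        self.__dict__ = deepcopy(snapshot.__dict__)


def run_protocol(env_a: dict, env_b: dict, questions: list) -> bool:
    """Return True if the subjects' answers match whenever they were in the same environment."""
    a, b = Agent(seed=0), Agent(seed=0)          # two identical subjects
    a0, b0 = deepcopy(a), deepcopy(b)            # snapshots used for the memory wipe

    a.experience(env_a); b.experience(env_b)     # phase 1
    first = [(a.answer(q), b.answer(q)) for q in questions]

    a.wipe(a0); b.wipe(b0)                       # wipe memories
    a.experience(env_b); b.experience(env_a)     # phase 2: environments swapped
    second = [(a.answer(q), b.answer(q)) for q in questions]

    # Scenario 1: A's answers about environment A equal B's answers about
    # environment A, and likewise for environment B.
    return first == [(b_ans, a_ans) for a_ans, b_ans in second]


questions = ["What was the temperature?", "Did you like being in a room by yourself?"]
env_a = {"temperature": 21, "light": "dim"}
env_b = {"temperature": 27, "light": "bright"}
print("Responses match:", run_protocol(env_a, env_b, questions))
```

Note that a deterministic toy agent like this one will always land in the first scenario; the interesting question is whether a genuine strong AI would.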

The second scenario is extremely interesting because even though the subjects started from the same blank slate, they developed unique identities. But let's go further: suppose we have ten strong AI subjects, all designed the same way so that there is no way to distinguish between them. We randomly choose a subject to enter environment A for a certain amount of time, and repeat this until every subject has been in environment A.

After all the subjects have been through the environment, they are asked the same questions we asked in the first experiment. If the questions are answered differently, then each AI has a unique sense of individuality. But if all of the answers are the same, the viability of strong AI is brought into question: it raises serious doubts about whether the AI truly has a self-identity, and therefore whether it has a level of consciousness on par with humans. Is it possible for all the responses to be the same and for the strong AI to still have a self-identity? No, and the reason is simple: no two humans would answer the questions identically when placed in the same environment. It should be noted that if we had an objective, precise definition of intelligence and consciousness, we could find ways to measure them directly in a controlled setting. This thought experiment was presented to illustrate the challenges in finding a true test for strong AI.
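
For the ten-subject version, a similarly hedged sketch (reusing the hypothetical Agent class, env_a, and questions from the snippet above) simply checks whether the subjects' answer sets diverge.

```python
def identity_test(subjects, environment, questions):
    """Return True if at least two subjects produce different answer sets."""
    answer_sets = set()
    for subject in subjects:
        subject.experience(environment)
        answer_sets.add(tuple(subject.answer(q) for q in questions))
    return len(answer_sets) > 1


subjects = [Agent(seed=0) for _ in range(10)]   # ten indistinguishable subjects
# With the deterministic toy Agent, every subject answers identically, which is
# exactly the outcome the article flags as casting doubt on self-identity.
print("Unique identities observed:", identity_test(subjects, env_a, questions))
```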

The information in this article was taken from the book “Dreams of Paradise” by Elliott Zaresky-Williams, our Chief AI Scientist. Be sure to check it out!
