Practical Challenges in AI Development

Isaac
Published in SERRI Technologies
Jul 3, 2018

As developments in AI advance, we constantly ask how we can build a truly strong AI. But we should ask a further question: how should we test such candidates? These are some of the challenges researchers and scientists stumble upon when trying to build AI. Because the technology is advancing at such a rapid pace, we see a phenomenon called the ‘AI Effect’ take place. The AI Effect occurs when a superior algorithm that solves a complicated task is initially viewed by some as AI. That view is short-lived, however, because as technology progresses the algorithm becomes outdated and is dismissed as merely ‘clever computer science’. A truly strong AI would be immune to the AI Effect: it would be able to ‘self-update’ and stay at the forefront of what we consider AI at any given time. Of course, this raises the question of how we can effectively test an AI candidate to show that it is indeed immune to the AI Effect. To answer that question, we must first explore two well-known thought experiments: the Chinese Room Argument and the Turing Test.

First, let’s start with the Chinese Room Argument, in which someone who knows no Chinese is placed inside a room and given a rulebook for producing written Chinese responses to the Chinese messages passed in.

By following the rules, the person inside the room can produce a convincing reply to any Chinese message. To a person outside the room, it would appear that the person inside understands Chinese. This is used to demonstrate that an AI candidate could simulate ‘mental states’ and appear to behave intelligently (i.e., by following the rules) without achieving actual sentience. The main flaw is that the experiment assumes the people outside the room can effectively tell the difference between simulated understanding and genuine understanding, which is something that comes up again in the Turing Test, discussed next.
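To make the rule-following concrete, here is a minimal Python sketch, assuming the rulebook can be modeled as a simple lookup table; the entries below are invented for illustration only. The responder produces fluent-looking replies while understanding nothing.

```python
# A toy "Chinese Room": the responder follows a rulebook (a lookup table)
# and produces fluent-looking replies without any understanding of Chinese.
# The rulebook entries are made up for illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room_reply(message: str) -> str:
    """Return the scripted reply for a message, exactly as the rulebook says.

    The 'person in the room' never interprets the symbols; they only match
    patterns and copy out the prescribed response.
    """
    return RULEBOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    # From outside, the replies look like competence; inside, it is pure lookup.
    print(room_reply("你好吗？"))
    print(room_reply("你会说中文吗？"))
```

Any system whose competence reduces to this kind of lookup can appear intelligent from the outside while having no grasp of the symbols it manipulates.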

As many of you know, in 1950 Alan Turing proposed the ‘Turing Test’, which tests whether a human judge can tell, through conversation alone, if they are interacting with another human or with a machine. If the judge cannot tell the two apart, the AI candidate has passed the test and is deemed to exhibit intelligent behavior.
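As a rough sketch of the protocol, here is a simplified session in Python; the canned responders and the naive judge are placeholders invented for illustration, not real agents or a standardized procedure.

```python
import random

def human_respond(prompt: str) -> str:
    # Stand-in for a real person at a keyboard.
    return f"Honestly, '{prompt}' depends on the day."

def machine_respond(prompt: str) -> str:
    # Stand-in for the chatbot under evaluation.
    return f"'{prompt}' is an interesting topic. Could you elaborate?"

def run_turing_test(questions, judge_guess) -> bool:
    """Run one blinded session; return True if the machine escapes detection."""
    # Randomly assign hidden labels so the judge cannot rely on ordering.
    responders = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(responders)
    labels = dict(zip(("X", "Y"), responders))

    # The judge sees only labeled transcripts, never the participants.
    transcript = {
        label: [(q, respond(q)) for q in questions]
        for label, (_, respond) in labels.items()
    }
    guess = judge_guess(transcript)  # the label the judge believes is the machine
    return labels[guess][0] != "machine"

if __name__ == "__main__":
    # A judge that guesses at random; a careful judge would read the transcripts.
    naive_judge = lambda transcript: random.choice(list(transcript))
    print("Machine passed:", run_turing_test(["What do you dream about?"], naive_judge))
```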

However, the Turing Test has two main flaws as a test of intelligence. The first is that it measures intelligence through a single skill: conversation. Today, there are algorithms that have been written specifically to pass the test, yet the systems running them are not intelligent. These algorithms are no different from a chess program that can only perform one predetermined task.

The second flaw is the assumption that there is a distinction between simulated language and ‘real’ language. The distinction between simulation and reality can only be made if the intelligent agent (human or AI candidate) can accurately differentiate between the two. If the intelligent agent cannot distinguish between them, then from the perspective of that agent there is no difference between simulation and reality. What argument can we make that the human wasn’t simulating the machine? Does it matter who simulates whom? If multiple people who took the test believed the machine had passed, how would we argue that it didn’t?

A true test of AI shouldn’t be based on one task alone; rather, it should be broken down into a suite of smaller tests. A key challenge in developing a general test for strong AI is that the AI may not be confined to a humanoid robot, as we see in movies such as Ex Machina; the test must also apply to an algorithm running on a computer the size of a cellphone. Either way, this suite of tests should include linguistics as well as tests for self-awareness, because without these capabilities we cannot declare a candidate to be strong AI. A strong AI should be capable of establishing connections between different sets of data that were not previously programmed into the candidate. It should have the ability to begin, and continue, to learn, create, and build a knowledge base. Proper methods of testing strong AI are still being researched, and developing them is without a doubt a challenge.
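To give a sense of how such a suite of tests might be organized, here is a hypothetical Python skeleton; the individual checks and the candidate interface (converse, reflect, connect) are assumptions made up for illustration, since no accepted benchmark for strong AI exists.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubTest:
    name: str
    check: Callable[[object], bool]  # takes a candidate, returns pass/fail

def linguistics_check(candidate) -> bool:
    # Placeholder: e.g., open-ended dialogue graded by blinded human judges.
    return candidate.converse("Describe your morning.") != ""

def self_awareness_check(candidate) -> bool:
    # Placeholder: can the candidate report on its own prior behavior?
    return candidate.reflect() is not None

def novel_connection_check(candidate) -> bool:
    # Placeholder: relate two datasets it was never explicitly programmed to link.
    return candidate.connect("weather logs", "crop yields") is not None

BATTERY = [
    SubTest("linguistics", linguistics_check),
    SubTest("self-awareness", self_awareness_check),
    SubTest("novel connections", novel_connection_check),
]

def evaluate(candidate) -> dict:
    """Run every sub-test; a strong-AI candidate must pass all of them."""
    return {t.name: t.check(candidate) for t in BATTERY}

if __name__ == "__main__":
    class DummyCandidate:
        """A trivial stand-in so the battery can be exercised end to end."""
        def converse(self, prompt): return "It was quiet."
        def reflect(self): return "I answered one question so far."
        def connect(self, a, b): return f"possible link between {a} and {b}"

    print(evaluate(DummyCandidate()))
```

The point of the structure is that no single check is decisive; a candidate that passes only the linguistics test is back in chess-program territory.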

The information in this article was taken from the book “Dreams of Paradise” by Elliott Zaresky-Williams, our Chief AI Scientist. Be sure to check it out!
