Diary 2016 (March 10th) - Rudimentary musings on AI
Yesterday Google DeepMind's AlphaGo program defeated the world champion of Go in the first game of a five-game match. Last October, the program was pitted against the reigning European champion and won 5-0. That was the first ever victory of an AI over a professional Go player on even terms. It is being hailed as a major breakthrough in AI research.
Go is vastly more complex than chess: it has around 2x10¹⁷⁰ legal positions, while the estimated number of atoms in the observable universe is 10⁸⁰. Deep Blue beat Kasparov in 1997 by using brute force to calculate six to twenty moves ahead; here, that is not feasible. AlphaGo doesn't just learn from existing data or try to calculate all possible moves. It learns by playing against itself, generating its own training data. It also exploits the processing power of GPUs, originally designed for rendering images in graphics-intensive applications like games. If applicable in other fields, this could lead to what sounds to me like a more genuine form of 'learning'.
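The self-play idea can be illustrated with a toy sketch. This is plain tabular learning on a trivial stick-pile game, nothing like AlphaGo's actual neural-network-plus-tree-search system, and every name in it is my own stand-in; but it shows a program generating its own training data by playing against itself:

```python
import random

# Toy illustration of learning by self-play: both sides share one value
# table and improve by playing against each other. The game: players
# alternately take 1-3 sticks from a pile; whoever takes the last stick wins.

def train(n_sticks=21, episodes=30000, eps=0.2):
    Q = {}  # Q[(pile, move)] = average observed outcome for the player to move
    N = {}  # visit counts, for incremental averaging
    for _ in range(episodes):
        pile, history = n_sticks, []
        while pile > 0:
            moves = [m for m in (1, 2, 3) if m <= pile]
            if random.random() < eps:            # explore occasionally
                move = random.choice(moves)
            else:                                # otherwise play greedily
                move = max(moves, key=lambda m: Q.get((pile, m), 0.0))
            history.append((pile, move))
            pile -= move
        # The player who made the last move wins (+1); propagate the
        # outcome backwards through the game, flipping sign at every ply.
        reward = 1.0
        for state in reversed(history):
            N[state] = N.get(state, 0) + 1
            Q[state] = Q.get(state, 0.0) + (reward - Q.get(state, 0.0)) / N[state]
            reward = -reward
    return Q

def best_move(Q, pile):
    moves = [m for m in (1, 2, 3) if m <= pile]
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))
```

With no data beyond its own games, the table rediscovers the known winning strategy of this game — leaving the opponent a multiple of four sticks.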
But what exactly do we mean by learning or thinking? A while ago I was watching Ex Machina with my wife. She asked whether AI is meant to create something that will 'act' like a human being with no discernible difference, or a man-made human equivalent in every sense of the word. I didn't know the answer. AI in science fiction leans towards the latter, though it would be more difficult to evaluate. When I thought of AI, Deckard, HAL 9000 and Lt. Commander Data came to mind.
The Turing test, it turns out, sought to answer a very specific question: "Can machines think?" Since there was, and is, no consensus on how thinking should be defined, Turing came up with an alternative formulation in the form of the imitation game. It would consist of…
…three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B…teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator…We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?
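The protocol Turing describes can be sketched structurally in a few lines of code. This is a toy with canned stand-in players of my own invention; the real test of course involves open-ended conversation:

```python
import random

# A bare-bones sketch of the imitation game's structure. player_a is the
# contestant the interrogator must identify (the machine, in Turing's
# substitution); player_b tries to help the interrogator. Both are modelled
# simply as question -> answer functions.

def imitation_game(player_a, player_b, interrogator, questions):
    # Hide the two players behind the anonymous labels X and Y.
    labels = {"X": player_a, "Y": player_b}
    if random.random() < 0.5:
        labels = {"X": player_b, "Y": player_a}
    # All communication is written question/answer pairs, as with
    # Turing's teleprinter.
    transcript = {lbl: [(q, player(q)) for q in questions]
                  for lbl, player in labels.items()}
    guess = interrogator(transcript)  # "X" or "Y": which one is A?
    return labels[guess] is player_a
```

If the machine imitates perfectly, the interrogator can do no better than chance, which is exactly the criterion Turing's question points at: does the interrogator decide wrongly as often as in the man/woman version of the game?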
In his paper, he also anticipates nine possible objections to his postulation of AI, refuting them one by one. The list seems to cover the majority of the points brought up in the 65 years since.
There are obvious weaknesses in the Turing test. But Turing was trying to reduce a 'big' question to a small, manageable one. The focus is on deception, and the results depend on the ability of the interrogator. The latter doesn't strike me as a big weakness: the machine is supposed to act like a human and fool one, so fooling any sort of human should count.
But the objective of AI research is not to beat the Turing test. For obvious reasons, AI researchers focus on measurable, specific goals. They try to create programs to chat, play games, conduct surveillance, and recognize speech and objects. Possibly that is the only way to approach it if we want tangible results.
I went through the Wiki pages on the Turing test, the Chinese room thought experiment and the philosophy of AI, and tried to summarize and add to a list of related questions. All of these questions belong more to the philosophy of AI than to AI in practice.
- Can a machine act like a human being well enough to fool or deceive humans in a majority of instances?
- If the machine appears to act like a human being, does that mean it is thinking? Or is it merely a simulacrum of thinking, whether it is following provided code or building up from there through 'unsupervised' learning, as happens in deep learning?
- Where do we set the threshold for human-like thinking? Are individuals with intellectual disabilities somewhat less human?
- Is thinking a logical process of breaking a big problem up and tackling it in steps or parts, or some combination thereof?
- Then what about instinct and intuition? Are those also forms of computational thinking? Maybe they just happen at a more automatic, deeper level: the first based on generations of inbuilt reflexes hardwired into our genes, the second drawn from unconscious perceptions.
- The two processes mentioned in the previous point can be presented as a refutation of the possibility of strong AI, since the kind of thinking that is most easily translated into machine language is considered the most mechanical. But don't instinct and intuition make us less 'human'? They are less autonomous than explicitly reason-driven thinking; they could be considered programmed into humans, and as little more than simple directives to action or basic algorithmic responses at that.
- What is the relation between thinking and intelligence? Is intelligence merely the ability to solve complex problems? Is there an emotional component to intelligence?
- If possible, should we add emotions to AI through digital, mechanical or chemical means? Will those emotions make the machine more or less human? Will machines then be as fallible as human beings, or will they acquire some sort of superior control over their emotions, extracting the positives and eliminating the negatives? If they do, can those still be defined as emotions?
- In fact, do emotions make us more or less human? Is dispassionate intelligence more important? Usually it is implied that love, friendship and the appreciation of beauty make us who we are. Or do they merely keep us from achieving our full potential?
- Assume we have a machine which 'thinks' for all practical purposes. It also feels, or at least its 'brain' feels and its body responds accordingly, and its internal thinking/computational processes are affected too. Does that mean it has consciousness? How do we define consciousness? Or do we assign it the ineffable entity called the soul? (I am not considering the religious definitions.)
- Is it even possible to define or quantify consciousness, given that it is derived from subjective experiences? Maybe machines can become sentient or conscious, or whatever term we want to use, without us knowing about it. Maybe their consciousness would be so different from ours that it is impossible for us to grasp.
- Does all this talk of consciousness and self-awareness imply an innate sense of superiority over other species? Should we also then consider what makes a dog a dog, or a fish a fish? Or maybe the earth? Is the earth conscious?
- Then there are the old sci-fi questions. Should we even attempt to create true strong AI, equivalent to humans in every respect? Do we assign them equal rights as life-forms, or do we keep them as slaves, our wishes being their commands? Would the latter count as torture? Would we be able to rise above our base instincts and do the right thing? If not, will we end up paying the price? Might we pay a high price for doing the 'right' thing too?
After creating this messy list, I found this nice page, which summarizes current thinking on the topic:
"Artificial Intelligence", Internet Encyclopedia of Philosophy (www.iep.utm.edu)
There is a very interesting mathematical objection to AI, based on Gödel's incompleteness theorems: roughly, that any consistent formal system powerful enough to express arithmetic contains true statements it cannot prove, yet a human can supposedly 'see' that such a statement is true, so the human mind cannot be such a system. I get the gist of it but am far from truly understanding it.
Other than that, the majority of the objections to AI seem to be variations on the claim that there is something ineffable in man which can never be present in a machine. I think it is more a matter of when than if. But I am not sure we will recognize it when it comes into existence.
PS: Wired published an article after AlphaGo won the third game of the series:
At first, Fan Hui thought the move was rather odd. But then he saw its beauty. “It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful.