In 1997, the chess computer Deep Blue beat then-reigning world chess champion Garry Kasparov in a match under regular time controls. This was, of course, a great victory for Deep Blue, but also a major milestone in the progress of Artificial Intelligence, right? Well, apparently not for everybody. Some people argued that Deep Blue wasn’t really intelligent, since “all it did” was use brute force to find good moves. To me, it seems simple: whoever or whatever plays the best game of chess (with the fewest resources) is the most intelligent at playing chess.
You might argue that this is simply a matter of definition, but the problem runs deeper. Whenever a new milestone in AI is reached, critics claim that the problem in question never required intelligence after all. This pattern has been summarized as Tesler’s Theorem:
AI is whatever hasn’t been done yet.
When people understand how a computer solves a specific problem, the process loses its magic and no longer seems intelligent, no matter how sophisticated the algorithm. Intelligence, to them, feels like something mystical; once the algorithm is explained, the mystery is gone, and so, according to the critics, it can no longer be intelligence. I suspect that when scientists discover how the human brain produces general intelligence, critics who are given that explanation, but told it describes some computer rather than the brain, will say that’s not real intelligence either. The human brain solves chess differently than Deep Blue did, but I’m sure the basic workings of how it does so will appear equally non-magical.
So why is this a problem? Am I so proud of the field of AI that I hate to see people bash it? A little. But no, that’s not my main concern. If you redefine AI over and over again, you’ll have a hard time seeing the progress the field has made over the past decades. More importantly, you’ll have a hard time estimating the progress it will make in the future. If you believe that human brains have something magical that completely sets them apart from computers, instead of believing that what AI does now simply needs further progress to meet human standards, you might conclude that AI will never be as intelligent as humans. You might then also fail to see the possible dangers of future AIs. Explaining the exact nature of these dangers is beyond the scope of this article; suffice it to say that thinkers such as Elon Musk, Stephen Hawking, and Max Tegmark have warned of them, up to and including human extinction.
Artificial Intelligence might very well be the biggest threat to humanity’s survival. Let’s think clearly about it.