The Big AI Hype — Lessons to be learnt from the Past
Many people believe AI (Artificial Intelligence) to be a relatively new development, bound to change the entire world for better or worse. Sometimes, however, it is a good idea to look back at history in order to view the future in a more relaxed way.
The first AI hype — let us call it “naïve euphoria” — took place in the 1960s, when the Cold War was on and the US military had almost unlimited funds to explore new technologies. I found a hilarious article in the New York Times from 1960 quoting the US Navy as having revealed “the embryo of a computer that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence”.
An important field of AI research was machine translation. The Americans wanted to better understand what the Russians were up to, and compared to flying to the moon this surely looked like an easy exercise. Unfortunately, it wasn’t. By the end of the 1960s the results of machine translation were still devastatingly bad, as illustrated by the famous (and probably apocryphal) faulty round trip of a Bible quotation from English into Russian and back: “The spirit is willing, but the flesh is weak” reportedly came back as “The vodka is good, but the meat is rotten”.
After these sobering results the US military cut most of its spending, and as a result AI research went into a deep sleep.
The second AI hype started in the 1980s. This time it was triggered by the Japanese government. Its then mighty MITI (Ministry of International Trade and Industry) had launched the ambitious 5th Generation Program targeting strategic technology fields. One of them was software, an area where the Americans still held a dominant position. The objective was to overcome the limitations of the then prevailing software languages and develop truly “intelligent” applications, enabled by new programming languages such as Prolog. It was the era when “knowledge-based systems” became the hot topic. The approach seemed promising indeed. The idea was to extract knowledge from human experts and feed it into a computer. This was done by coding plenty of “if-then” rules into the knowledge base.
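The if-then principle behind those systems can be shown in a few lines. The following is a minimal sketch, not a real expert system; the facts and rules are invented for illustration only:

```python
# Toy knowledge-based system in the spirit of 1980s expert systems:
# human knowledge captured as explicit if-then rules.
# All facts and rule contents are hypothetical examples.

facts = {"fever", "cough"}

# Each rule: if all conditions are known facts, conclude something new.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

# Forward chaining: keep applying rules until no new fact is derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# → ['cough', 'fever', 'flu_suspected', 'recommend_rest']
```

The catch, as the article notes, is that every one of those rules had to be hand-written by an expensive human expert.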
Large expert systems were successfully applied in various fields such as medicine, earthquake prediction, oil exploration and computational linguistics. A new generation of machine translation software used deep linguistic analysis to translate human language. Despite impressive results, however, expert systems never succeeded on a broad scale. The main reason was cost: building such knowledge-based systems required too many expensive human experts and too much of their time. As a consequence, the second AI hype also went into a deep sleep.
The third AI hype started around 2010. This time a radically different approach is being applied. Instead of trying to extract knowledge from human experts, the system is designed to learn by itself by analyzing huge amounts of data. The magic word for this data-driven approach is DNN (Deep Neural Networks). Training such a neural network requires, in addition to data, access to enormous computing power.
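The contrast with the rule-based era can be shown in miniature. Below is a single trainable unit, not a deep network, learning a decision boundary from examples by gradient descent; the data set and learning rate are invented for illustration:

```python
import math
import random

# Data-driven approach in miniature: no hand-written rules.
# A single logistic unit learns from labeled examples instead.
# Hypothetical data: label is 1 when feature1 + feature2 > 1.
random.seed(0)
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]

w1, w2, b = random.random(), random.random(), 0.0

def predict(x1, x2):
    z = w1 * x1 + w2 * x2 + b
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# "Training": repeatedly nudge the weights to reduce the error
# between prediction and label (gradient descent on cross-entropy).
for _ in range(2000):
    for (x1, x2), y in data:
        err = predict(x1, x2) - y
        w1 -= 0.5 * err * x1
        w2 -= 0.5 * err * x2
        b -= 0.5 * err

print([round(predict(x1, x2)) for (x1, x2), _ in data])
# → [0, 1, 0, 1]  (matches the labels)
```

Real deep networks stack many such units in layers and train them on millions of examples, which is why both data and computing power are decisive.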
Image recognition was one of the first fields where neural networks could show their supremacy over traditional methods of pattern recognition. We all have witnessed massive improvements since 2012 when searching for images on the web. Meanwhile AI-based image classification has also found its way into medical diagnosis. Last year, Stanford University developed a skin cancer classification system trained on 130,000 skin lesion images. The neural network achieved performance on par with human experts, i.e. a reference group of 21 board-certified dermatologists.
Meanwhile, AI has conquered many new fields, and the list is growing by the week. A recent application is computer-based generation of music. Composing music was long considered more an art than a science, requiring creative rather than rational skills. Well, it seems like things are about to change. In the project DeepBach a neural network was trained with 400 chorale sheets written by Johann Sebastian Bach. The results are impressive: the system generates four-part chorales in the style of Bach from scratch. Let me be honest: I was not able to distinguish between the human and the computer-generated music.
So where is the AI journey heading? It comes as no surprise that there are two rather different predictions. The optimistic view is that AI has just started to conquer our world. What we have today would merely deserve to be called “weak AI”, i.e. it needs to be trained with large data sets and can only perform narrow tasks. In the future there will be “strong AI”, which is able to learn by itself and can perform general tasks. Some researchers predict that such a system with strong AI will even be conscious of itself. Well, that is almost identical to what the US Navy was claiming in 1960 (see the NY Times article above) — and they failed miserably.
The more cautious view is that the current AI hype will suffer a fate similar to the earlier ones. Enormous expectations have built up, which quite likely will lead to equally great disappointment. This is not to say that AI will not be a success story in the long run. But for the near future it could be a good idea to be prepared for what we have seen in the past: AI taking another deep sleep.