Motius
May 2, 2018 · 8 min read

In the 1970s and 1980s, the tech scene witnessed the so-called “AI winter”: a period in which a phase of hype was followed by disappointment, a loss of interest in AI, and the cutting of funding. Although AI impacts our lives far more today than it did back then, there are striking similarities between that period and now. AI is here to stay, but we might hit a stagnation point soon.

AI in the 1980s: eternal fetish, cold war and government money

It is difficult to pin down exactly when AI was born; AI in its modern definition is said to have been born in the 1950s. Ever since, it has carried big dreams, aspirations and fears. The cold war gave AI a sense of patriotic mission at the beginning: “machine translation” researchers were tasked with developing a system that would automatically translate Russian to English to speed up the work of intelligence services. That brought in a first wave of significant governmental funding for AI research. It also helped captivate the attention and imagination of the public. In 1954, a prototype that used only six rules to translate some Russian into some English (mostly wrongly) was highly publicized by the New York Times in a front-page piece. The media was not only appealing to the patriotic attitudes of the time, it was also satisfying an eternal fetish of the human mind, reignited by the golden age of science fiction in the 1940s. This was not just scientific curiosity; it was deeper and older than that. It was fertile ground for science fiction to develop several visions of AI and project them into mainstream culture.

This, combined with a sense of urgency and superiority during the cold war, resulted in overly ambitious statements, perhaps the most iconic of which is Marvin Minsky’s from 1970: “from three to eight years we will have a machine with the general intelligence of an average human being.” Such statements were also aimed at attracting more funding for the research. As Hans Moravec put it: “Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. […] they felt they couldn’t in their next proposal promise less than in the first one, so they promised more.”

“More Moore” didn’t deliver in time

It is remarkable that such bold statements did not coincide with any breakthrough in understanding our own intelligence. Instead, they coincided with the birth of Moore’s law in 1965. That created a belief, or rather a hope, that the “solution of intelligence” was within reach and that it was just a matter of computational power, and therefore a matter of time. Inevitably, the weight of those high expectations crushed the whole AI sector, which had become a billion-dollar industry by the 1980s. Despite advances in computational power, the pioneering projects that drew funding from the governments of the US and the UK failed to deliver. That culminated in two iconic reports: the Lighthill report in Britain in 1973 and the Automatic Language Processing Advisory Committee (ALPAC) report in the US in 1966. Those two reports caused governmental funds in Britain and the US to be cut and research in AI to be frozen. The AI winter had arrived.

To summarize: AI started off with specific applications that were exaggerated by a press appealing to a curious public, creating a hype that researchers and businesses used to attract funding while further inflating expectations, all without any significant breakthrough in understanding the nature of natural intelligence.

Sound familiar? It should.

AI was resurrected in the form of machine learning in the early 2000s. The prophecy that computational power would eventually bring about intelligence was ultimately proven correct, but for the wrong reasons. It wasn’t the increase in computational power itself, it was its spread. Having more computers everywhere meant having more data. Also, having more affordable GPUs meant that more algorithms could run on more devices. That coincided with the rise of huge businesses in possession of equally huge amounts of data (Google, Amazon, etc.) that had pretty good ideas about how to make money out of it. And there it was: the new AI era was born.

Those great ideas were basically product placements and advertisements. With all the consumer data those companies could gather, being able to mine that data and find patterns meant that they could place products and information very precisely for their users. That obviously became big business and therefore started attracting money to AI again. The same techniques were then also used, for example, to find patterns in visual and audio data, creating products around image and voice recognition and most of the other contemporary AI use cases.

Although those advances and the businesses built around them are far more sophisticated than the ones from the 1970s, the dynamic around them is strikingly similar. The state and promise of AI seem to be exaggerated again, with the consequence of attracting more money into it. Also similar is the missionary rhetoric, close to the one in and around DARPA in the 1970s, albeit not political this time but rather “silicon-valley-ish”. “Human-like” and “superhuman” AI fill up conferences, blog posts, white papers, etc. Expectations and fears are sky-high again, and we can’t get enough of it. Our fetish is still going strong. All that with no matching advances in neuroscience and in understanding natural intelligence. The underlying assumption is again of the “it-is-just-a-matter-of…” type; this time it is not about computational power alone, it is about data.

I don’t think it is. I think there is still a big intellectual leap to be made before we get even remotely close to “human-like” intelligence. If we fail to make it before the hype around machine learning and data mining dies out, we might be entering another AI winter.

How human-like is AI today?

A lot of AI and machine learning revolves around “classifiers”: techniques and systems that classify things by recognizing patterns in big amounts of data; inconveniently big amounts. They could classify you as a suitable recipient of a certain advertisement, classify a product as something suitable for you, classify certain pixels in an image as belonging to a cat (image recognition), classify a certain spectrum of sound waves as belonging to a certain word (speech recognition), or classify a certain predefined action as being a solution to a predefined problem.
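To make the term concrete, here is a minimal sketch of such a classifier, assuming Python with scikit-learn and its bundled digits dataset (a toy stand-in for the much larger datasets real systems need): the model learns a mapping from raw pixels to labels purely from labeled examples.

```python
# Minimal classifier sketch: learn to label 8x8 digit images from examples.
# Real-world image or speech classifiers follow the same pattern, only with
# far bigger models and far more data -- which is exactly the point above.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1800 labeled 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=2000)  # a simple linear classifier
clf.fit(X_train, y_train)                # pattern recognition from labeled data
print("held-out accuracy:", clf.score(X_test, y_test))
```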

Compared to the human brain, that “classifying” capability exists in brain structures as low as the primary sensory cortices, if not lower. It is worth emphasizing here that relatively small natural neurological systems, that of a rat for example, are currently better than AI systems in the sense that they need less data and are more versatile in their functions. Higher structures in the brain, like the prefrontal cortex, are the ones traditionally thought to give humans their “characteristic” levels of intelligence. More precisely, they are responsible for functions like long-term planning, problem solving, creativity, attention, language, etc. (From a neuroscientific perspective this is not quite right: the brain should be understood as a more holistic organ in which the “lower” structures, as well as the existence of other brains to communicate with, contribute massively to those higher functions. For the sake of this discussion, let us ignore that.)

Data & Moore cannot do it by themselves

Mimicking those structures is not a matter of quantity of data, of layers of neurons, of processors, etc. Elephants have more neurons in their brains than humans. Certain whales have more neurons in their prefrontal cortices than humans. We are unlikely to reach those levels with “brute-force” techniques; it is a matter of design.

Artificial Neural Networks (ANNs) nowadays start out with trivial, general-purpose designs. It is the training with big amounts of data that sculpts those networks into “classifiers”. Maybe that explains why natural structures need less data: they are not designed trivially. Geoffrey Hinton, one of the pioneers of deep learning, makes the case for “starting over” rather than “more of the same” in AI here.
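As an illustration, here is a minimal sketch (assuming PyTorch) of what “trivial and general” means: the architecture below encodes almost nothing about any particular task, and its randomly initialized weights produce meaningless output until training data sculpts them.

```python
# A generic, untrained network: nothing in this design is specific to cats,
# words, or shopping habits. Only training on large labeled datasets turns
# it into a useful classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),   # arbitrary, general-purpose layers
    nn.Linear(32, 10),              # e.g. 10 output classes
)

x = torch.randn(1, 64)              # some arbitrary input
print(model(x))                     # random weights -> meaningless scores
# A training loop (gradient descent over many labeled examples) is what
# "sculpts" these weights; the initial structure carries little prior design.
```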

A very useful feature that natural intelligence has and AI still doesn’t have is the following: being able to start out with a few abstract goals and reinterpret them in different contexts, break them down into different levels, recognize problems in those contexts and on those levels, find solutions for them and execute them; all in real time and without specific algorithms that define the problem on every level and the possible solution space.

A bunch of Alexas thinking of creating an eBay would be next-gen AI

As of today, Alexa can generate knowledge about the wants and do-not-wants of its user (classifiers). It can also communicate with other digital or human agents in its environment, including suppliers and other users. It also has the general abstract goal of satisfying its user (presumably). With the skill mentioned above, it would be able to come up with the concept of “making a deal” with those other agents to satisfy its users. Given the abstract goal of satisfying its user and an understanding of its environment, it would be able to recognize that if it used its communication channels with other Alexas to coordinate their orders, they could minimize the costs and ETA of the merchandise. Or that maybe they could exchange objects no longer needed by their respective users for other goods or money (Alexas creating an eBay… hehe). It would do so without knowing in advance that such solutions are possible, and without knowing that the abstract goal of “satisfying the user’s need” could be reinterpreted in another context as “selling unneeded goods for money or other goods”.

For us to be able to create such intelligence, it would be very helpful to have breakthroughs in neuroscience. We still need to find out how neural networks in nature do this. We still face major challenges there.

If nothing else, our fetish will keep AI going

Another avenue AI could take before a winter comes is the enhancement of human intelligence itself. Brain-Machine Interfaces have also been resurrected recently. They open up the possibility to (theoretically) read information out of some areas of the brain as well as (again, theoretically) feed information in. If the technical difficulties explained in the linked article are overcome soon, AI would offer a way to extend the human sensory… Oops, there goes that fetish again.

Unlike other technological hypes, AI seems to captivate something very deep inside us. That is its biggest threat: that fetish will always create unrealistic expectations and make AI destined to disappoint. At the same time, that fetish ensures that AI keeps coming back.


Subscribe for your biweekly dose of tech and our raw opinion on the latest trends: http://tech-dosis.motius.de
