DIGITAL TRENDSPOTTING 2019:2B — The revolution in the first two of the four steps of artificial intelligence

Rufus Lidman
Published in AIAR
Jan 27, 2019 · 6 min read

Having gone through the very foundation of AI, that is, the data, we will now begin the review of this year’s development in the field by looking at the development of the technology itself. And the fact is that AI development, just like our “revolutions” over the centuries, can be divided into four distinct steps. The first two of these are described in turn below.

Big data

It was when data growth escalated at the dawn of the millennium that big data emerged, using advanced digital technology to process truly large volumes of data. Initially this was a matter of largely manual calculations, later streamlined through statistical packages such as SAS and SPSS.

But as soon as exploding digital data volumes approached the terabyte and petabyte scale, the so-called “petabyte age” (which has since moved on to the zettabyte era), handling big data was no longer primarily a matter of statistical processing and analysis; it instead came to be called data mining, carried out with frameworks such as Hadoop.

With the help of this advanced data processing, we were able to generate both analytical models and prediction models at a completely different level than before, a market that, at the turn of the current decade, was worth around $100 billion (around $170 billion today).

However, we still used big data for the most part “manually”: we made “queries” to the computer, and it responded “rather” intelligently.
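To make that contrast concrete, here is a minimal sketch of the “manual” workflow (in Python with pandas, using invented toy numbers): the analyst formulates the question entirely by hand, and the machine merely executes it.

```python
# A minimal sketch of the "manual" big-data workflow the text describes:
# the human writes the query, the machine only executes it literally.
# The data below is invented for illustration.
import pandas as pd

orders = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "revenue": [120, 80, 150, 95],
})

# The "query": formulated by the analyst, answered word for word.
print(orders.groupby("region")["revenue"].sum())
```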

Artificial Intelligence

It is not until we start dealing with AI for real that we get something resembling “real” intelligence, where the system automatically learns to become better at a certain task, using data as its raw material.
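As a minimal illustration of what “learning from data” means in practice (a sketch in Python with scikit-learn, on invented toy data), note that no rule is ever written down; the model is fitted from examples alone:

```python
# A hedged toy example: the system is never told the relationship
# between spend and sales; it infers ("learns") it from example pairs.
from sklearn.linear_model import LinearRegression

X = [[10], [20], [30], [40], [50]]    # input: advertising spend (invented)
y = [15, 29, 44, 61, 74]              # outcome: sales (invented)

model = LinearRegression().fit(X, y)  # the learning step
print(model.predict([[60]]))          # prediction for an unseen input
```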

What artificial intelligence does, compared with the authentic, human kind (assuming a human skilled enough to perform statistical analysis at the level required here), is, to put it simply, to bring together two general dimensions through “intelligent machines” instead of people.

The first is the ability to process enormous amounts of data, and numbers of variables within that data (in ML even identifying, structuring and grading new categories and variables in this gigantic mass by itself), at a level no human being could ever handle, not even with computer support as strong as in big data.

The second is the ability to iterate through different types of scenarios at a speed no human being ever could, whether in theory (scenario analyses that test probable outcomes in different situations) or in practice (adjusting different parameters of reality, following the results and correcting the model based on the outcome).
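A minimal sketch of both dimensions at once (Python with scikit-learn and NumPy, on synthetic data): the machine groups thousands of unlabeled observations into categories entirely by itself, and then iterates over candidate scenarios, here the number of clusters, far faster than any manual analysis could.

```python
# Dimension one: unsupervised discovery of categories (clustering).
# Dimension two: rapid iteration over scenarios (different values of k).
# The data is synthetic; the three "hidden" centres are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=c, scale=0.5, size=(1000, 2))
                  for c in [(0, 0), (4, 0), (2, 3)]])  # 3,000 points

scores = {}
for k in range(2, 7):                        # iterate the scenarios
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    scores[k] = silhouette_score(data, labels)

best_k = max(scores, key=scores.get)
print("categories found:", best_k)           # typically recovers 3
```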

And before we move on, given the conceptual confusion that often prevails in the AI area, it should be noted for the sake of clarity that ML and AI are not different things. Artificial intelligence is the field that came first, and today stands as the umbrella term for all data processing, analysis, prediction and, sometimes, action that is performed not by a person but automatically by a machine. Machine learning is, in turn, the type of AI that was later developed to learn from data so that it can perform tasks entirely by itself (such as Google’s search algorithm and Amazon’s recommendations).

But all this is actually just the prelude, for up until now we have been dealing with “narrow AI” (ANI), specialized in performing processing, analysis and execution within one specific area. The biggest breakthrough for science, technology and society is the type of AI built on proper Deep learning, where techniques such as neural networks, NLP, and speech and image recognition let systems teach themselves much as people do, and, like people, increasingly do so across different types of areas (the road towards AGI).
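To show the core mechanism in its smallest possible form, here is a sketch of a neural network (plain Python and NumPy, nothing like production scale) that teaches itself the XOR function from examples alone; real Deep learning differs mainly in depth, data and compute:

```python
# A tiny neural network learning XOR by backpropagation. Weights start
# random; everything the net ends up "knowing" comes from the examples.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)        # hidden layer, 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)        # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass: current guesses
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass: assign blame
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [0, 1, 1, 0]: the net taught itself XOR
```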

To put it all in context: somewhat simplified, it was the first model, AI, that enabled Deep Blue to beat Kasparov in 1997, while it was the development of ML that made Watson a multi-millionaire on Jeopardy in 2011. However, the really big divide, the one most people in the area know of today, came in 2016, when the world’s foremost AI lab, DeepMind, taught its pupil AlphaGo to beat world champion Lee Sedol (including the legendary move 37) at Go, the world’s most difficult game, famous for having more possible positions than there are atoms in the universe.

And it is only now that we get to Deep learning, which has been the real revolution of the past year. First, DeepMind trained another pupil, AlphaZero, who, unlike previous machines that learnt from humans’ games, instead got to learn by playing against herself, tabula rasa. After three days of self-training at Go she was good enough to beat the reigning world champion (i.e. her own brother AlphaGo :-D), a feat its developers explain in terms that point straight towards the next level:

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge.”

But that is not enough. Around the turn of last year, AlphaZero then played chess against herself for four hours, and did so well enough to beat the reigning world champion (also a machine, Stockfish). And this Duracell bunny did not stop there: she went on to play the Japanese game shogi against herself for two hours, and took the opportunity to beat the world’s best shogi program, Elmo.

Triple world champion, in three different games, against three other machines (because people, despite our hundreds of years of experience, knowledge and training, no longer stand a chance). And all this in one day.
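To give a concrete, if toy-scale, feel for what learning tabula rasa through self-play means, here is a minimal sketch in Python (emphatically not DeepMind’s method; the game, Nim, and all parameters are chosen purely for illustration): an agent that starts with zero knowledge and discovers a winning strategy entirely by playing against itself.

```python
# Tabula rasa self-play on Nim (take 1-3 stones; whoever takes the last
# stone wins). The agent begins knowing nothing and improves only from
# the outcomes of games against itself (Monte Carlo value updates).
import random

N_STONES, ACTIONS = 10, (1, 2, 3)
ALPHA, EPSILON = 0.5, 0.1
Q = {}   # Q[(stones, action)] -> learned value for the player to move

def choose(stones, explore=True):
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)          # occasionally experiment
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

for episode in range(20000):
    stones, history = N_STONES, []
    while stones > 0:                        # the agent plays both sides
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    reward = 1.0                             # the last mover won the game
    for state, action in reversed(history):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + ALPHA * (reward - old)
        reward = -reward                     # alternate player perspective

# The agent typically rediscovers the known winning strategy:
# always leave the opponent a multiple of 4 stones.
print({s: choose(s, explore=False) for s in range(1, N_STONES + 1)})
```

The point of the toy is the same as AlphaZero’s: nothing human goes in except the rules of the game, and whatever strategy comes out is one the system found for itself.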

Few would dispute that we are now talking about “intelligence” for real, what Google’s own researchers call “superhuman performance”, with applications that will have a completely insane effect on huge parts of society, as self-learning systems are applied to the development of drugs and new materials, to energy efficiency, and more. Or, as the researchers themselves put it:

“It’s like an alien civilisation inventing its own mathematics. What we’re seeing here is a model free from human bias and presuppositions. It can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same.”

The system has achieved these staggering results by “avoiding” humanity: this all-new species of machine has reached a whole new level precisely by being completely dehumanized, free from human error and deficiencies.

At the same time, the great breakthrough will only come when the system is programmed to teach itself (!), thus leading to AGI (Artificial General Intelligence) and, over time, ASI (Artificial Super Intelligence). More about this in upcoming sections :-)

· Published sections on social context: Foreword, Section 1A, Section 1B, Section 1C, Section 1D, Section 1E, Section 1F

· Published sections on AI so far: Section 2A, Section 2B

· Digital Trendspotting 2019: Trendspotting is released as a serial at the dawn of the new year. It is delivered in Swedish here on LinkedIn (https://www.linkedin.com/in/onlinestrategy/), in English on Medium (https://medium.com/@blockchainboss), and sometimes with video abstracts from our blockchain journey (https://www.instagram.com/rufus.lidman.blockchainjourney/).


Data disruptor with 50,000 followers. 300 lectures, assignments on 4 continents, 6 ventures with 2–3 ok exits, 4 books, 15 million app downloads.