Artificial Intelligence as Civilized Man’s Deadly Sin

Here and below, drawings by Elizaveta Vershinina

Report at the panel session: Artificial intelligence and geopolitics
7th World Peace Forum (WPF), July 14–15, 2018, at Tsinghua University in Beijing, China.

A. N. Kozyrev

Artificial intelligence (AI) has become an important factor in geopolitics mainly due to two circumstances. First, now more than ever before there is a real opportunity to use AI as a weapon in the military sphere. Problems of detecting the noise produced by submarine screws and other engineering tasks of an essentially military nature have been solved before, but today we are talking about creating robots that will kill people. Second, investments in civil applications are growing rapidly, primarily private investments in the development of commercial AI applications. Trends in AI development and some optimistic expectations are considered in the report “Top Trends to Watch in 2018” prepared by CB Insights (CB Insights, 2018).

However, the situation is still not quite clear. The impact of AI investments on economic growth at present is, if anything, negative. There is a mismatch between expectations and statistics. This is the main conclusion of the National Bureau of Economic Research report entitled “Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics” (NBER, 2017).

Aggregate labor productivity growth in the U.S. averaged only 1.3% per year from 2005 to 2016, less than half of the 2.8% annual growth rate sustained from 1995 to 2004. Fully 28 of 29 other countries for which the OECD has compiled productivity growth data saw similar decelerations. The unweighted average annual labor productivity growth rates across these countries was 2.3% from 1995 to 2004 but only 1.1% from 2005 to 2015.

Erik Brynjolfsson, Daniel Rock, and Chad Syverson 
 Artificial Intelligence and the Modern Productivity Paradox: 
 A Clash of Expectations and Statistics
 NBER Working Paper No. 24001, November 2017. JEL No. D2, O3, O4

No less frustrating today is the increase in the cost of computing power for deep learning. The study (Amodei & Hernandez, 2018) presents figures showing that since 2012 the amount of compute used in the largest AI training runs has been growing exponentially with a 3.5-month doubling period (for comparison, Moore’s law had an 18-month doubling period). At this rate of growth in demand, the development of computer hardware cannot keep up with the needs.
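To illustrate the scale of this gap, the annual growth factor implied by each doubling period can be computed directly; this is a minimal sketch (the doubling periods come from the figures above, the function name is mine):

```python
def annual_growth(doubling_months: float) -> float:
    """Growth factor over 12 months for an exponential
    process with the given doubling period (in months)."""
    return 2.0 ** (12.0 / doubling_months)

ai_compute = annual_growth(3.5)   # largest AI training runs since 2012
moores_law = annual_growth(18.0)  # classic Moore's-law pace

print(f"AI training compute: x{ai_compute:.1f} per year")
print(f"Moore's law:         x{moores_law:.2f} per year")
```

A 3.5-month doubling period means roughly a tenfold increase in compute demand per year, against less than a 1.6x yearly improvement at the Moore’s-law pace, which makes the widening gap between demand and hardware visible at a glance.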

The gap between the real achievements of AI and expectations is even more pronounced, as shown in a publication by two authors (Marcus & Davis, 2018), one of whom is a psychologist and the other a specialist in information technology. In it, they poke fun at a new Google program.

My report focuses on another important and much more worrying aspect of AI technology development: the impact of AI on humans and on our civilization in general. The reference to the concept of deadly sins in the title of the report is not meant to be ostentatious; rather, it follows the tradition laid down by Konrad Lorenz, the great naturalist and philosopher of the 20th century. The title of his book (Lorenz, 1973, 1974), “Civilized Man’s Eight Deadly Sins”, alludes to the biblical text about the seven deadly sins. But we are not talking about the sins of an individual; we are talking about dangerous trends in the policies of developed countries and in the development of mankind in general, trends which could lead mankind to its death.

Konrad Lorenz on Feedbacks and Homeostasis

“One structural property common to all more highly integrated systems is that of regulation by so-called feedback cycles or homeostasis”.

“In nature, there are countless regulating cycles. They are so indispensable for the preservation of life that we can scarcely imagine its origin without the simultaneous “invention” of the regulating cycle. Cycles of positive feedback are hardly ever found in nature, or, at most, in a rapidly waxing and just as rapidly waning process such as an avalanche or a prairie fire. These phenomena resemble various pathological disorders of human society”.

The eight deadly sins from Lorenz’s book are united in being all associated with the violation of negative feedbacks in nature and society as a result of the purposeful actions of people with good intentions. The book discusses the destruction of the negative feedbacks that provide homeostasis in biological systems, of which mankind is a part. But it is not only about such feedbacks. Negative feedbacks are usually perceived as inconveniences or even as evil. The breaking of such ties in each particular case is easy to justify, since it is caused either by the need to solve some serious problems, including geopolitical ones, or by considerations of humanity, or by the desire to improve the quality of life as it is understood here and now, at the time of making a specific decision. But each of these feedback breaks has long-term consequences that are not always predictable and may ultimately prove fatal for mankind.

Each of the deadly sins named by Lorenz corresponds to a separate chapter in his book, which has nine chapters in total. The first chapter is devoted to homeostasis, feedbacks and other general issues; each of the following eight chapters develops these ideas in connection with one of the sins.

They are listed below in the order chosen by Lorenz so that it is convenient to refer to them later.

Chapter 1. Structural Properties and Functional Disorders of Living Systems; 
 Chapter 2. Overpopulation; 
 Chapter 3. Devastation of the Environment; 
 Chapter 4. Man’s Race Against Himself; 
 Chapter 5. Entropy of Feeling; 
 Chapter 6. Genetic Decay; 
 Chapter 7. The Breaking with Tradition; 
 Chapter 8. Indoctrinability; 
 Chapter 9. Nuclear Weapons

AI is not explicitly mentioned among the listed eight sins (chapters 2–9), nor is it mentioned anywhere in the text of the book. However, it is easy to see that AI is clearly related to at least three of the mentioned sins: man’s race against himself (Chapter 4), the breaking with tradition (Chapter 7) and, as follows from the publication (RAND, 2018), nuclear weapons (Chapter 9). Deeper analysis shows that there are many more connections. Perhaps, although it is debatable, AI has no connection with the overpopulation of the Earth (Chapter 2) or the devastation of the environment (Chapter 3). In the other cases, the relation is quite clear. Constant communication with bots can affect feelings and the ability to feel (Chapter 5); getting rid of routine operations leads to degradation in what is not routine (Chapter 6). Thus, the constant use of a navigation system while driving a car leads to an inability to orient oneself on the ground in a real environment.

But the most intriguing is the relationship of AI with what Lorenz’s book calls Indoctrinability. In the case of AI, this specific disease of modern science manifests itself more vividly than anywhere else.

Most of us — and we must realize this — love our hypotheses. As I once said, it is a very painful but, at the same time, a healthy and rejuvenating daily morning exercise to throw a pet hypothesis overboard. Naturally, our “love” of hypothesis is influenced also by the length of time we have held it.
If the hypothesis was the discovery of a new explanatory principle and therefore has many followers, our adherence will be strengthened by the mass effect of an opinion shared by many people (Konrad Lorenz).

The creation of an artificial intelligence superior in power to the human intellect could be a separate deadly sin. Possible problems arising, or, more precisely, expected in this regard are actively discussed in some societies and organizations. For example, this question is debated in the publication (Yudkowsky, 2008) and in its bibliography, as well as in many later publications. But mostly attention is drawn to real or imaginary successes in the field of neural networks and deep learning.

It should also be considered that back in 1972, when Lorenz wrote his book, the real achievements of AI were too modest to put their destructive power on a par with nuclear weapons and genetic decay. For example, a robot with AI could stack a tower of 4 cubes using a “hand”-type manipulator.

For the 5th cube its intelligence was not enough. But in science fiction of a considerably earlier period (the 1950s and 60s), robots were capable not only of serving man but also of escaping his control, and even of falling in love, risking burning out all the vacuum tubes of their electronic “brain” in a surge of passion.

In part these sentiments were shared by the researchers themselves. One Soviet researcher recalls: “I remember the anticipation after the spurt of the 1950s–60s. It seemed that just around the corner, in just 3 years, in 5 years, a general-purpose artificial intelligence would be created, capable of self-awareness, of setting its own goals, of cooperating with a person on an equal footing. And then it became clear that the researchers were marking time and nothing important was happening.”

The situation with AI applications now differs in many respects from what it was in the sixties and early seventies of the 20th century. Today AI is a kind of technological aggregator which has absorbed many different directions, from data mining to robotics. It attracts the maximum interest of investors and business. And yet the similarity in some of the fundamental problems of AI itself is striking. It is time to remember Indoctrinability (Chapter 8) as one of the diseases of modern science and the scientific community. In different countries, this disease had then, and has now, its own shades. It affects AI research too.

In the 1960s–70s in the USSR, the subject of AI was surrounded by a special aura with a clear touch of decadence. Among the intelligentsia, it was believed that cybernetics as a science had been “crushed” for political reasons under the obscene label “whore of imperialism”. Was that really the case? The question is more than controversial, because significant funds were directed to defense- and space-related areas of cybernetics, and quite peaceful research was carried out too. However, in the eyes of the general public and the humanitarian intelligentsia, the word of mouth about the “whore” explained the reasons for the rather modest success of cybernetics and, above all, of AI in the civil sector, which was in plain sight.

The same thing happened across the ocean, but without the ideological tint. Representatives of the exact sciences saw the eclecticism of cybernetics and the discrepancy between its achievements and promises, and they “smashed” this branch of science.

The famous 1973 report by the mathematician Sir James Lighthill, commissioned by the British Science Research Council (Lighthill, 1973), led to an almost complete dismantling of AI studies in England. It is noteworthy that the report was discussed in public, the discussion was broadcast by the BBC, and the recording is preserved; you can watch it on YouTube. The essence of the report, formulated in one sentence, was that there is no such discipline as AI: all the real problems solved by AI can be solved within other disciplines, and there are no real achievements in the part that supposedly unites them.

Something similar is happening now, but this time everything is built around money. The gap between real achievements and fantasies on the topic was the field’s birth trauma; it turned malignant when the science-fiction writers were replaced by marketers. Only research that promises quick money has a chance of investment. “A lot of money is invested in systems capable of real-time substitution of politicians’ voices and facial expressions, and of calculating the behavior of voters.” But that is not a scientific problem.

On closer examination, some sensations turn out to be not quite scrupulous presentations of the material. A typical example: some sources presented the victory of AlphaZero over Stockfish in a match of 100 games as a world-level sensation.

An attempt at an objective review shows that, for the sake of “equalizing the conditions”, the opening and endgame databases were disabled in Stockfish.

But the playing strength of Stockfish was largely defined by these databases. At the same time, Stockfish ran on standard CPUs, while AlphaZero used a fundamentally new tensor processor, which gave AlphaZero an additional advantage. In other words, here we can talk about success in the development of new processors, not of intelligence. There was success in the development of neural networks too, of course, but it is not sensational. Again, it is not about universal intelligence but about a new approach to solving a specific problem.

Here, if one wishes, one can again say that the theme of AI was “attacked” by representatives of the exact sciences. A typical example is a lecture on the human brain by the famous physicist Freeman Dyson (Dyson, 2014). He told an instructive story about the report prepared by his long-standing friend, the by then late Sir James Lighthill. Back in 1973, all AI research (in a broad sense) was clearly divided into three categories, identified in the report as A, B and C. Categories A and C have clearly defined motives: each has a clearly defined general direction of its intended goals, but these two directions are completely different. In both categories, over a period of twenty-five years (from Turing’s 1947 article “Intelligent Machinery” to the publication of the 1973 report), some progress was made, although expectations were often not met. Category A consisted of purely applied engineering studies such as speech recognition, machine translation, and some other practical problems usually solved by people. Category C contained everything associated with cognitive, neuromorphic, brain-like computing; some promising research work in the field of neuroscience was noted.

During the same period, research was conducted in a third category (category B), whose goals and objectives are much harder to pin down, but which draws heavily on ideas from both A and C and, vice versa, is supposed to influence them. Studies in category B were supposed, through this interdependence with research in categories A and C, to ensure the unity and coherence of the entire AI research area. However, progress in this intermediate category B was even more disappointing, both in relation to the actual work done and in relation to establishing good reasons for such work, and thus for establishing a single discipline covering categories A and C.

Then Freeman Dyson says:

A lot has happened in [those] 40 years in area A. In area A after 40 years of blood, sweat and tears the Dragon programs of automatic speech recognition are finally working well, and automatic translation programs are producing results that are imperfect, but useful. In area C there’s been less progress. And in area B — none at all.

As a result, we conclude that one should not be afraid of the emergence of a super-artificial intelligence. Mankind has a good chance of killing itself much quicker by continuing to follow the other eight sinful traditions marked out by Konrad Lorenz. The danger of AI is therefore more indirect: it is associated with the reinforcement of those eight traditions. This applies to the possibility of an unintentionally provoked nuclear war, to the race of mankind against itself, to the loss of feelings, and to the destruction of traditions. But the main danger is Indoctrinability, the disease of the civilized society’s mind.


Amodei & Hernandez (2018), AI and Compute, by Dario Amodei and Danny Hernandez, OpenAI blog.

CB Insights (2018), 15 Trends Shaping Tech In 2018

Dyson, F. (2014) Are brains analogue or digital? Lecture at the University College Dublin

Gerbert, Philipp (2018), AI and the ‘Augmentation’ Fallacy, May 16, 2018

IBM (2009), The Cat is Out of the Bag: Cortical Simulations with 10^9 Neurons, 10^13 Synapses, by Rajagopal Ananthanarayanan, Steven K. Esser, Horst D. Simon, and Dharmendra S. Modha

Lorenz, Konrad (1973), Die acht Todsünden der zivilisierten Menschheit. R. Piper & Co. Verlag, München, 1973.

Lorenz, Konrad (1974), Civilized man’s eight deadly sins. “A Helen and Kurt Wolff book”, 1974.

Marcus & Davis (2018), A.I. Is Harder Than You Think, by Gary Marcus and Ernest Davis, The New York Times, May 18, 2018. (Mr. Marcus is a professor of psychology and neural science. Mr. Davis is a professor of computer science.)

NBER (2017), Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics, by Erik Brynjolfsson, Daniel Rock, and Chad Syverson, Working Paper 24001, National Bureau of Economic Research, 1050 Massachusetts Avenue, Cambridge, MA 02138, November 2017

RAND (2018), How Might Artificial Intelligence Affect the Risk of Nuclear War? by Edward Geist and Andrew J. Lohn, RAND Corporation Perspective

Yudkowsky, Eliezer (2008), Artificial Intelligence as a Positive and a Negative Factor in Global Risk, in Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345. New York: Oxford University Press.