Are we really discussing the AI era’s perils properly?

Hägar the Horrible
19 min read · Nov 23, 2017

--

In the last six months or so I have been partly reading, partly studying and mostly skimming a couple of hundred recent medium.com stories and other articles in different publications. My focal point in those readings was today’s emerging technologies such as AI, ML and robotics. I came to the understanding that the authors are broadly divided into two positions, excluding a very interesting yet truly minor group of people. This story is my reaction to the two mainstream groups of authors.

Position One: New developments, especially AI/ML, are full of perils for the near future of humankind.

Position Two: OK, these developments may be perilous. Yet… i) The feared devastating consequences will not materialise as soon as the other party expects, because the technologies in question are still in their infancy. ii) They also have their upsides; they come with immense advantages for the individual and for society.

On a closer look at the authors of these miscellaneous texts, it is not difficult to observe that the divide runs chiefly along the watershed of professions and interest groups. Most social scientists and many technologists are very concerned; they belong to Position One. With the proud exception of Mr. Musk, of course, most people who have a direct financial interest in the development and unlimited marketability of the named technologies are enthusiastic members of Position Two.

I find the empirical arguments of both parties obviously correct, and so, one might expect, I should consider myself a (thoroughly confused) supporter of both groups. Yet I think that their partly shared ground of argumentation is heavily flawed. Indeed, in my eyes, this is a debate between two oblique lines of correct observations and opinions. Self-evidently, such a thing is possible if and only if their respective domains of discussion are incomplete; in other words, if neither party covers all the relevant facts and explains them all.

Yet today, as surely all participants in the debate would agree, defining a correct domain for a proper and productive discussion of these issues is of the utmost importance for the future of humankind. The very first reason, the original urge, behind this story is to propose a meta-discussion: one on the conditions of a complete domain of facts significant for a thorough understanding of the emerging digital technologies and their impact on our lives, both individual and social. My method for achieving this goal will be one of criticism.

You may have correctly guessed that this story will be, to a great extent, a boring analytical piece of writing. The good news for the patient reader is that I also plan to deal with the previously mentioned “very interesting yet minor group of people”, as well as to propose a “toy model” of my own which, if suitably refined and extended, could make a contribution, if only a modest one, to the solution of the problem at hand. Yet this will be the subject of a second story.

Obviously, it is in the power of no single person to define and solve so huge, so extended and so intricate a problem, one that is also of a global nature. Thus, this story and its planned sequel can at most claim to address a couple of overlooked questions significant to the conundrum we are so unexpectedly confronted with today. I must therefore humbly ask my prospective reader for a certain moderation when they feel it necessary to pass a harsh judgement on the many errors, omissions and other troubles of the text. Assuming that a large number of medium.com readers are at least as concerned about and as perplexed by the issue as I am, I had to put fast delivery before academic perfection.

Why I support the opinions of both positions.

Position One.

Simply because of the already observable, and seemingly everlasting, perils these technologies are serving us. A few of these are:

  • the immense concentration of information gathered in all realms of human life (communication, finance, health, politics, …) in very few hands;
  • the shocking asymmetry of knowledge injected by this process into society;
  • the omnipresent blinding, deafening and stupefying effect which stems from the suggestive power derived from even the most minuscule recorded details of our lives;
  • the unjust profit made by selling our data back to us and to our suppliers, and the huge accumulation of wealth produced by these means;
  • the ever increasing number of lost jobs, first in the realm of lesser human activities and functions and, in the very near future, also in the realm of higher ones;
  • the ever deepening decay of human morals and behaviour on the digital surfaces spanned by these technologies (e.g. the mindless exhibitionism on social media);
  • the State’s principal power to peek into an almost complete set of citizens’ information, and thus the construction of an ever expanding and deepening surveillance state with nowhere to escape, a condition which will only be furthered by the advent of the Internet of Things;
  • the rise and growing dominance of political cliques with an oppressive and militarist agenda in the state apparatus;
  • the already ensuing digital skirmishes between major and not-so-major political powers, and their limitless lust to escalate those skirmishes into full-fledged battles; …

I am not going to delve into the discussion of individual issues, because every reader of this story, I am sure, can prepare a much more accomplished list of the immediate catastrophic developments to be served up by the merger of the global digital network with AI and ML tools, etc. Still, there is a small issue. I have noticed that my theoretical frame is somewhat different from that of most medium.com authors. For example, the development of a digital superintelligence is not so central to my way of thinking, for reasons which will be explained later, although such a catastrophic possibility is of course not easy to dismiss. So let me pause here to discuss this theoretical frame concerning the Position One concerns. I will need the terminology it introduces in later parts of the story.

Like many other people, I find it quite indisputable that the transformation we are experiencing today is not only comparable to the neolithic revolution and to the expansion of capitalism, but much more pervasive and much more drastic than either.

My first reason for thinking so is the pace with which this transformation proceeds. Our transition to settlement, agriculture and husbandry is a 12,000-year-old story which is not completely finished even today. Capitalism is a comparatively new story; then again, it took at least 300 years to cover the world. We had ample time to deal heuristically with the consequences of these two revolutions. Yet the results are there: a considerable narrowing of the food spectrum and an almost complete lack of physical activity for the majority of human beings; the loss of 80% of forests in habitable areas; rapid climate change; unbridgeable differences in the quality of life of different social classes and geographic regions; two World Wars killing about 100 million people between them; the atomic bomb, the H-bomb, nuclear disasters, … (There are still over 10,000 atomic bombs in the world, each with a couple of hundred times the destructive force of the ones which completely ruined two medium-sized cities in Japan. And the insane North Korean leader is not just cracking jokes!) But at the end of the day, the neolithic revolution and capitalism are two historic evils we have at least partly learned to deal with, solely because we had time to experiment with their peculiarities by trial and error and to develop methods of some effect to protect ourselves from their destructive properties.

Yet… observe that the Internet and the GSM network reached three quarters of the human population in less than two decades. Each and every bullet on my list of evils, and arguably those of the Position One members, has been introduced and/or has matured just in the last decade or two. Hence, one would be more than justified in asking what new evils will be introduced and nurtured in the next five or ten years by the notorious surge of capital that has flowed, in the last two or three years, into research on and applications of technologies like AI, ML and blockchain. Just like the Position One members, I ask myself how much these technologies will accelerate the global evil, when everything is already happening way too fast and the gadgets in our inexperienced hands are way too dodgy to experiment with.

My second reason concerns the new and notorious qualities this transformation process puts on the global agenda. Take the fierce competition between human memory and the human mind on the one hand and two small subsets thereof, namely artificial memory and artificial intelligence, on the other. In the context of this competition, job losses are nothing but the tip of a giant iceberg. The State’s ever expanding proactive surveillance apparatus, Facebook’s ever deepening manipulations, Google’s meddling with our most intimate data: all these much-complained-about phenomena are likewise nothing but the tips of other nearby icebergs. In the depths of those icebergs very probably lie a redefinition and severe reduction of humans’ confidence in their own memory and intellect; a solipsistic mindset of the lonely individual, nurtured by personally customised addictive games and similar entertainment tools deliberately designed for that purpose; a thoroughly reactionary and repressive reconstruction of the relationship between a now omniscient State and its all-ignorant citizens; a new caste society with its data priests and handout pariahs; …

My third concern, also vehemently stressed by many Position One authors, is that the immense quantity of data which has begun to be collected in the last ten or twenty years will never be deleted, whatever the information aggregators promise. Nor will a technical catastrophe destroy all its copies. We will all die, and so will our children and grandchildren; yet our data will survive us. It will continue to grow in an ever accelerating manner and inflict further evil on the coming generations. Bad weeds grow tall! Thus each human being will live within the tight boundaries, immense pressure and dark shadow of the accumulated information of generations past: an information prison proper.

And last but not least, a very fundamental characteristic of AI/ML is of great concern; namely, that we do not really know why and how it functions and malfunctions. The more functions of production and administration we entrust to this dark box of a technology, the more we will live in worrying uncertainty. There are very vivid explanations of this characteristic on medium.com; I refer concerned readers to those articles.

Until now, I have touched on no other issues than the classic Position One concerns. Here, I am definitely with them.

Position Two.

Position Two members are no naive nineteenth-century technology fans. Most of them are aware of the Position One arguments and, possibly depending on the degree of their financial or professional involvement in the mentioned technologies, they find those arguments and the resulting fears justified. Yet their question is: why don’t Position One authors and speakers accept that things also have a sunny side? Why don’t they see the tremendous possibilities opening up with, e.g., the advent of AI/ML? These emerging technologies, in connection with our global network, could surely optimise the operation of the many gadgets in our daily use, the heating equipment in our homes and offices, our vehicles, etc., so that we could very soon observe an incredible reduction in material and energy waste, pollution, and so on. Our habitat could be so fine-tuned that the Earth could turn into some kind of Garden of Eden. These technologies could also optimise the way people work, in that they could oversee production and service procedures and minimise the required human involvement. Furthermore, they could free humankind from mindless, tiring jobs, from driving or carrying parcels to non-critical medical diagnoses or small surgical operations. (Here, some Position Two theoreticians even hope for a “Second Industrial Revolution” to materialise fully very soon.) The tremendous rise in productivity could reduce the cost and prices of many products and services vital to our lives (food, shelter, education, medical services) so substantially that the lower social strata could, for the first time in history, enjoy the meaning and pride of being human. We could all be elevated from faceless slaves to honourable Roman citizens at once.

I am in total agreement with these opinions, and I even want to add the observation that the antidote to some very prominent Position One fears, especially those related to the over-powered position of the State and the big companies, could be found exactly in those hated technologies. AI/ML in combination with cryptography and distributed peer-to-peer networks could even turn the tables against the State, Google, Facebook and other similar agents of evil. They could give us common people a chance to live, buy and sell, organise and vote without anybody peeping into our affairs and without any cause for concern about corruption. An extremely libertarian Roman citizenship indeed, in which each person would be in a position to shape their digital persona according to a personally defined level of anonymity.

My summary of the Position Two arguments ends here. Not only do I accept the Position Two hopes as justified observations and opinions, I also defend the view that the feasibility of the mentioned technologies is an almost achieved target, especially if we consider the immense pace of scientific, technological and financial developments in those areas.

Why do I accept neither party’s position as such?

Opinions are one thing, a position is another. Positions are complex sets of interconnected mental and behavioural properties. They may include sentiments, mannerisms, general paradigms, personal opinions, ideologies, and many more things in an endless array of n-ary relations. In a position, everything is situated under the umbrella of a sense of significance which tries to streamline the totality of that position’s contents. This sense of significance is mostly a hidden factor, sometimes hidden even from the holder of the position, and very frequently from others. Positions are always postures to be demonstrated to others. They are there to solve the common problems of a smaller or bigger community (or to let those problems be, which also requires considerable effort). Last but not least, positions can only be understood by observing their complementary sets, i.e. what they ignore. This is indicative because whatever they ignore, they certainly ignore deliberately and with a very high degree of absoluteness; and, most of the time, this is where their pathology is to be detected and where a productive analysis of a position should begin.

The rest of this section will focus on the respective pathologies of Position One and Position Two, in reverse order. Yet, before the actual criticism, I have to make a second short stop to cover how I see the general context in which our puzzle is embedded.

A couple of notes on the general context of our problematics.

Seventeen years into the twenty-first century, we live in a world where capital and the State blossom in the truest sense of the word. Capital is abundant (so abundant, indeed, that this condition may constitute its biggest problem) and can move everywhere with historically unparalleled ease and speed. Older states have recovered from the popular-democratic attacks of the nineteenth and early twentieth centuries, and newer ones never seriously experienced any. Today, states operate entirely as serves their respective purposes, whatever those may be. In every country, parliaments and other organs of public representation, where they exist at all, are slowly being degraded to public relations offices of the State. This is especially true of the last five decades. Militarism is at its zenith. No hope for an overarching international institution is on the horizon for humankind.

Mathematics, shining like an eternal and ever brighter arctic sun since at least the eighteenth century, with its never ending and unusually productive discussions about its own foundations, has not only made computers possible but also laid down a very detailed blueprint for large and very important software domains like encryption and distributed computing. Natural science, the equally bright antarctic sun, first with its thorough grasp of electromagnetic phenomena in the second half of the nineteenth century and then with its huge “quantum leap” into subatomic phenomena at the turn of the twentieth century, changed everything: we can say that science brought the world truly together and gave meaning to the notion of global simultaneity for the first time in human history by making telephone, radio and TV technologies possible. And by supplying mathematics’ foundational discussions with a solid material domain in the form of computer hardware, science externalised and automated parts of the human mind for the first time. This eternal couple, hand in hand, made the world of today as we know it. There is no foreseeable end to this happy marriage.

It is little more than a truism that all technological innovations are legitimate children of this couple. Another truism is that their foster parents are capitalism and the State. It is capitalism and the State which have, in the truest sense of the word, sponsored all the technological inventions we know today and helped them disperse globally. Without these benevolent foster parents we would have neither the radio nor the Internet. Yet we often forget to observe the strict conditionality of this benevolence: innovations in mathematics and science are vehemently tasked with effecting either a provable rise in the process of capital accumulation or a rise in the State’s military power and internal social dominance. Of course, not all mathematical or scientific progress has a readily apparent technological use, and among those advances that do, probably only a minuscule portion has the potential for those desired effects. These are innovations known only to a couple of specialists. This state of affairs, this strict conditionality of capital’s and the State’s support, though simple and well known, will be exploited repeatedly: first in my criticism of the two positions, and secondly to shape the “model” I will propose in the following story.

A very simple observation, yet one neither clearly stated nor sufficiently refined by either position in this debate, is the following: our principal source of concern is the marriage of AI/ML with the global network and its vast and ever growing data, not the AI/ML technologies as such. Indeed, nobody would be too concerned about these highly abstract mathematical instruments if they were applied only to, say, astrophysical data. Furthermore, the problem was already there, and already significant enough, before these instruments became more central and fashionable; a quick glance at the Position One concerns in the previous section will convince one of this obvious fact. Thus, an appropriate definition of the domain of discussion must separate the problems originating from the emergence of the global network as we know it from the exponentiation of these problems by the power of AI/ML technologies.

Position Two.

Let us begin with the Position Two arguments, which in essence amount to a certain optimism that the feared technological innovations have their sunny sides, and examine them in perspective, that is, in the context I have sketched above in a very informal fashion.

As said before, Position One and Position Two arguments form two opinion sets which tackle different questions and hence do not cover the material of the other party. This is a symmetric case of incompleteness, yet one with an asymmetric distribution of blame.

Because… considering the severity of the Position One arguments and the bleak picture they paint, Position Two, by ignoring those arguments (or dismissing them lightly as too rushed and promptly raising the sunny-side argument), reduces its own arguments principally to products of a funny yet quite hypocritical wishful thinking. The same is not the case for the other party.

When Position One people talk about their fear of, for example, a very near future of total state surveillance, in which each and every second of each and every person will be recorded along its uncountably many relevant and irrelevant dimensions, it does not constitute an appropriate answer to say, “And you know, this has its sunny side too! Because it will help the nearby hospital to send an ambulance immediately to wherever you are, in case you have a heart attack.” This kind of argumentation is not even tangent to the point under discussion and is hence irrelevant. And this kind of attitude is nothing but a very poor cover for one’s strategic or tactical alignment with the principle of a surveillance state. I am afraid my example is not an over-simplified one. At its very core, Position Two is exactly as theoretically weak and ethically treacherous as my example demonstrates.

Position Two arguments have another very strange common internal property: they are cheerfully abstract. Almost never do they cross the line dividing a weed-induced fantasy of a bright future from the realm of concrete questions beginning with who, what and why. Even in the case of rare innovations, say in AI, which probably cannot be a point of concern for Position One either, you almost never find a solid Position Two narrative of how their much praised innovations will serve the common cause of humanity. For example, we all know that a good deal of AI is already present in radiological equipment such as CT and MRI scanners, and that further refinements in that direction will bring more comfort and speed to the treatment of lung cancer. That’s great. Yet will this refinement also pay the bills of lower-class patients, who are statistically much more prone to lung cancer than those with higher incomes? How will it serve “humanity”? No answer. The only thing we know for sure is that better radiological equipment has a huge market potential and will serve to further capital accumulation, irrespective of by whom and for whom it is used.

The truth is that there is no such thing as a technology which is beneficial to the totality of humankind in all possible social models. (Beware, the opposite proposition may not be true; take the example of the H-bomb.) Let me repeat that, in our now truly global world, every method, innovation and technology is destined to serve humanity and the world only as a by-product, and only within the confines of accelerated capital accumulation or the extended power of the State. The Position Two membership, mostly comprised of savvy technologists, investors and/or technology bosses, is of course very well acquainted with this simple fact. And of course they already have fixed answers to my series of questions above. The market for their products consists almost solely of very big buyers. These are chiefly companies in need of new AI/ML tools to churn out relevant and further marketable information products from huge chunks of human data collected, to a great extent illegally, from the global network. Or their buyer is directly a state which will, again illegally, use those tools to further consolidate its already unacceptably expanded and one-sided power over society and to undermine enemy states. Yes, there is neither a considerable small-scale market for AI/ML nor any individual buyers. That is why most Position Two members are in fierce competition to sell their products to the NSA, the CIA and the secret services of any oppressive state which has a demand for them, or at best to Google and Facebook. Assuming, of course, that they are not already funded by one of these dark players.

In my opinion, all this reduces the Position Two arguments to cheap, unsolicited advertorials for some technology company or other interest group, and renders them theoretically utterly worthless.

Position One.

As said before, Position One is a tad more enigmatic to me. Of course I share all their fears, and more. I cannot even imagine a person who wouldn’t, provided they are aware, to even a minimal degree, of what is going on in today’s world. You need nothing more than having watched the TV news about all those leaks to convince yourself at least to take their fears seriously, not to mention Channel 4’s Black Mirror series.

No, my problem with the defenders of this position is something else. My point boils down to this single question: how can they satisfy themselves with this endless fear-spreading business of theirs and not try to find a solution? Indeed, many Position One members are very influential scientists and technologists. They see what is going on in the industry from very close range; are in possession of all the technical and theoretical instruments to analyse it; live in close contact with the main players, so that they can confer with some of the most brilliant minds… How come they cannot imagine any kind of effective measures to propose to the public? It is quite unintelligible to me that they cannot come together with other clean forces to concoct a plan to help us out of this helpless situation. And if they have tried but could not come up with a positive proposal, how can they go on doing what they are doing today? Really, how can they live in this Götterdämmerung atmosphere? Is it their irresistible curiosity about Jörmungandr that binds them to this industry?

The only explanation I can come up with is that they too are victims of a limitless belief in the omnipotence of that notorious pair called capital and the State. Taking this belief and its absoluteness as given, I speculate that the standard Position One inference goes like this: “Anything that can be done against the evil can be done by means of capital and/or the State. It is impossible to move capital against the misuse of the new technologies, because its commercial agenda dictates exactly the opposite, whether under the pressure of competition or this or that. Then the only agent we can force to take action against the evil is the State, in the form of laws and law enforcement. Yet states are severely tied to capital, as we all know. Then we have to go the indirect way, which leads from public awareness to party programmes and then to parliamentary action. Thus our very task is to awaken the public consciousness.”

If this really is the explanation of the inner workings of Position One minds, then one cannot but conclude that they must be utterly naive. This antiquated belief in bourgeois democracy (the construction of state policies through social awareness of common affairs and civil action in compliance with that awareness) was doomed to effectual oblivion long before the Position One authors were even born. In such important matters, especially if the State’s own interests are at stake, no public propaganda will be helpful in shaping the state’s policies. Unlimited surveillance of the global network with all possible means has to be, and is, the rigid agenda of states small and big. The awe-inspiring vastness of the open data produced in this network demands extraordinary intelligence power, and this can only be obtained with AI/ML and related methods. Capital and the State are together in this business, and nobody can even question their unholy crusade against the remnants of civil rights and the right of ownership of personal information. Sorry folks, nice try but…

Conclusion.

Here we have two perspectives: one of them trumpeting into our ears that doomsday is imminent; the other shyly accepting doomsday’s possibility, only to forget it again very soon and simultaneously begin telling fairy tales. In my opinion, they agree on a rigid self-censorship when it comes to asking and answering very simple questions that are otherwise implicit in, and central to, their respective positions. Position Two authors put their hope in the ominous couple called capital and the State to foster their technological children, but bluntly avoid mentioning the obvious catastrophic consequences of an upbringing in such a criminal home. Position One authors dedicate their criticism exactly to those consequences, but they avoid the question of what their, and our, responsibilities should be in fostering those children once they are born.

Now, for Position Two people this may be explained by a common professional myopia, the logic of stupid alliances of short-term interests, the blind dialectics of business competition: that sort of all-too-human folly which killed the sons, raped the daughters and destroyed the cities of very rich and influential people in the two World Wars. They know exactly what they are doing; yet they are bound to do it. But when it comes to Position One people, I do not have a clear-cut answer. I really cannot come up with a psychological explanation of their staunch ignorance, because many of them seem too seriously concerned to satisfy themselves with a dixi et salvavi animam meam stance. So be it! At the end of the day we are all doomed to live with unanswered questions.

This closes my reflections on the two readily observable mainstream positions with respect to the emerging AI/ML technologies. In a sequel to this story I plan to propose and detail an alternative model for tackling those very same questions from my own and a couple of congenial authors’ point of view; from a third perspective, you might say.

To be continued.
