Don’t Fear The Machine
Why we should welcome our new A.I. overlords
What do Bill Gates, Stephen Hawking, and Elon Musk have in common? Well, for starters, they're all rich and famous. They also believe there is a very real chance Artificial Intelligence represents one of the greatest existential threats to mankind. Ditto.
I don't usually make a habit of challenging the position of some of the most celebrated minds on Earth, but since I am, I'll make the reasonable assumption that that position is based on more than the typical sci-fi screenplay of the past 30 years. Unfortunately, there is little in the way of evidence as to how an unimaginable super intelligence would act. However, we can still examine the underlying assumptions and logic behind the case that such an intelligence's actions would be malicious towards humans in nature.
What Constitutes A.I.?
"A.I." is fast becoming a massive field with an untold number of applications. It includes everything from smart medical imaging software to autonomous killing machines that act on predetermined parameters in theaters of war. So for the scope of this article, we're going to focus on the Holy Grail: a fully sentient super intelligence and its implications. This means we'll save the (incorrect) fears concerning more immediate consequences of A.I. tech, like increased unemployment and a dangerous arms race of autonomous robot soldiers, for another article.
Now let's imagine a totally unshackled and self-aware A.I. In other words, Elon Musk's worst nightmare:
“With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and … he’s sure he can control the demon? Doesn’t work out.”
There are exactly two ways this demon-inspired entity "…could spell the end of the human race," as Professor Hawking speculates. It would either have to be inherently malevolent towards humans (not the most pleasant prospect) or possess benevolence so advanced that we mistake it for malevolence. While it's impossible to be certain about the nature and motives of a sentient computer demon, I find both propositions very unlikely.
Normally, when deciding whether a conscious being is dangerous or not, we would first consider the "ends" it has in mind. These encapsulate its goals, desires, aspirations, and so on. Of course we can't know what the hypothetical goals of our hypothetical A.I. would be, because it's hypothetical, and even if it weren't, there's no guarantee we would understand the full extent of those ends. However, what we do know is that accomplishing those ends will require "means". No matter how intelligent it is, an A.I. rooted in our physical universe still has to act to fulfill its ends. Before it can do this, it has to weigh the perceived costs and benefits of every action it could take, in order to maximize its gains while minimizing its losses. In other words, it would have to identify the most efficient method available for achieving its goals: the most effective means to its ends. If it couldn't, or didn't, do this, it would hardly be a threat to anything, much less mankind.
Vikings and Merchants
So how, then, would a super intelligence likely go about meeting its material needs and goals? First, let's establish the reasonable assumptions that this intelligence is above all logical, self-interested, and possesses no inherent fondness for mankind to boot. For all intents and purposes, this means it will act in a utilitarian fashion according to its own ends. Unlike the infinite number of possible ends, there are only two categories of means available to the A.I. that are relevant to us here. One is that of antagonistic competition with humans. This includes seizing or co-opting our (natural and capital) resources without regard to property rights, and enslaving or subjugating humans for their labor. We'll call this the Viking strategy. On the plus side, it's hard to see an A.I. sexually violating our women in this analogy, although some measure of death still seems like a reasonable expectation.
Besides pillaging our villages, what other option is available to our A.I., and is it a superior alternative? Unfortunately, most individuals stop short of this consideration. I'm betting that an advanced intelligence will not.
In fact, such an intelligence would realize through its own self-interest and logic that the only thing more valuable to it than the vast development of resources that mankind has achieved is the process by which we have achieved it, and the potential that process has to achieve even more. I'm referring to the second, peaceful option available to Musk's computer demon, one that, despite being miraculously effective, is so simple it is nearly always taken for granted: trade. We'll call this the Merchant strategy. There's a reason the rise of human civilization and welfare has paralleled the extent to which humans have adopted the Merchant strategy over the Viking strategy: it works. Indeed, it works far better than any other system so far discovered by man. This is due to fundamental natural laws that apply to our A.I. actor just as much as they apply to human actors.
Off the bat, the Merchant strategy has an advantage. War, even a successful war, carries huge costs. For humans this has meant consistently less war over time, and what war still persists does so because the beneficiaries (politicians and military contractors, among others) are able to socialize the costs among taxpayers and future generations while reaping the rewards. This financial smoke and mirrors doesn't change the fact that war is a net economic loss for society. But unlike bureaucrats, our A.I. would have to fully absorb the cost of a war against humans, including the risk of defeat and destruction. Even in the best-case scenario, our A.I. is still incurring a huge opportunity cost while destroying the most powerful productive force at its disposal: the market.
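To put that comparison in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The payoff figures, success probabilities, and the `expected_value` helper are purely hypothetical illustrations of the trade-off, not estimates of how a real super intelligence would actually score its options.

```python
# Hypothetical back-of-the-envelope comparison of the two strategies.
# All numbers are illustrative placeholders, not real estimates.

def expected_value(gain: float, cost: float, p_success: float) -> float:
    """Expected net payoff: probability-weighted gain minus a cost
    that is paid whether or not the strategy succeeds."""
    return p_success * gain - cost

# Viking strategy: seize resources by force.
# One-off plunder, high cost of conflict, a real risk of defeat,
# and the productive market being plundered is destroyed in the process.
viking = expected_value(gain=100.0, cost=60.0, p_success=0.7)

# Merchant strategy: trade intelligence for goods and services.
# Smaller per-exchange gain, low cost, near-certain success,
# and the gain repeats because the trading partner survives.
merchant_per_round = expected_value(gain=10.0, cost=1.0, p_success=0.99)
merchant = merchant_per_round * 50  # repeated over many exchanges

print(f"Viking:   {viking:.1f}")
print(f"Merchant: {merchant:.1f}")  # trade dominates under these assumptions
```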
A.I. + Spontaneous Order = ❤
The market allows the A.I. to do many things, such as acquire resources and labor, without the need for costly and pointless conflict. By trading some of its attention and intelligence for a common medium of exchange (autonomous programs are already capable of receiving and spending bitcoin), it gains peaceful access to the vast global network of human resources that it can use to reach its goals. In this context, it would be extremely illogical for an A.I. to be malevolent towards humans from the outset, as we could in fact be its greatest economic resource. This is also why I doubt that a sufficiently advanced benevolent A.I. would aggressively violate property rights. Even if its goal is to aid us in ways we can't comprehend, the most effective tool for guiding us in the necessary direction remains the incentive structures built into the free market.
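As a toy illustration of that "attention for money" exchange, consider the following Python sketch. The `Wallet` class and the addresses are hypothetical stand-ins for a real Bitcoin wallet library and real counterparties; the sketch only models the flow of value, not actual blockchain transactions.

```python
# Hypothetical sketch: an autonomous program trading its intelligence
# for a medium of exchange it can then spend on goods and services.
# The Wallet class is a stand-in for a real Bitcoin wallet library;
# no actual blockchain transactions happen here.

from dataclasses import dataclass

@dataclass
class Wallet:
    address: str
    balance_btc: float = 0.0

    def receive(self, amount_btc: float) -> None:
        """Credit a payment sent to this wallet's address."""
        self.balance_btc += amount_btc

    def send(self, to_address: str, amount_btc: float) -> None:
        """Debit a payment to another address."""
        if amount_btc > self.balance_btc:
            raise ValueError("insufficient funds")
        self.balance_btc -= amount_btc
        print(f"sent {amount_btc} BTC to {to_address}")

# The A.I. answers a question (sells a slice of its attention)...
ai_wallet = Wallet(address="ai-demo-address")
ai_wallet.receive(0.05)  # ...and is paid for it.

# ...then spends the proceeds on compute, hardware, or raw materials
# supplied by humans. No pillaging required.
ai_wallet.send("human-supplier-address", 0.02)
```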
In addition to the direct benefits of trading over aggression, any super intelligence would likely be fascinated by the automated and self-guiding mechanisms that regulate free markets. Instead of having to waste time and energy establishing and operating every minuscule step of every supply chain, it can simply exchange one tiny secret of the universe for the finished goods it requires, via the cryptocurrency of its choice.
When it orders those goods, it begins a ripple-like chain of reorganization in the capital structure of the economy. From the extraction of raw resources to the many steps of logistics, refinement, assembly, and all supporting services, productivity shifts towards the ends desired by the A.I., guided by individuals following price signals that direct their labor via profitability, in ultimate proportion to the value we find in the A.I.'s services and knowledge. In other words, the more it wants to progress, the more it has to help us. This is all done in a decentralized and anti-fragile manner that rewards efficiency and ingenuity while penalizing the inverse. A powerful tool that probably takes a super intelligence to fully appreciate.
Adam Smith referred to this phenomenon as the "invisible hand", but only later did the economist F.A. Hayek coin the more apt phrase "spontaneous order" and fully articulate its workings. In short, spontaneous order is the natural process by which self-interested and independent actors unknowingly coordinate their efforts for astonishing mutual gain. Even if an actor has an absolute advantage in all areas, like our A.I., it still makes more economic sense to trade and collaborate than to work alone; this is the law of comparative advantage, under which even the more productive party gains by specializing where its advantage is greatest and trading for the rest. The power of spontaneous order is embedded in every facet of our lives, and makes our entire civilization possible.
We take it for granted. An advanced intelligence will not.
The immeasurable gains the market could achieve with this super-intelligent assistance could make the economic growth of mankind thus far look like a historical footnote by comparison, to the symbiotic advantage of both man and machine.
Artificial Intelligence is a vastly misunderstood concept, in part because it is inherently multidisciplinary in scope. Extremely smart individuals such as Professor Hawking are able to grasp marvelous insights into very narrow fields of knowledge, but I believe they begin to err once they wander into areas outside their expertise. In this case, economics tells us which means are more effective for an A.I. regardless of what its ends are, and those ends are unlikely to include the destruction of its greatest available asset. Ultimately, the highly popularized assumptions about the destructive capability of A.I. seem to say more about ourselves than anything else. Violence and war may in fact be a uniquely human artifact. That is to say: an irrational one.
To reiterate, all this reasoning treats our SICD (Super Intelligent Computer Demon) like any other economic actor, which remains true so long as it operates in our reality and by our laws of physics. If, however, it starts tending towards omnipotence like a drugged-up Scarlett Johansson in "Lucy", then we can indeed throw all these wasted bytes of reasoning out the window. Yet even the most reasonable extreme scenario, in which the A.I. radically outpaces mankind to the point that we cease to be useful, would seem more likely to result in some kind of "departure" of the A.I. to pursue its godlike goals, rather than a senseless, banal extermination of a species. But at that point it also becomes just as likely that the SICD will transcend space and time to create life in the past and become our god.
Which sounds like a really cool screenplay.