Artificial Personhood is the Root Cause Why A.I. is Dangerous to Society

Carlos E. Perez
Published in Intuition Machine
Mar 21, 2018

When I began writing my book “The Deep Learning A.I. Playbook”, I had given very little thought to the dangers of Artificial Intelligence (A.I.). I was fortunate to form enough of an understanding to write a chapter on Human Compatible A.I. (a term Stuart Russell uses to frame the problem) as a bookend for the book. I have since begun a deeper exploration of A.I. and the human condition; this post continues a series on that subject.

I have now come to the realization that human compatible A.I. is a problem inextricably intertwined with human civilization. It cannot be solved, because present human civilization isn’t structured in a manner aligned with the needs of humanity. You cannot achieve human-beneficial A.I. without drastically remaking human civilization. It is difficult enough to make the world act in concert against the threat of global warming; it will be exponentially more difficult to make the world re-invent itself in the face of A.I. progress.

We have all been conditioned to believe that civilization has been designed to improve the human condition. Steven Pinker’s book “Enlightenment Now” describes many examples of how life has improved for the majority of the population. Some notable observations: “you’re much less likely to die on the job than in the past”, “time spent doing laundry fell from 11.5 hours a week in 1920 to an hour and a half in 2014”, and “war is illegal”. In summary, the quality of life is getting better for everyone. The general understanding is that the advancement of technology and social equality has led to the betterment of the human condition as a whole.

Yet there remain billions of people who have difficulty accessing basic needs like clean running water. Global inequality hasn’t been solved, and we are already heading into a new era. The looming problem of advanced cognitive technology is its possible side-effect: the destruction (through obsolescence) of many jobs. Can society fix this problem when there are no jobs left for humans to do? I will argue here that it cannot be fixed until we address the inherent structural weaknesses of our current economic and social system.

Charlie Stross, in “Dude you broke the future”, argues that artificial intelligence already exists in the form of the modern corporation. By the late 19th century, governments had granted the rights of personhood to corporations. He observes that the legal environment of today is “tailored for the convenience of corporate persons, rather than human persons.” Corporations have been granted “Artificial Personhood”. Corporations are also the “paper-clip maximizers” of the present:

Making money is an instrumental goal — it’s as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be. Corporations generally pursue their instrumental goals — notably maximizing revenue — as a side-effect of the pursuit of their overt goal.

Our capitalist system is structured to align with the needs of corporations, not the needs of humans. You cannot solve the A.I. problem when the primary owners of A.I. are the corporations. The actions of A.I. will always be aligned with the needs of the corporation, and thus towards the pursuit of profits with a disregard for the plight of humans. Don’t be surprised when humans are laid off and out of work after a corporation decides that they have become ‘redundant’. Stockholders of public companies become ecstatic when cost-cutting measures are executed. There is little empathy for the people whose skills have become obsolete and who are now thrown onto the streets to eke out a new kind of livelihood (without any new skills).

Scott Alexander, in his brilliant essay “Meditations on Moloch”, discusses the inevitable failure of collective coordination. He illustrates it with several examples, including the Prisoner’s Dilemma. In every scenario, a bad actor defects from cooperation and ruins it for everyone else in the group. Alexander argues that the groups that survive will be the most selfish kinds; groups whose strategy is aligned with the common good are likely to go extinct. He writes that the optimal solution (i.e. the ‘god’s eye view’) is simple enough to understand yet impossible to implement. Civilization cannot escape this problem, and it is the root cause of civilization’s unfair wealth distribution.

A basic principle unites all of the multipolar traps above. In some competition optimizing for X, the opportunity arises to throw some other value under the bus for improved X. Those who take it prosper. Those who don’t take it die out. Eventually, everyone’s relative status is about the same as before, but everyone’s absolute status is worse than before.

Alexander identifies four boundary conditions that demonstrate a race to the bottom. He enumerates them as physical limitations, excess resources, utility maximization, and coordination. Alexander is unsure if technology actually provides a solution that can prevent the destruction of human values.
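To make this race-to-the-bottom reasoning concrete, here is a deliberately tiny simulation. It is my own sketch with invented numbers, not anything from Alexander’s essay: firms compete for market share, and “defecting” means sacrificing a shared value (say, worker welfare) for a competitive edge. One bad actor is enough; defection spreads until the shared value is gone.

```python
# Toy multipolar-trap simulation (illustrative only; my own sketch, not Alexander's).
# Firms compete via a simple replicator step. Defecting (sacrificing a shared value
# such as worker welfare) confers a fitness edge, so defection spreads through the
# population of firms until almost nothing of the shared value is left.

import random

N_FIRMS = 8
ROUNDS = 60
EDGE = 0.2        # fitness bonus for throwing the shared value under the bus
MUTATION = 0.05   # chance per round that a cooperating firm copies the defectors

random.seed(0)
shares = [1.0 / N_FIRMS] * N_FIRMS
defects = [False] * N_FIRMS
defects[0] = True  # a single bad actor starts the race to the bottom

for _ in range(ROUNDS):
    # A cooperator may notice the defectors' advantage and copy them.
    for i in range(N_FIRMS):
        if not defects[i] and random.random() < MUTATION:
            defects[i] = True

    # Replicator step: firms with higher fitness gain market share.
    fitness = [1.0 + (EDGE if d else 0.0) for d in defects]
    weighted = [s * f for s, f in zip(shares, fitness)]
    total = sum(weighted)
    shares = [w / total for w in weighted]

# How much of the market still honours the shared value at the end?
preserved = sum(s for s, d in zip(shares, defects) if not d)
print("final market shares:", [round(s, 3) for s in shares])
print("share of the market still honouring the shared value:", round(preserved, 3))
```

Nothing in the sketch requires malice; the replicator step alone makes cooperation a losing strategy, which is what makes the trap “multipolar” rather than the fault of any single actor.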

Today corporations are in the driver’s seat, and to make the situation even more intractable, each corporation is driven by self-preservation, so the common good becomes a lesser priority. A vivid example of a corporation that employed a complex strategy of acquisition and expansion is the US pharmacy chain CVS. Through an affiliate company, it pursued a strategy that bankrupted small private pharmacies so that it could swoop in and acquire them at a bargain. Companies are motivated by survival and growth, and growing by acquisition is an all too common strategy; CEOs prefer it simply because it is much easier to execute than actually having to compete in a free market. Free markets work for the benefit of many only if there is sufficient diversity to encourage beneficial competition. Like democracy, free markets are inefficient due to high redundancy; however, it is this redundancy that makes them anti-fragile (i.e. robust).

The only solution is for the majority of corporations to develop an enlightened self-interest. We see some of this happening in the software space, where many companies contribute to common open source development. We also see it in corporations collectively choosing to support greener energy sources or becoming more socially aware. Perhaps our only salvation is that individual CEOs and corporate boards become enlightened enough to steer us away from our own destruction.

Our governments are presently beholden to the lobbying of corporations. Many corporations’ products are intrinsically detrimental to society or the environment. History is littered with bad actors that deceptively buried the harmful effects of tobacco, leaded gasoline, sugar water, trans fats, fracking, offshore drilling, greenhouse gases, assault rifles, etc. We can’t expect a change when profit is the driving objective function. A corporation, like any biological entity, acts to survive and prosper; corporations die if they aren’t profitable.

The newspaper business is an instructive example. Many newspapers prioritize the creation of quality journalism, conventionally balanced against the need to collect revenue. Many newspapers have gone out of business in the internet economy. It is shameful that Facebook and Google, companies that make most of their income on information dissemination, do very little to subsidize quality journalism. What remain are public broadcasters such as the BBC and NPR that have to be supported by governments. Society needs to support quality journalism, and companies in the business of distributing information should take the initiative to contribute to its financial viability. We have today a system that values eyeballs over truth. This is lethal to any democracy. Companies that make money selling eyeballs must also be responsible for promoting quality journalism. George Orwell wrote:

Journalism is printing what someone else does not want printed: everything else is public relations.

News cannot become ‘public relations’ in the pursuit of profit, and selling eyeballs in the pursuit of higher engagement is ‘public relations’ by another name. We all need to take companies like Facebook and Google to task and demand that they pay their fair share towards promoting journalism. They cannot continue to exploit their users without contributing back to the welfare of those users.

The current situation with Facebook and Cambridge Analytica is an example of a corporation that has prioritized growth over ethics. One can argue that Facebook could not have achieved its success without its loose handling of personal information: had Facebook had strict security access controls by default from the beginning, it likely would not have grown so quickly. Facebook’s objective function is user engagement, and we have seen the disastrous unintended consequences of this throughout many democracies of the world.

Startups, to survive, need to prioritize among many objectives, and growth is obviously the highest priority. In fact, every technology company in the public stock markets must emphasize growth, and growth there is measured quarter by quarter rather than over the long term. This short-term mindset has had a disastrous effect on the working class. Unfortunately, A.I. progress today is driven by these same corporations (not by government research), so A.I. development will continue to focus on human behavior prediction and, potentially, human behavior manipulation. All of the biggest internet companies have revenue models driven by human behavior prediction. This sets the stage for society’s biggest problem.

There are many kinds of A.I. that can be developed. We can classify these into at least three kinds: computational, autonomous and social.

Source: https://arxiv.org/abs/1705.11190

The last of these has the highest priority among the biggest internet companies. Social A.I. is the most dangerous kind: it is the kind that uses its own objectives to drive the human individual into alignment with itself. It does not align with the inherent needs of the individual, since its objective is its own profit (not the individual’s). One can argue that a company that delivers a product is in alignment with the needs of its customers. The subtle reality is that a company’s products are designed to convince customers that they are in alignment with their needs. The best kind of marketing is the kind that convinces people to buy things they don’t need; in fact, the modern economy is driven mostly by the consumption of things people don’t need. Yuval Noah Harari writes in “Sapiens”:

One of history’s few iron laws is that luxuries tend to become necessities and to spawn new obligations.

Humans are driven by corporations to consume the unnecessary, so that paying for it becomes the new necessity.
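The misalignment being described here can be stated very plainly in code. The sketch below is my own toy illustration with invented numbers, not anyone’s actual ranking system: a feed ranker that scores candidate items purely by predicted engagement (the corporation’s objective) will happily surface items the user would never choose for their own wellbeing, and the two orderings disagree whenever the objectives diverge.

```python
# A deliberately simplified sketch of objective misalignment (my own illustration,
# with made-up numbers; not any real platform's ranking system). The platform ranks
# items by predicted engagement and never consults the user's own wellbeing.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # what the platform optimizes (clicks, watch time)
    user_wellbeing: float        # what the user would optimize, if asked

candidates = [
    Item("outrage-bait thread",           predicted_engagement=0.92, user_wellbeing=-0.6),
    Item("friend's holiday photos",       predicted_engagement=0.55, user_wellbeing=0.4),
    Item("long-form investigative piece", predicted_engagement=0.30, user_wellbeing=0.8),
]

def platform_rank(items):
    """Rank by the platform's objective: engagement only."""
    return sorted(items, key=lambda it: it.predicted_engagement, reverse=True)

def user_aligned_rank(items):
    """Rank by the user's objective: wellbeing only."""
    return sorted(items, key=lambda it: it.user_wellbeing, reverse=True)

print("platform's feed :", [it.title for it in platform_rank(candidates)])
print("user's own feed :", [it.title for it in user_aligned_rank(candidates)])
# The two orderings disagree whenever engagement and wellbeing pull apart, which
# is the misalignment between the corporation's objective and the individual's.
```

Nothing stops the platform from modelling the wellbeing signal; it simply has no incentive to rank by it, which is the point about whose objective the system actually serves.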

The proponents of Bitcoin and other cryptocurrencies have created their own rebellion against the present monetary system. They see corruption as rooted in human behavior, and so they promote the idea that, to re-establish fairness, automation should govern the issuance of money. I agree that corruption arises from human behavior; corporations act the way they do because they are led by corrupt leaders. However, machine governance is not the proper solution, because corruption is still introduced into the system. Just witness the corrupt behavior of the Bitcoin miners who hold a monopoly on the issuance of Bitcoin.

To take it even a step further, the country of Malta is exploring a proposal to grant legal personality to blockchain-based applications (i.e. Decentralized Autonomous Organizations):

The autonomy and permanence demolishes the principle of The Nearest Person. The only reasonable way to promote the proliferation of “good” (virtual) entity-citizens is to make provision for their legal personality, and use game theoretic incentives for their creators to want to bestow their creations with “good” and lawful behaviours.

There is a profound synergy between A.I. and blockchain technology that needs to be explored. However, as with the legal personhood of the past, we are creating automation that we may simply have no control over in the future. Corporations are already subject to multiple laws and regulations, yet this has not prevented them from acting in bad faith. The laws have to have ‘artificial altruism’ baked into them; free-market controls are insufficient to ensure that A.I. aligns with human needs.

It is indeed interesting that Isaac Asimov’s Three Laws of Robotics were inspired by the vows of marriage. For a refresher, here are the vows of marriage:

I promise to be true to you in good times and in bad, in sickness and in health. I will love you and honour you all the days of my life.

Well, ask yourself: should this promise be made by you to an artificial person, or should it be the other way around? This is precisely the problem: with corporations, it is the other way around. The needs of the one are secondary to the needs of the corporation (the many). Let that sink in: your natural instinct for the good of the community is being hacked to comply with the goals of artificial persons.

A truer measure of human freedom is the extent of one’s control over one’s own time. Yet in the modern economy, even the gainfully and well employed find themselves in control of less and less of their own time. Time poverty is at an all-time high. Artificial intelligence in the form of corporations, and the future forms of automation to come, are in fact the cages that society has invented to enslave itself.

To dodge the dangers of A.I. we have to address the existing structural flaws of the society that we have invented. This begins by recognizing that artificial personhood, in the form of the corporation today and the A.I.-driven corporation of tomorrow, is the root cause of the misalignment of society’s goals with human goals. We need to re-invent society around a “good attractor”.

Further Reading

Algorithmic Entities by Lynn LoPucki

Four aspects of corporate law make the human race vulnerable to the threat of algorithmic entities. (1) Algorithms can lawfully have exclusive control of a large majority of the entity forms in most countries. (2) Entities can change regulatory regimes quickly and easily through migration. (3) Governments lack the ability to determine who controls the entities, and so cannot determine which have non-human controllers. (4) Corporate charter competition, combined with ease of entity migration, makes it virtually impossible for any government to regulate algorithmic control of entities.
