Is AI Antithetical to Democracy?
I’m writing this in response to a question posed to me, one I think is well worth pondering:
Is AI antithetical to Democracy?
The questor (the one who posed the question) had his own theories, involving artificial intelligence and left/right politics, which also made me realize that the moment you start throwing around movements and -isms, intelligent discourse becomes almost impossible. To the extent that I want to consider the underlying implications, it’s worth making a few fairly big assumptions, ones that I typically make when looking at the impact of technology on society.
#1. Political Labels Are Misleading
A lot of arguments I see about why the world is going to hell in a hand-basket come down to taking a label, say Communism, Socialism or Capitalism, then using that as a cryptic shorthand for often wildly different political systems. In reality, every political system on the planet carries some aspects of each of these philosophies, to the extent that what would be called wildly socialist in one country might be considered heavily capitalistic in another.
Thus, in order to better understand the real implications of automation and AI (another of my assumptions being that these two are increasingly becoming indistinguishable from one another), you have to get down to what the true characteristics of a society really are.
In a pure democratic society, every political decision would be made by every individual. With populations measuring in the millions, pure democratic societies are not yet technologically feasible. There are a few things on the horizon that might tip the balance from infeasible to feasible (blockchain being one worth watching closely), but such a system would also require a personal investment in governing that I think most people in our society would find intrusive.
This means that the kind of government that has emerged almost universally is a representative democracy. Here, voters delegate both the law-making (legislative) and decision-making (executive) functions to specific representatives, and usually also provide a third forum for judicial functions, which typically rules upon the legality of given decisions based upon precedent and conformance to a legal schema (a constitution). The challenge then typically revolves around who has the right to elect a given representative.
There are a few countries in which the supreme executive has unilateral authority, and where decisions are made either by that individual or by a controlled oligarchy (typically either a military authority or an oligarchic cabal). In effect, there are fewer checks upon power, and these nations are thus known as authoritarian. Typically in an authoritarian state, those in power cannot be removed except through a complete collapse of their authority, in essence because the people who otherwise hold the levers of subordinate power no longer support that ruling individual or cabal.
Russia was a strongly authoritarian state that collapsed in the early twentieth century over the abuses of the Tsars. For a brief period of time, Russia had a multi-party system (including the Mensheviks), until it was taken over by another authoritarian party, the Bolsheviks, led by Vladimir Lenin. Lenin used the ideas of Communism to espouse a belief in the power of the worker, but simultaneously believed in a strong central authority for determining who actually ended up making the decisions (which ran counter to much of what Karl Marx and Friedrich Engels had written). Soviet Communism was democratic at the lower levels, but once Lenin had established himself (and especially after Stalin came to power) it devolved very quickly into a pure authoritarian state.
The US is not a pure democracy either. It emerged from colonies (due in no small part to the agitation of Benjamin Franklin) that were both too large for Britain to completely govern and (unlike India two centuries later) largely occupied by former British citizens and their descendants who saw themselves as British. The colonists railed specifically against corporations, because it was corporations (the British East India Company, the Hudson's Bay Company, the companies that had formed colonies such as Pennsylvania, Massachusetts and Maryland) that were the source of much of their woes. The same attitude held towards banks, to the extent that formal banks weren’t chartered until the Adams administration.
A careful reading of the writings from before and shortly after the American War of Independence reveals a philosophical belief system that would have the most “Founding Fathers”-focused conservative today positively fuming at how “Socialist” they actually were.
#2. Established Power Fears the Power of Voting
American capitalism wouldn’t really get its footing until the emergence of the railroads and the telegraph in the mid-19th century, and corporatism in the modern sense wouldn’t really take hold until the 1920s, with the militarization of the American populace after World War I (though it was well established in England by the 1870s).
Corporatism is a form of oligarchy, similar to but not the same as aristocracy. Aristocracies emerged in the 11th century in England on the basis that an invading conqueror (William of Normandy, as an example) would assign geographic regions to his primary political supporters (not just warriors, but also financiers) in exchange for their continued support. Power generally stayed within the aristocracy, with the king's primary powers being the ability to raise or remove an aristocrat (as well as the ability to control marriages).
The power of aristocrats came both from their own tax base and from their ability to form coalitions, but it wasn’t really until the early 13th century (in 1215, with the signing of the Magna Carta) that the Barons of England were able to establish a formal royal council that had power of the purse over King John. This body would eventually become the House of Lords. The House of Commons emerged later that century with Montfort’s Parliament (in 1265), which brought together burgher leaders, senior ecclesiastical members and guild leaders to represent the various cities and boroughs, though it should be noted that the Commons was considered a largely powerless advisory body until nearly the seventeenth century. Significantly, that was also when innovations in automation were laying the seeds for the second industrial revolution (the first arguably being the rise of agriculture and the subsequent evolution of cities, trading and seafaring thousands of years ago).
Even given that, the franchise for the House of Commons was originally determined primarily by social status; it was expanded in stages (most notably in 1832 and 1867) and only opened to all British men in 1918, with women following in two steps (1918 for women over thirty, 1928 for women over 21). This mirrors the US (1870 for black men, 1920 for women, and 1971 for all citizens over 18 years of age). This means that universal suffrage is historically a very recent concept, and a case can be made that automation has in general made it possible for such suffrage (electing one’s representatives) to take place at scale.
It should be noted that voting provides an alternative to other forms of influence within a government, and it is one of the few that gives power to an individual who has not otherwise achieved power through inherited wealth, luck, family background or granted authority. As such, in any democracy, it is distrusted by the established oligarchy, whether that oligarchy derives from divine mandate, ancestry or wealth.
This can be seen today, where the conservative alliance consists of those with second-generation or older wealth, religious fundamentalists, agrarians, and those in security- or authority-oriented careers. The “liberal” alliance is mostly those who are defined as being not conservative, though there’s an amorphous sea of independent voters who are either part of smaller coalitions or in effect centrists. Note that this doesn’t necessarily equate to Republicans vs. Democrats, though the association is stronger today than it has been for a long time.
Thus, if by democracy one means a representational democracy with potential universal suffrage and typically a bicameral or multicameral distribution of checks and balances, then this gives a better handle to answer the question of how AI affects democracy.
#3. Current Voting Systems Are Broken
When discussing artificial intelligence (AI), it is worth understanding that AI isn’t a specific technology. Rather, it is the use of a combination of computational power, databases and networks that all work together to perform specific tasks. This is what is usually referred to as Special or Specialized AI (SAIs), and today that kind of specialized AI is becoming pervasive.
Beyond SAIs you have the broader concept of a general artificial intelligence (GAI). SAIs are not self-aware, and they typically have very specific domains within which they are designed to manage computation. There are hints of GAIs in several research efforts, but they are at best very rudimentary. Ironically, the challenge with a GAI is the requirement that it be able to adapt to any conditions. Autonomous vehicle systems are perhaps the closest to GAIs, but it will still be at least a decade before GAIs become readily available.
However, even SAIs have (and will continue to have) a profound impact upon the relationship between automation and our society, and especially upon democracy. One of the key aspects of such systems is that they are tools, and as tools they can be used by all sides concerned.
One of the central problems that plagues voting systems is the difficulty of ensuring that, once a vote is cast for a given candidate, it doesn’t get changed to a different candidate somewhere in the electronic trail. This is one area where blockchain (not necessarily an AI technology, but critical nonetheless) can be used to log a vote in a confirmable way. This solution isn’t perfect: it is still possible to rewrite enough blocks to alter a recorded vote, but doing so is extremely expensive and detectable through other audit methods. A well-designed system can also maintain anonymity in the voting process while still ensuring that a person has not voted more than once in any given race.
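To make the idea concrete, here is a minimal sketch of a hash-chained vote ledger. It is illustrative only, not a real voting system: the class names, the blinded-token scheme, and the single-machine design are all my assumptions. What it shows is the two properties discussed above: tampering with any recorded vote breaks every later hash, and a one-way hash of the voter's credential blocks double voting without storing the voter's identity.

```python
import hashlib
import json

def make_block(prev_hash, vote):
    """Bundle a vote with the previous block's hash and compute its own."""
    block = {"prev": prev_hash, "vote": vote}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

class VoteLedger:
    """Toy append-only ledger (hypothetical; a real system is distributed)."""

    def __init__(self):
        self.chain = [make_block("0" * 64, None)]  # genesis block
        self.seen_tokens = set()  # blinded voter tokens, one per race

    def cast(self, voter_credential, race, candidate):
        # Anonymity: only a one-way hash of credential+race is stored,
        # so the ledger can block duplicates without naming the voter.
        token = hashlib.sha256(f"{voter_credential}:{race}".encode()).hexdigest()
        if token in self.seen_tokens:
            raise ValueError("duplicate vote in this race")
        self.seen_tokens.add(token)
        block = make_block(self.chain[-1]["hash"],
                           {"race": race, "candidate": candidate, "token": token})
        self.chain.append(block)
        return block

    def verify(self):
        """Recompute every hash; any altered vote invalidates the chain."""
        for prev, cur in zip(self.chain, self.chain[1:]):
            body = {k: v for k, v in cur.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if cur["prev"] != prev["hash"] or recomputed != cur["hash"]:
                return False
        return True
```

Note that because the token hashes credential and race together, the same voter can legitimately vote in several races, but only once in each.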
This latter point also addresses a frequent shibboleth of the oligarchy: that people are voting more than once for the same candidate, or are voting in races they shouldn’t be (it also provides a test to keep programmed AIs from voting electronically). Coupled with open sourcing of the voting software and hardware, this could in fact create a truly democratic solution.
At the moment, however, the voting software and hardware are both proprietary, meaning that there is no significant way of checking the logic to prevent votes from being manipulated internally. The hardware is concentrated primarily in the hands of three manufacturers, all of whose owners have contributed heavily to conservative politics. Attempts to inspect such systems have generally been stymied by lawsuits, and a growing divergence between exit polls and election results raises the possibility that these systems are not in fact reporting accurate totals.
#4. The Biggest Threat to Democracy is Disinformation
In a similar arena, any democracy only functions when all of the participants have accurate information. In this respect, an election is very much like a market: ideally everyone should have the same information going into the voting booth. In practice, there is a growing gap between what is portrayed in the media (on all sides, though it is stronger on the conservative side) and the situation on the ground.
There are several challenges facing anyone looking for accurate information. For starters, provenance (the origin of a piece of information) is very seldom tracked. This means that there is no way of telling whether a given graphic, story or video is in fact real, or fabricated and presented as real. Again, this is an area where AIs could be used to detect characteristics of news media that indicate it was fabricated. This is at the very edge of what is doable now, and such veracity filters will almost certainly become more commonplace over time.
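The simplest building block of provenance tracking is cryptographic authentication: a publisher registers a tag for the original bytes, and anyone can later check whether a copy matches. The sketch below uses an HMAC with a shared secret purely for illustration; the function names and key are invented, and a real provenance system would use public-key signatures so that verifiers never need the publisher's secret.

```python
import hmac
import hashlib

# Hypothetical shared signing key; in practice this would be an
# asymmetric key pair held by the news outlet.
SECRET = b"publisher-signing-key"

def sign_media(content: bytes) -> str:
    """Publisher-side: produce a provenance tag for the original content."""
    return hmac.new(SECRET, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, tag: str) -> bool:
    """Consumer-side: does this content match what was registered?
    compare_digest avoids timing side channels on the comparison."""
    return hmac.compare_digest(sign_media(content), tag)
```

Even a single altered byte in a story, image, or video file changes the tag, so edited-and-recirculated copies fail verification.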
Additionally, there are few penalties attached to producing such fake news, though in the wake of the 2016 election that is beginning to change. GDPR, a set of privacy initiatives for the European Union, will likely strengthen penalties associated with the fabrication of unsourced news, and the recent implosion of Cambridge Analytica, a “data science” company that seemed to specialize in creating fake news campaigns, may serve as a further deterrent.
On the other hand, many of those fake ads were themselves created through SAIs, which would use data analytics to identify and target social media users with content specifically intended to manipulate people’s emotions for or against a given candidate or referendum. Such “bots”, specialized AIs, are increasingly difficult to tell from human beings online, especially given the comparatively compressed nature of such media.
This means that SAIs are both part of the problem and part of the solution. The struggle between allied and enemy bots is just one more manifestation of the struggle between receiving valid content and receiving spam.
There is another aspect of AIs that needs to be examined with respect to their role in democracy. We are in the process of drawing new lines in the battle between privacy and transparency. A society cannot function when privacy no longer exists. At the same time, too much privacy can serve to hide potentially harmful actions on the part of both governments and individuals. An AI has the potential to infer behavior by analyzing often seemingly unconnected data points, though not necessarily with 100% certainty.
The same traits that make for a psychopathic killer can also show up in the personality profile of a future CEO. A system that uses such personality metrics might assign equal probability to either, yet arresting a person for their potential for evil fails to take into account their potential for good. Such systems are already in use today for profiling, often built from dubious training data and unreasonable modeling assumptions that create an invisible bias. Again, this is a case where transparency in systems makes sense, because it allows people to examine both algorithms and training data to determine whether implicit biases have been written into code.
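One simple audit that transparency makes possible is the classic “80% rule” for disparate impact: compare each group's selection rate under the system's decisions, and flag the model when the worst-treated group's rate falls below roughly 80% of the best-treated group's. The sketch below is a minimal illustration; the data shape and group labels are invented, and real audits use richer fairness metrics.

```python
def selection_rates(decisions):
    """decisions: list of (group, accepted) pairs -> acceptance rate per group."""
    totals, accepted = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        accepted[group] = accepted.get(group, 0) + (1 if ok else 0)
    return {g: accepted[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group's selection rate to the highest's.
    A value below ~0.8 is the traditional red flag for hidden bias."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())
```

The point is not the arithmetic, which is trivial, but that it can only be run at all when the decisions (or the model and its training data) are open to inspection.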
We are already using SAIs like this to determine whether people should receive loans or be hired for specific positions. These systems are often proprietary because they are seen as giving a company a competitive edge, but because these SAIs now make decisions that human beings once made, transparency needs to take a higher priority.
Perhaps this should in fact be the rule of thumb with regard to transparency: does the lack of transparency of a given piece of software or training set have the potential to impact the rights and civil liberties of a given person? If it does, then that code should be transparent: open source, inspectable, and reproducible.
So, in at least the short and intermediate term, it is not some ominous Cylon-like AI that we have to be concerned with; it is the placement of the rights of corporations and entrenched interest groups over the rights of individuals, whether in the political or the economic sphere. The AI is simply a tool that facilitates one or the other, and it is a human responsibility, perhaps THE human responsibility, to ensure that AIs, like all learning children, do good, not ill, when they grow up.
I have deliberately kept the focus of this article on Specialized AIs. Once we get to General AIs, things change, though perhaps not as much as you may think. Watch this space for a link to the next in this series.
Kurt Cagle is a writer, futurist and software architect living in Issaquah, Washington, just outside of Seattle. He writes the Cagle Report on LinkedIn, and is a contributing editor to Future Sin on Medium.com.