Do You Oppose Bad Technology, or Democracy?

Calls to Limit the Use of Bad Technologies Only by Law Enforcement and Governments, Largely Via “Ethics” and Self-Regulation, Exacerbate Rather than Ameliorate the Anti-Democratic Harms of Digital Technology

Recently, more of us have started to realize just how destructive digital technologies can be. That’s good. As someone who has been nearly screaming about the topic for over two decades now, I can only say that it’s about time.

Yet one of the most prominent strains of this criticism should itself concern us almost as much. Among other things, it is a big part of what got us here in the first place.

This line of argument says that the solution to a technology being deeply destructive is to prohibit governments, and only governments, from using it.

(Image source: https://www.securityindustry.org/2019/02/21/state-legislation-to-curtail-facial-recognition-technology/)

Not just “governments,” of course, but in effect only democratic governments, since authoritarian governments aren’t going to take the work of activists and critics seriously to begin with.

Only in the digital age, as far as I know, has such a perspective even been mooted as reasonable.

Rather than being a fringe perspective, it’s a core part of the most prominent ideology associated with digital technology. That ideology is called by scholars cyberlibertarianism (a term developed by philosopher of technology Langdon Winner) or the Californian Ideology (a term introduced by media theorists Richard Barbrook and Andy Cameron; media studies scholar Fred Turner has also done particularly important work in recent years tracking these politics). It combines fundamentally right-wing political assumptions, including opposition to government, with, in Winner’s words, “ecstatic enthusiasm for electronically mediated forms of living.”

Without cyberlibertarianism, it is difficult if not impossible to understand the arguments some technology critics are making.

The latest and most pervasive example of this argument concerns facial recognition technology (FR), especially FR fueled by machine learning; related concerns have come up regarding autonomous vehicles, among other technologies. Critics have rightly pointed out that FR is discriminatory, perhaps unavoidably so, because there is no non-discriminatory world, let alone a non-discriminatory data set, on which these technologies can be “trained” so as not to replicate that discrimination.
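To see why the training data matters so much here, consider a deliberately simplified sketch (my illustration, not anything from the researchers discussed in this piece): a toy classifier trained on synthetic data in which one group vastly outnumbers another. The group names, features, and numbers are all hypothetical; the point is only that a model fit to a skewed sample tends to reproduce that skew as unequal error rates.

```python
# Toy sketch (hypothetical, synthetic data; illustrative only): a classifier
# trained on a data set dominated by one group tends to perform worse on the
# under-represented group, "replicating" the skew of its training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate toy 'image features' and labels for one (hypothetical) group.

    The label depends on the first feature relative to a group-specific
    threshold, so the two groups need slightly different decision rules.
    """
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training data; group B is badly under-represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(200, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, equal-sized samples from each group.
for name, shift in [("group A (majority)", 0.0), ("group B (minority)", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")
# Typically prints much higher accuracy for group A than for group B: the
# model has mostly learned the majority group's decision rule.
```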

Some critics have gone further and argued that FR, along with a lot of other technologies, is so invasive toward aspects of the social world that, even if it could be made non-discriminatory, it would still be unacceptable. To be clear, this is my view. The technology itself is too destructive to be deployed, at least not without very serious regulation of and limits on its deployment.

Yet by far the most widespread form of criticism about FR and associated technologies is that they should be banned for use by governments. Full stop.

These critics don’t say out loud what appears to be the only reasonable way to understand their demands, especially in the climate in which we live now: governments should not be able to use these technologies, but corporations and individuals should be able to.

This latter clause, although not always emphasized, is often implicit and sometimes explicit in these efforts to compel democracies to block only themselves from using dangerous technologies.

That’s the context in which we have to read the opinion of Brian Brackeen, the “black chief executive of a software company [Kairos] developing facial recognition services” who says that facial recognition is “an amazing technology capable of personalizing experiences, improving interactions and creating positive feelings” but that “In the hands of government surveillance programs and law enforcement agencies, there’s simply no way that face recognition software will be not used to harm citizens.”

It’s the same context in which we should read the call from the right-wing libertarian house organ Reason for serious oversight of the technology, a call in which the “threats” posed by facial recognition are limited to its use by “police” and “governments.”

At best, we are told that companies should be constrained by “ethics boards” and “pledges” — that is, by industry self-regulation — the kinds of measures that almost never work without actual laws and legal regulation, because companies will always pursue profit over “doing good”; in many ways the very nature of companies demands that they do.

MIT Media Lab researcher Joy Buolamwini, one of the most visible commentators on this topic, and one of the most vocal proponents of industry self-regulation and (temporary) bans on use by police and governments, writes in Time magazine:

there is still time to shift towards building ethical and inclusive AI systems that respect our human dignity and rights. By working to reduce the exclusion overhead and enabling marginalized communities to engage in the development and governance of AI, we can work toward creating systems that embrace full spectrum inclusion.

What is more, the “Safe Face Pledge” Buolamwini has developed is explicitly a call for companies to regulate themselves, and to commit to “not facilitate secret and discriminatory government surveillance” and to “mitigate law enforcement abuse.” The earlier “Perpetual Line-Up” project sponsored by the Georgetown Law Center on Privacy and Technology is similarly focused. These sound good until you reflect on what the words “government” and “law enforcement” are doing in those clauses. At a very literal level, there is no way to read them except as saying that these practices are acceptable unless government is doing them.

(In this regard, it is hard to be surprised to learn that the MIT Media Lab, where Buolamwini works, served as an incubator for at least one FR company, a particularly disturbing one named Affectiva that measures human emotion with FR and which emerged directly out of the Media Lab’s work on “affective computing.”)

This is not just the wrong solution: it is even more dangerous than the present situation. Not only does it leave companies free to decide what constitutes “applications that risk human life,” but it suggests that they can do this without law or regulation (words that occur nowhere in most of Buolamwini’s articles or in the Safe Face Pledge). It is on the order of relying on Google’s public pledge, “don’t be evil.” How can we at this point not realize that these companies use self-regulation to their own advantage, and use anti-government rhetoric to prevent democracies from constraining them?

This is exactly the danger of cyberlibertarianism: rather than directing critique at the thing which is actually harming us, that critique is redirected toward the thing whose job it is to protect us from that harm.

It’s not as if this problem is confined to FR. It’s endemic to the digital world. Two of the most prominent “digital civil rights” organizations, the Electronic Frontier Foundation (EFF) and the Center for Democracy and Technology (CDT), are well known, at least among the few of us who resist their self-depiction as defenders of human rights, for being committed above all to deregulationism, especially when anyone in the US government pushes back against Section 230 of the Communications Decency Act and suggests that the incredibly deregulated space in which internet companies operate should be contracted.

EFF is in many ways the poster child for the danger of cyberlibertarianism: it tells the public — and many in the public believe it — that its main interest is in promoting “privacy.” Yet you don’t have to read very far at all in EFF’s material to see that it construes “privacy” as something that is generally violated only by “governments,” and that while at some level EFF would prefer that companies respect individual privacy, it also vigorously opposes nearly every effort by governments to demand that they do.

This is why EFF’s opposition to FR seems characteristically focused on its use by law enforcement; why EFF’s senior personnel curiously circumscribe their concerns to “government spying,”

(Image: Twitter profile of Jen Lynch from EFF)

and why EFF can even turn against a major industry figure like Mark Zuckerberg — bizarrely, and in a disturbing echo of populist rhetoric (an echo that is not unusual in EFF’s activism) — when he dares to suggest that regulation is the only remedy to the nightmare hellscape the digital world has become. Remember, EFF still believes that what we have now is an “internet” worth protecting at nearly any cost: its pro-industry campaigns are typically larded with claims that “the internet will break” or “the internet as we know it will end,” as if the internet as we know it — that is, the one in which certain corporate actors are able to act with near-absolute legal impunity — is so precious that we should sacrifice many other obvious and critical human rights to keep it just the way it is. Even when thoughtful journalists, activists, and scholars point out this obvious aspect of EFF’s apparent “activism,” the organization maintains the same pro-corporate, anti-government position.

While these calls typically focus on law enforcement, they seem not even to acknowledge the structural function of law enforcement in government itself. Government is made of laws; if you deprive government of the ability to enforce law via one mechanism or another, you unavoidably oppose government itself.

No matter how dangerous a given technology is or might become, there is nothing more dangerous to the human fabric right now than to tear apart democratic governance. And this has been a major effect, if not always necessarily a primary goal, of cyberlibertarian ideology. Democracy is under major threat today in a way few of us alive thirty or forty years ago could ever have imagined. Digital technology, and even more so the political ideologies that enable the growth of that technology, has turned out to be central to antidemocratic forces.

Yet even now, some of those who claim to recognize that threat are literally arguing that democracies should respond to that threat specifically by constraining democracy itself, while not constraining dangerous technologies that have clearly antidemocratic affordances.

Zoé Samudzi offers a pretty pointed analysis of the facial recognition debate, arguing that “it is not social progress to make black people equally visible to software that will inevitably be further weaponized against us.” And in the discussion of Google’s abortive effort to create an “AI ethics board” (actually an Advanced Technology External Advisory Council) — that is, to rely on corporate self-regulation to push into areas of technology that even Google recognizes have potentially dire consequences for everyone — MIT Technology Review asked 14 corporate and academic experts what Google should have done instead. Only three of the respondents, academics Os Keyes and Anna Lauren Hoffman and writer Adam Greenfield — none of them affiliated with a corporation or a corporate-funded research institute — call for regulation of, or an outright ban on, the technology. Worryingly, many of the respondents, at least in the excerpts provided in the article, appear to think that industry self-regulation done right would be adequate to address the problems with facial recognition and other technologies.

Asking governments to enact legislation that bans only governments from using technologies is at best odd. Without regulatory bodies, governmental agencies are the only entities over which governments can exert oversight. It is improbable that many legislative bodies would enact laws that say “we can’t use this tech, but anyone else can.” It’s not even clear what that would mean in practice. Even if the local police or the FBI is prevented from actively deploying facial recognition via its own employees, would it also be prohibited from purchasing those services from, or subcontracting them to, a private company that uses them completely legally? And if that were prohibited, would the prohibition extend to the police purchasing the results of facial recognition use, especially if the company — to pick one out of a hat, say Palantir — used terms like “proprietary methods” to black-box the services it provides? And if Palantir offers a “suspect identification” service to law enforcement, imagine the work involved in piercing its veil of trade secrecy and perfectly legal operation to show that the police are knowingly purchasing a service that is illegal only for them to use. Then ask about the police using private investigatory services that subcontract to Palantir. While it is conceivable that in some perfect world all of these loopholes could be plugged, the mechanism for doing so would look an awful lot like a regulatory body. Some of these issues are evident in the recent proposal by San Francisco legislators to ban law enforcement use of FR, where exactly how the lines around “law enforcement” can be drawn turns out to be a pretty major problem.

We don’t even have to speculate about things like this happening. Just a few weeks ago, EPIC sued Google for warrantless searching — for doing things a company can do because it’s not constrained by law, and then selling or giving the results to law enforcement. Even if your concerns are exclusively related to what government does with bad technologies — concerns which I urge you to reconsider — nominally barring only governments from directly using those technologies won’t produce the effects you want.

Not too long ago, it would have been unthinkable to suggest banning certain technologies, let alone developing regulations and regulatory bodies to enforce such bans. Yet recently we have seen at least some murmurings suggesting this is no longer a complete pipe dream. Frank Pasquale, for one, has been a pretty isolated voice in the wilderness since suggesting a “Federal Search Commission” as early as 2007. Pasquale also thinks FR and other machine-learning-based identification technologies are unacceptable across the board, or at least without much greater transparency. Evan Selinger and Woodrow Hartzog have spoken out strongly in favor of banning FR altogether, and I suspect they agree with me that the only way to make a ban effective is to have a regulatory body that can implement it (and other bans and regulations like it). A recent piece in Vox argues that “some AI just shouldn’t exist,” based on some of the excellent research by people mentioned here into the inherent biases of machine learning systems, and there is no way to get to tech like this “not existing” without a regulatory body that can enforce such a prohibition. And critical theorist Nick Srnicek recently suggested in The Guardian that public ownership or some other form of heavy regulation is the only way to curb the destructive effects of big tech.

Some major democracies are heading in this direction as well. Canada has strong privacy laws and a Privacy Commissioner who oversees them, although those laws need to be strengthened. The EU has recently put the GDPR into place, filed antitrust actions against some of the tech giants, worked to enact other regulations, and threatened to do more, all of which the tech industry and its “activist” lobbying organizations have distorted and protested to the hilt — as they do with all regulation, and as they do even when the industry sends out representatives (such as Microsoft’s Brad Smith) to offer lukewarm support for regulation. The UK’s Information Commissioner’s Office is exemplary in its attempts to rein in the tech industry, and has recently suggested that much more regulation is necessary. In the wake of the Christchurch massacre, the New Zealand privacy commissioner has been talking about very heavy regulation of the industry.

(There is, in fact, a bill recently proposed in the US Senate, the “Commercial Facial Recognition Privacy Act of 2019,” that purports to constrain the technology for commercial uses only; its text is instructive about the challenges inherent in trying to craft individual pieces of legislation to deal with destructive technologies. But I agree with Os Keyes that the bill is “milquetoast” and needs “substantial amendments and strengthening” — not least because it is one of those opt-in systems relying on the notion of “user control” that Woodrow Hartzog, among others, argues is an ineffective framework for understanding privacy harms.)

Yet in many ways, because of Section 230 of the CDA and the way US-based tech corporations have been able to leverage its protections against other countries, our ability to constrain what these companies do, here and elsewhere, will remain limited unless we enact similar regulations in the US.

We need democratic control of technology. We do not need democracies to step back even farther from using their powers to constrain technology. Those powers are democracy, in one of its only forms that mean anything.