A.I. isn’t the existential threat. The hubris of tech leaders is.
If private tech companies won’t accept responsibility for their influence, artificial intelligence is the least of our worries.
Shortly after the 2016 election, I had a conversation with a friend about the impact of social media on Trump’s win. I was wrestling with the role of what I had thought was an inherently neutral marketplace of ideas in promoting Trump, the larger issues of Russian propaganda and interference in the election, and the general problem of abuse.
“We can’t put too much faith in technology,” my friend said. “Technology is more like magic than science.”
He pointed me to an old C.S. Lewis quote from The Abolition of Man (emphasis mine):
There is something which unites magic and applied science [read: technology] while separating both from the “wisdom” of earlier ages. For the wise men of old, the cardinal problem had been how to conform the soul to reality, and the solution had been knowledge, self-discipline, and virtue. For magic and applied science alike the problem is how to subdue reality to the wishes of men; the solution is a technique, and both, in the practice of this technique, are ready to do things hitherto regarded as disgusting and impious…
This idea—that technology leaders are trying to bend reality to their wills—really stuck with me. And it keeps lingering as I read more about the negative effects of my favorite tech platforms on our country, and as I watch the so-far tepid response—or outright denial of the problem—from those in leadership.
I can’t help but see Lewis’s warning about impious “techniques” applied to our information flow and opinions; to curiosity as ritual sacrifice as we build pyramids to the gods of engagement. Tech leaders have long used language like connection, conversation, community, transparency. But we’re far from the Cluetrain Manifesto utopian dream of “markets as conversations” when those conversations are dictated by an algorithmic Invisible Hand, and funded by venture capitalists looking for growth at all costs.
And yet tech leaders want to be seen as the neutral arbiters of unbiased information—without the regulation of public utilities. They want to claim the same free-speech protections given to news media—without the editorial responsibilities. And all to sell our public information to advertisers to please their stockholders. The system is rigged to extract the maximum utility from users with the least accountability.
Setting aside the moral quandaries endemic to that model, the whole system breaks down when bad actors can hack it from the outside, or everyday users can abuse others with no consequences.
Some say that we have a choice: we can choose not to be a part of Facebook’s network, not to buy things from Amazon, not to get our news from Twitter, not to use Google’s services. But that’s a naive reading of how central these platforms have become in our everyday lives—even in the objects we keep in our homes. Because of their prevalence, we risk losing ground personally, socially, and professionally when we choose to ignore them.
The giant platforms are so dominant that it’s cynical to think we can really ignore them—or that these privately held companies bear no responsibility for their influence because of individual choice. (And as in the case of Equifax, sometimes we don’t even have that choice.)
Harvard political philosopher Michael Sandel recently told columnist Thomas Friedman, “A century ago, we found ways to rein in the unaccountable power associated with the Industrial Revolution. Today, we need to figure out how to rein in the unaccountable power associated with the digital revolution.”
New government regulation isn’t always the answer. But there are serious holes in the accountability of private tech companies whose products are now interwoven into our lives on a scale impossible to imagine just a few years ago. (Greg Greene has an excellent Twitter thread documenting the ongoing scope of the regulatory failures inherent in our current digital environment.)
Honestly, I have fears about whether our 18th-, 19th-, and 20th-century government institutions can even handle the scope of the problem. But it’s very clear that self-regulation isn’t working.
I’ve used Google, Facebook, and Twitter for more than 10 years now. But I would gladly shut all of the services down if it meant protecting our democratic institutions. If we don’t rein in the hubris of our tech leaders soon, don’t worry about the future threat of artificial intelligence. Worry about the present threat of more Donald Trumps.