How Google is Remaking Itself as a “Machine Learning First” Company
Steven Levy

Regarding ML and Dystopia: I think there’s a certain (low) level of cognitive dissonance that people, particularly engineers, can tolerate when it comes to the consequences of their work. Most engineers I know (myself included) want a better world: one with less material suffering for the less fortunate, less stressful interaction with machines, and less threat of a small human error ruining it all (e.g. giving Trump the keys to America’s nuclear arsenal). We’re all techno-utopians, at the end of the day.

But I think that this kind of cognitive dissonance is the source of the ironically ill-informed “ill-informed Cassandras” point — it’s too hard for us to admit to ourselves that our livelihood is inching us closer to a world that we ourselves don’t want. There are very real and very unanswered questions about what will happen when AI-powered medicine, AI-powered transportation, AI-powered finance, etc. hit the market. How will ML address pressing concerns about climate change, overpopulation, socioeconomic stratification, and threats to personal freedom? Moreover, how will ML ensure any real measure of justice, liberty, or quality of life for ordinary people when the systems themselves are owned by self-interested capitalists? It seems to me that ML is destined to become a tool for social control, surveillance, and economic stratification.

Any good Marxist will tell you that as long as the means of production (i.e. the algorithms we’re designing for our employers) are privately controlled, their socioeconomic power will first and foremost benefit those at the top of the already narrow pyramid of wealth.
