Tech companies need to go slower and stop breaking things

Silicon Valley should heed the lessons of past environmental disasters as it faces the consequences of a “move fast and break things” mentality

Thaddeus Miller
Jun 25, 2019
Cuyahoga River fire, Cleveland, OH, 1952

Co-authored with Andrew Maynard.

“Move fast and break things” has become the unofficial motto for Silicon Valley and the tech industry. Yet as new technologies become increasingly integrated into every part of our society — our medical treatment, our homes, our jobs, our cities, our education, our cars — breaking “things” results in breaking lives.

In recent years we’ve seen multiple examples of how such a cavalier attitude is bad for business and bad for society: Facebook and Cambridge Analytica, Theranos, the first human fatality from a self-driving car, and, more recently, the dissolution of Google’s AI Ethics Board.

Yet we need not shrug our shoulders in despair nor call for a moratorium on new innovations. Instead, there are lessons from history about how government, industry, and individuals can grapple with the effects of technology.

A half-century ago, on June 22, 1969, the Cuyahoga River caught fire in Cleveland, OH. Lined with steel mills and industrial factories, the Cuyahoga was one of the most polluted rivers in the United States.

It was the thirteenth such fire on the river since the 1860s. The 1969 fire, however, was covered by Time magazine and captured national attention. After almost a decade of increasing public concern about water and air quality, the Cuyahoga River fire proved to be a watershed moment in US environmental history. And in 1970, the foundations were laid for governmental, industry, and individual responsibility for environmental quality and public health with the passing of the National Environmental Policy Act, the strengthening of the Clean Air Act, the establishment of the Environmental Protection Agency, and the celebration of the first Earth Day.

While we still collectively struggle to address major environmental issues like climate change, there are now kayakers on the Cuyahoga and fish and wildlife have returned. What once seemed like an inevitable part of business and job growth — pollution — is now viewed as something to control, reduce and eliminate. Today, businesses, governments, and communities recognize a collective responsibility — codified in regulations, laws, and in everyday actions and social norms — to consider their impacts on the environment.

Today’s growing fears involve pollution of a different type — the unwanted or unintended byproducts of technology in our lives. Issues like loss of privacy, social injustice, and ethically compromised artificial intelligence mirror the litany of environmental and public health crises of the 1960s and 1970s. Despite these growing concerns, though, there has not been the same societal change that the Cuyahoga River fire ushered in 50 years ago. The impacts of new technologies are often more diffuse. They don’t always lend themselves to a Time magazine spread. And they’re all too easily explained away as aberrations in an otherwise responsible system.

Yet, the risks presented by the byproducts of technology are very real.

In 2016, the Facebook/Cambridge Analytica scandal alerted the world to how data breaches can be used to manipulate people and undermine democratic rule. Three years on, national governments, employers and other organizations are becoming increasingly adept at using artificial intelligence to predict and control our every move, feeding off the personal data we’re constantly shedding.

While the effects of data breaches and the gradual retreat of privacy can seem abstract and hidden, the impact of AI can be more visceral. In March 2018, an Uber self-driving car struck and killed a pedestrian, Elaine Herzberg, in Tempe, AZ. Arizona Governor Doug Ducey revoked Uber’s permission to operate self-driving cars in the state, while numerous other companies continued operations with no substantive changes in oversight.

More recently, the Food and Drug Administration put out an alert warning that implanted heart defibrillators supplied by the company Medtronic could be vulnerable to cyber-attacks. Another technology, another risk. Yet the common thread is increasingly ubiquitous technologies that have the power to impact our lives in potentially devastating ways, but are being developed with a shocking disregard for who gets harmed when they go wrong.

Growing public concern over the negative impacts of technology on our lives has not been enough to galvanize effective action from government or industry. Yet it is clear that tech companies can ill afford to wait for their equivalent of a Cuyahoga River fire before getting their act together. Just as government, industry, and individuals began to rethink our responsibility to the environment 50 years ago, we now need a radical rethink of what it means to innovate responsibly.

First, tech companies need to take the unintended consequences of their products seriously, including job displacement and biased algorithms. They need to diversify their workforce with respect to race, gender, and expertise. They need to invest in responsible innovation practices. And they need to proactively invest in an education pipeline that produces innovators and leaders with the skills to think through the social implications of the products they create.

Next, federal, state and local governments need to build their capacities for thinking strategically about technology innovation and its potential impacts. And this includes engaging extensively with the tech sector. As the questions posed to Facebook CEO Mark Zuckerberg in an April 2018 hearing demonstrated, lawmakers were woefully unprepared to analyze the causes and consequences of the company’s data privacy practices and ensuing scandals. Lawmakers, and the publics they serve, desperately need to update their ability to analyze complex technological issues.

The wave of environmental action that began in 1970 was possible because of broad-based public support and individual action. It has been barely a dozen years since the first iPhone was released, and in that time powerful technologies like AI, gene editing, and ubiquitous surveillance have entered our lives at breakneck speed. We are still in the early phases of figuring out what our lives look like and mean when they are saturated with such technologies. For society to navigate this increasingly complex technological landscape, individuals and communities must be more mindful of the role such technologies play in their personal and professional lives. And they need to send a message to developers and politicians about what they care about.

These steps will not be easy. Calling into question the benefits of technology in a culture enamored with the newest model or update doesn’t always win friends. But as we’ve seen from growing awareness of environmental impacts, they will pay dividends in the long run.

And perhaps most importantly, they will ensure our collective future is built on a culture of responsibility, rather than a trail of avoidable destruction.

Written by Thaddeus Miller

Associate Professor, School for the Future of Innovation in Society, and Co-Director, Center for Smart Cities, Arizona State University.

EDGE OF INNOVATION

Exploring the cutting edge of emerging technologies and responsible innovation
