Thinking about thinking about what to do about technology

A number of events in 2017 have caused more people to do what few people have done until now — ask whether mechanisms and media that billions of people have adopted enthusiastically might be more harmful than helpful. A few examples include recent revelations about the use of Facebook as a conduit for weaponized AI propaganda to tip elections and manipulate the public sphere, the widespread recognition that a visible and global fraction of the human population is walking and driving while looking at screens, the failure of Equifax to protect the vulnerable private information of people who never gave it permission to collect that information in the first place, and the use of Twitter to threaten nuclear war — by someone with the power to launch it. As someone who wrote in 1993 about thinking critically about (what are now known as) social media, I’m glad the conversation has suddenly become less lonely. I don’t pretend to have definitive answers, but perhaps I can help point the conversation in possibly more constructive directions by suggesting a few fundamental questions.

When I wrote The Virtual Community in 1992, although I was enthusiastic about the personal empowerment and democratization of culture that digital media and networks offered, I was concerned enough about pitfalls to entitle the last chapter of that book “Disinformocracy.”

After studying a few of those who have been thinking critically about technology (Lewis Mumford, Jacques Ellul, Ivan Illich, Langdon Winner, Neil Postman) and thinking about what to do with the tools that are both enchanting and endangering us, I come up against a few questions that might help frame conversations about not only what has gone wrong, but what can be done to make it right.

Questions about the threats of technology often come down to the nature of capitalism: The microtargeted advertising that makes Facebook a conduit for hyperpersonalized propaganda is precisely what makes Facebook such a valuable medium for paid advertising — which is what returns profit to Facebook’s stockholders. So what can be done about that? Some argue that because communism failed, there is no alternative remedy. Yet we are seeing potential alternatives beginning to emerge: while platform cooperativism and profit-from-purpose businesses are relatively new, successful cooperative corporations have existed for more than a century. What other models can be added to this list? Can any central principles or points of leverage be inductively derived by examining these alternatives?

Identifying a problem may or may not be a step toward solving it. The problem remains: what can be done? For example, Tristan Harris has been stimulating discussion about how a distraction economy has been engineered by designing attention-addicting affordances into apps that make money by selling our attention to advertisers. Setting aside for the moment what to do about the production of attention-sucking media, educating people to exercise control over their own attention is a well-studied and effective means of preparing them to defend themselves in the war over our minds — but it is hardly an integral part of either public or private education. Even knowing what to do doesn’t mean solutions, prophylactics, or countermeasures can be effectively implemented.

Is there a role for regulation? That sounds tricky. Advertising has long been based on using media to capture and hold attention. More fundamentally, computing pioneer Bill Joy warned way back in 2000 that self-reproducing robots, nanotechnology, and desktop bioengineering might, in twenty or so years (i.e., right about now), threaten human survival. His suggestion for what to do about it: relinquishment. I personally don’t think that any social or political agreement to relinquish research into these fields could be reached, and any such agreement would certainly be difficult to enforce on a global basis.

How can we foresee significantly dangerous side effects of emerging technologies? Certainly the invention of refrigeration at the beginning of the twentieth century was a huge boon to humankind, allowing the safe preservation of food on a global scale. But the designers of early twentieth-century refrigeration systems weren’t aware that decades later the chlorofluorocarbon refrigerants escaping into the atmosphere would threaten the ozone layer that protects life from some forms of destructive radiation. Foresight doesn’t seem to be much of a priority in the United States: the relatively minuscule budget of the Office of Technology Assessment, a nonpartisan think tank that convened stakeholders to try to look ahead at the possible effects of new technologies (e.g., OTA’s 1986 report Critical Connections: Communications for the Future), was zeroed out by the Gingrich Congress.

Then there’s the question of values. If there were a way to enforce a particular means of designing, manufacturing, deploying, or profiting from a technology, what ethical or moral grounds would such regulation rest on? We’re seeing radical disagreement regarding the morality of stem cell research and reproductive technologies; all sides of these arguments claim a moral priority. What criteria could be agreed upon to resolve such disagreements?

Not only are these not answers, the list of questions is incomplete. What should I add — and why? What might I change in the present draft — and why?