How much longer can we hide behind the Law of Unintended Consequences?

© Jon-Marc Seimon, 2017

There was an op-ed piece in the Times this morning bemoaning the current state of Artificial Intelligence research. The author, Gary Marcus, contends that the scale required for the leaps he envisions is beyond the capacity of any of the labs currently conducting the research; he sees a multinational, CERN-scale initiative as holding the key to making the next quantum leap. His stirring final paragraph: “An international A.I. mission focused on teaching machines to read could genuinely change the world for the better — the more so if it made A.I. a public good, rather than the property of a privileged few.”

Why? Why is teaching machines to read something that could “genuinely change the world for the better”? Marcus doesn’t even attempt to address this; he either wants us to accept it as fact, or assumes his readership is savvy enough to already know the answer. Machines that can read will make the world a better place. Duh. Uh, really?

For the past year or so I’ve been harping to anyone who’ll listen about the law of unintended consequences. New technologies have been my area of concern, and the basic argument is that as each new technological intervention is unleashed on us, there’s a depressingly familiar cycle of disbelief, excitement, adoption, indispensability, and… completely unforeseen uses of that technology. Facebook >> massive platform for fake news. Twitter >> underground networks suddenly have a massive tool for communicating and disseminating their messages, even instructions, to operatives. YouTube >> suicide-bomber videos. Etcetera. I’m not even looking here at do-it-yourself physical technologies, like CRISPR, which can manipulate genes to change the physical world itself.

In recent months, my own shouted-from-the-rooftops warning about the law of unintended consequences has taken a buffeting from on-the-ground events. At this point we should know better. The plausible deniability that technologists insist upon to justify their efforts (the idea that technology is somehow detached from the world of right and wrong, that it can be used for either, and that it’s not even a technologist’s role to have an opinion on such things) is palpably bankrupt. WNYC’s Radiolab this week focused on a chilling set of technologies that enable one to create highly convincing renderings of people saying and doing things that they never said or did. The example given of a practical application was of Jennifer Aniston doing a commercial that then needs to be translated into Chinese. The old way of doing that would be to create a plausible-sounding voiceover, which would then not really sync with the video image. But now, just by typing in the appropriate phrases in Chinese, her own voice could actually SAY those phrases, and real-time video mapping of her face could render her speaking them with convincing facial synchronization, even down to the interior of her mouth (they make a big deal about this).

One of the interesting things about the Radiolab piece was that most of the examples the researchers were generating used politicians (Obama, Trump, Bush, Putin) to showcase their innovations. The cognitive leap that one might use these technologies for political manipulation was pre-provided by the inventors themselves! The journalist who created the piece, Simon Adler, gratifyingly nailed one of the technologists (Ira Kemelmacher-Shlizerman) to the point of making her squirm. Ultimately, she was woefully unable to address the question of whether she should even be doing this, in light of the ways that (she acknowledged) it could be manipulated. “I’m just a technologist” was her lame answer; somehow it’s up to someone else to decide what to do with whatever it is she foists upon this world.

No, actually, it isn’t someone else’s responsibility. The tired trope that “I just invent this stuff” doesn’t cut it. We are not dewy-eyed technological ingenues anymore (and why did we think we were in the first place? It’s hardly as if the current wave of tech is the first in human history). We are only too aware of the potential for abuse that each of these technologies contains; we can articulate it and imagine its implications; and, frankly, these abusive uses far eclipse the practical “positive” applications. We’ll be able to convincingly portray Jennifer Aniston to Chinese audiences as she shills some product? Seriously? Or maybe we can conjure Steve McQueen back for some new movies. Is this REALLY worth it?

So, Gary Marcus, I hear you on the challenge facing the AI industry. A nice big push is needed to bring about the golden future. Perhaps, though, you should spend more time describing why that future is, in fact, so golden. And also, perhaps, what the foreseeable downsides might be. We’re too smart, and too weary, to believe that it will all just work out. The time to learn this lesson, one we should have learned eons ago, is now.