Think Regulating AI is Hard? Wait for Nanotech, Quantum Computing, and Synthetic Biology.

Why reconsidering our role as a species is critical to designing the future.

Published in Digital Diplomacy · 6 min read · Apr 19, 2018

A week after Mark Zuckerberg wrapped two days of Congressional testimony (embarrassing for everyone involved), and as debates about algorithmic accountability long confined to academia get their day in the sun, it may be useful to consider a simple truth about a complex problem: AI challenges our relationship with machines.

Long mythologized and fantasized as a realm separate from but dependent on human agency, technology is no longer something we can think of in binaries: man or machine, robot or person, either we control it or it controls us. We know that technology is with us, on us, and in us.

Inextricably linked to our smartphones, fully dependent on GPS navigation, and increasingly ceding decision-making authority to algorithms, we have crossed a line of no return, and it’s time to admit it. The infrastructure we rely upon to connect us to each other and to global flows of information is now inseparable from our actual bodies. We are all already cyborgs.

We need to regulate AI. Now what?

One of the few things Congress and Mark Zuckerberg could agree on last week was the need for increased regulation.

As Zuckerberg kept qualifying, “the details matter.” But the broad strokes matter too.

The analogies, metaphors, and legal precedents we operationalize now in problematizing the challenges of nascent technologies will shape the regulatory structures we build to cope with ongoing dynamics, which in turn will bias and constrain our approach to even more challenging technologies in the future. Current conversations about data ownership, security, and reality distortion are just the tip of the iceberg, and yet we are already out of our depth.

When we find ourselves without adequate metaphor, we have a moment of necessity-driven opportunity to closely examine our constructs at a fundamental level, asking what’s at stake in the policing of new systems even as they collide and intersect with existing ones in real time.

Move fast and break…everything.

While a call for examining deep structures and closely held cultural beliefs may sound like trepidation, time, itself a construct, adds an additional layer of complexity. The telescoping rate of technological change gives us less and less time to react to the very real consequences of these collisions, which we have so far had little capacity to accurately predict, anticipate, or imagine.

In our current state of reacting to tech that is already moving faster than our regulatory structures can explicitly adjust to, we are too often left with regulations, interventions, guardrails, and “solutions” hastily assembled by a non-inclusive, arguably non-representative, and, as we learned last week, non-technically savvy minority.

In the wake of Cambridge Analytica’s use of Facebook data to develop targeted political advertising, these interventions so far include a rushed deal with philanthropic funders to support select platform research on election influence. That research explicitly won’t be scoped to look retrospectively, meaning we’ve already lost our best (but luckily not only) opportunity to understand the depth of manipulation (information warfare) that went on leading up to the 2016 US election, not to mention other elections on which Facebook itself has effectively experimented.

Elsewhere, hasty tweaks are being made to the Facebook ads approval process and interface that may or may not substitute for overdue, meaningful campaign finance reform. And over and over again last week, we heard our elected representatives promise to institute statutes and regulations that might somehow mitigate current challenges (in spite of lawmakers’ obvious inability to understand the technology they’re planning to regulate or the actual challenges it poses).

Conversations about regulating Facebook and other internet utilities quickly bump up against issues of human culture. For example, last week’s debates crystallized our difficulty imagining how to police “fake news” without endangering free speech. In fetishizing technology, and continuing to think of it as distinct from ourselves, we’ve created seemingly uncrossable canyons that available precedents and metaphors are hopelessly ill-equipped to bridge, leaving us mired in complexity.

The unintended consequences, system dynamics, implicit biases, and ethical dimensions left unexamined in the rush of reactive policymaking and solutioning inevitably leave the most vulnerable and marginalized to suffer the worst outcomes. Even when approached with good intentions, any project done hastily, for us without us, will get it wrong every time (an axiom I’m borrowing from disability activist Dessa Cosma).

In particular, promises of “AI solutions” (essentially Mark Zuckerberg’s entire argument to Congress: wait for AI to fix it) both obscure the space for potential interventions (by promising to debug what actually amount to features of the system) and promulgate a problematic fantasy that machines don’t need our help to regulate our interactions. In reality, our data, debates, and decisions are the design parameters and training sets that define AI. There is no machine learning without human agency.

AI challenges our relationship to machines. Synthetic biology will challenge our relationship to ourselves.

Someday in the not-too-distant future, our debates on how to regulate something that looks like infrastructure or a utility but is actually an embodiment of our culture and society will seem quaint. We will no longer be asking whether algorithms should monitor, police, design, and control human behavior. Those questions will have been answered, for better or worse. We’ll be asking a new set of questions, about what’s at stake in a world where life itself can be designed.

What does it mean to be human, “natural,” or even alive? Does it require being born?

Long before we can engineer fully fledged never-born humans, we will be challenged to define the line between human and nonhuman. Indeed, these challenges already arise in the lab. Gene splicing, the growing of human organs in animal bodies, and other adventures already under way in the uncanny valley provoke a host of ethical, moral, and legal questions. What defines the distinction between human and “animal”? Why have we drawn the line where it’s drawn? And perhaps most fundamentally: why have we drawn a line at all, and what power dynamics implicit in that dichotomy might uphold a suicidal ecological ideology?

It’s easy to imagine this set of questions because they still feel like they belong to a far-off future. It won’t be easy to answer them, but it’s even harder to ask them collectively. And they’re not very far off at all.

It gets even weirder.

Ditto to all of the above for quantum computing, which will put the AI regulation debate on an epistemological dose of Adderall while further disrupting our basic tenets about the nature of reality. In addition to massively propelling computing capacity and speed, the reality of quantum computing will force us to realign our understanding of how the universe works to account for extremely weird phenomena like time entanglement.

Ditto too for nanotechnology, which will challenge our relationship to nature and further blur our dichotomy between living and nonliving. Nanotech promises by turns to usher in an era of planetary abundance or collapse; our fantasies, metaphors, and narratives about our role as designers and ecological dominators will be fundamental to how we approach and regulate this area of tech.

And when quantum computers run algorithms that predict applications of quantum dots for neurological synthesis, we’ll be in a brave new world.

Toward a “paleontology of the present.”

Borrowing from Latour, I call these near-future challenges “crises of hybridization.” Recognizing their common threads forces us to ask a new set of questions about what’s at stake when we transgress the boundaries we’ve carefully maintained between things like nature & culture, man & machine, living & nonliving. Asking and answering those questions as collaboratively as possible is the best path toward regulating the most problematic impacts of integrating new technologies with existing systems.

In short, we need more than engineers, politicians, and lawyers involved in building and regulating new technologies. We need anthropologists, speculative designers, artists, ethicists, and sci-fi authors to work in and with tech. We need community engagement methods that can bring developments from the lab into the public square. We need civic education that accounts for a technologically mediated and reshaped concept of the common good.

And we need to revisit the fundamental assumptions, metaphors, and dichotomies that uphold our problematic relationship to the planet we live on.

The views I’ve expressed are my own and do not necessarily reflect the views of my employer.


Michelle Shevin

Tech Fellow at the Ford Foundation. Adjunct on futures thinking at NYU ITP. Dancing ghost in my machine. All views my own.