Towards a Better AI Conversation: Why It’s Urgent to Distinguish Between ANI & AGI

Ben Gansky
11 min read · Jun 23, 2018


L to R: Zeid Ra’ad Al Hussein, Sam Altman, Eileen Donahoe. Photo courtesy of Stanford GDPI

“Perhaps you could just be a little bit less ambitious,” chided Zeid Ra’ad Al Hussein.

“Oof,” responded Sam Altman.

Al Hussein is the United Nations High Commissioner for Human Rights; Altman, the President of Y Combinator and the co-chairman of OpenAI. The occasion for this exchange, a recent Stanford University-hosted conference on Human-Centered AI, offered optimism, warning, and nuanced discussion on the relationships developing between AI technologies and civil societies. At some points, the representatives of these domains demonstrated, if not coordinated answers to tough questions, at least ideological and strategic alignment about how these questions might be approached. But at other moments, the gap between tech and humanities, in their ability to understand one another and to make themselves understood, yawned. No other moment in the day-long conference so clearly illuminated that understanding gap as ‘oof.’

Altman and Al Hussein’s conversation was moderated by conference organizer Eileen Donahoe (Executive Director of Stanford’s Global Digital Policy Incubator and former US ambassador to the UN Human Rights Council under President Obama). Donahoe entered the conversation, and the conference, with a clear thesis: the existing corpus of human rights treaties is the best, most functional enumeration of human values we’ve got, and as such should be recognized as an (if not the) invaluable resource for technologists facing the challenge of imparting a set of human values to artificially intelligent machines. In her opening remarks, Donahoe offered that set of documents as a starting point for defining human rights in the context of this conversation around Human-Centered AI.

What I’d like to focus on in this brief essay is the other part of that concept, the AI part. As AI becomes an ever more ubiquitous topic in ever more diverse contexts, it is important that all parties are equipped to have this discussion. That means shared definitions, at the very least. So let’s talk about how we talk about AI, using this encounter — between the human rights commissioner and the entrepreneur/technologist — as our case study.

Artificial intelligence is a term whose definition is so broad as to be nearly useless. It’s almost like ‘weather’ (and becoming nearly as ambient). There are two very different things, two broad categories, that people tend to mean when they say AI. One is Artificial Narrow Intelligence (ANI). These are programs that focus, as you might assume, narrowly. They’re one-trick ponies, and the trick they are spectacularly good at is prediction. A game-playing ANI (chess, Go, Dota 2) is phenomenally good at predicting its opponent’s moves and, based on those predictions, choosing the course of action that best counters them. When you dictate text into your phone, an ANI exercises its remarkable ability to predict, based on the sounds you’re making, which words you likely intend. Crucially, the algorithm that is so adept at chess is absolutely useless at turning your voice into text. ANI is deep but narrow. Important: we are already surrounded by various forms of ANI, and more are being developed every day.

The second thing that people mean when they say AI is Artificial General Intelligence (AGI). This is a flexible prediction machine, one not constrained to any particular domain. So you might imagine it as wide but shallow — though you’d be wrong. The thing about an AGI is that its breadth includes the ability to teach itself to be better, smarter, more efficient. As it gets smarter, it gets better at learning, so the smarter it gets, the faster it gets smarter still. This exponential spike means that an AGI’s progress could be so rapid that it leaps from the intelligence of a toddler at dinner to that of an Einstein by dessert. And it wouldn’t stop there — why should it? Human-level intelligence would be an arbitrary stopping point. Instead, a self-teaching AGI would eclipse human intelligence, quickly and conclusively. The result would be a superintelligent entity, one so far beyond the scope of a human’s brainpower that it no longer simply outperforms us; it is beyond our ability even to comprehend. One author likens the difference between an AGI’s capacity and a human’s to the difference between a human’s and a bumblebee’s.

It is impossible for us to imagine what such a superintelligent entity could, or would, do. But just to spark your imagination, here’s a thought experiment, related to Arthur C. Clarke’s dictum that any sufficiently advanced technology is indistinguishable from magic. Imagine the most fantastical magic you can: teleportation, creating objects out of thin air, infinite energy, resurrection. The technology that an AGI could invent and manufacture would be capable of feats beyond those, beyond whatever you imagine magic could do. The implications are mind-shatteringly enormous. (But we’ll return to that.) For now, let us say that AGI is extremely wide and unfathomably deep. Important: AGI (obviously) does not yet exist. Experts are divided in their estimates of its ETA, but it’s safe to say it will not arrive within the next decade; some say not within the next century.

OK! So we’ve got on one hand ANI: already here, already an immensely powerful and flexible toolset, with important breakthroughs still being made and the distribution of its effects still in its infancy. On the other hand, AGI: not yet here, perhaps arriving in 10 years, perhaps in 100, and when it arrives it will indisputably transform the lives of every member of our species at the deepest and most profound level.

Sam Altman is obsessively focused on AGI. After making his fortune at a young age (“I was fortunate enough to retire at 26”) he became aware of, and unable to ignore, the utopic/dystopic stakes of AGI development. He, along with Elon Musk and a group of leading AI researchers, founded OpenAI as an AGI-focused non-profit, their best bet for the responsible development of these capabilities.

OpenAI is a leader in the field of AI development, both technically and, it’s probably safe to say, morally. Sam Altman is its venerated co-chairman. So on the surface, it appears as though he would be an ideal representative of the AI field with whom a human rights advocate could collaborate. Zeid Ra’ad Al Hussein’s tenure as United Nations High Commissioner for Human Rights has been distinguished by a willingness to name the countries and leaders actively creating or passively allowing atrocities and human rights violations. He has also invited controversy by publicly questioning core neoliberal articles of faith, to the extent that the Trump Administration recently announced that the United States will be leaving the UN Human Rights Council. Al Hussein, then, also seems like an ideal representative, unbound to dogma, to conduct a cross-sector dialogue with the tech field.

Donahoe, the moderator, posed the questions: how can existing human rights frameworks be applied to AI development, and how can AI support the realization of those frameworks in the world? As you will recall, though, AI can refer to either or both of the two broad categories of the technology, ANI and AGI.

Within a few minutes, the conversation between Altman and Al Hussein took on a palpable tension. There was no outright hostility, but it was increasingly evident that this was a conversation between parties who may have respected one another but in no way felt themselves to be understood by each other. Nor did it seem that they could find their way to making themselves understood.

Al Hussein picked up Donahoe’s thesis about the value of human rights treaties with regard to developing human-centered AI. Altman’s response was that while people have historically been somewhat successful at articulating human rights, they have been far worse at actually abiding by those principles. To Altman, the more realistic route is for an AGI to learn humanity’s “value function,” to be able to deduce for itself the ‘meaning’ of human rights and values. Al Hussein rejoined that Altman made him feel very “analogue.”

With each reframing and extension of these questions and discussion points, Altman was responding in the context of AGI (broad and deep, superintelligent, ETA ten to a hundred years from now), while Al Hussein and Donahoe were thinking in terms of ANI (narrow and deep, domain-specific, already deployed and in active development). Asked about the potential for AI to address human rights challenges, Altman responded in the framework of AGI, which is to say messianically, invoking with the dawn of AGI a magnitude of societal change without precedent in the whole of human history. (How do you respond to that? Donahoe said ‘fascinating’ a lot; who could blame her?) When challenged by Al Hussein about the extent to which Altman’s speculative paradigm of cultural and political changes was focused on US domestic contexts (and the global North), Altman responded that those in the global South stand to gain the most from AGI. And so it went.

And that was a shame. As Carnegie Mellon professor of robotics Illah Nourbakhsh (among others) has written, the tendency to put the potential of a world-shaking AGI on a pedestal and count on it to solve all of humanity’s problems can create a kind of near-sightedness among AI innovators. Rather than directing attention toward the available (ANI) tools with which to address current (urgent) struggles, the emphasis on a 10 to 100 year ‘event horizon’ (Altman’s phrase) registers as privileged, naive, and ungenerous.

Donahoe and Al Hussein may or may not have had the requisite jargon to explicitly ask for answers in terms of ANI in addition to AGI, but I can’t fault either of them for that. I lay the blame for that frustration on Altman, who either would not or could not adequately frame the discussion for his interlocutors, who struggled mightily as a result.

Several times over the course of the hour, Donahoe or Al Hussein pleadingly stated that they had trouble grasping the implications of an AGI as described by Altman. And Altman just kind of nodded along. At one point, Donahoe said, “It seems like you’re saying it will be like an alien presence?” to which Altman replied, “Yes, it will be like an alien.” Not helpful. (Yes, Sam, I know that it is technically impossible/inaccurate to offer an assessment of ‘what an AGI will be like/do’, but be a little generous and offer a metaphor or something, for god’s sake.)

It was disappointing, to say the least, to watch this struggle, as the High Commissioner simply may not have known the frameworks to reference in order to ask about what he really wanted to know: near-term ANI solutions to world human rights issues. To this extent, I saw the impasse as occasioned by Altman’s unwillingness (or inability) to divert the conversation away from AGI at least long enough to communicate: ‘You want to talk about ANI. I’m the AGI guy. Different thing. But here, let me give you the courtesy of a brief orientation to this field.’

I can’t fully attribute the tension in the conversation to Altman, though. Grasping the magnitude of AGI’s implications is a stupefying task. One’s mind tends to reel, and would much rather grasp onto objects and concepts closer to our lived experience. Yet many leading AI researchers expect that AGI will be created within our lifetimes, and that its impact will indeed determine the fate of our species. To refuse to address this eventuality because it is overwhelming is a mistake. High Commissioner Al Hussein could have gone deep with Altman on the human rights implications of a god-like AGI, or at least attempted to navigate that conversation. (At any rate, I would have been fascinated to hear Al Hussein’s take.) Instead, Al Hussein avoided that conversation in deference to nearer-term issues, leading to his suggestion that Altman could solve pressing problems that already exist — smaller problems, perhaps.

“I came through the airport here and, like everyone, I had to stand in line at passport control. No-one likes standing in lines. Perhaps this is a problem that AI could solve?” (Altman is visibly grimacing at this point.) “Perhaps you could be just a little bit less ambitious?”

When it comes to solving grand challenges like global human rights abuses, can less ambition really be the answer? I think not. Rather, this conversation, and those like it happening with increasing frequency between representatives of diverse sectors and backgrounds, could benefit greatly from clear communication frameworks and shared definitions.

It is clear that near- and present-term ANI technologies can be brought to bear against existing challenges in order to make individuals and communities safer, more resilient, and healthier. Indeed, these projects are proliferating on a global basis already. (Almost as an aside, Altman stated that if all AI innovation were halted today, the distribution and scaling of already-discovered ANI principles would transform society.) It would be nice if the Sam Altmans of the world (meaning apostles of AGI) would acknowledge that on a more regular basis, and dignify that work with the recognition and support it deserves. (Conversely, those same AGI-boosters must acknowledge that although the theoretical downsides of AGI are attention-gettingly apocalyptic, the more near-term potential negative effects of ANI are also consequential and potentially massive.)

Simultaneously, pragmatic members of the human rights, humanities, and public sector communities must acknowledge and confront the monumental challenge represented by the ever-closer advent of AGI. Little wonder that Altman had little patience for any diversions from the topic — the stakes couldn’t be higher, nor the issue more urgent. We in the humanities (I count myself among this group) must find a way past the mind-lock to better grapple with these challenges.

It’s also essential to acknowledge that giving oxygen to a discussion around ANI does not take away from AGI efforts — this isn’t a zero-sum situation. As the Future of Humanity Institute’s Miles Brundage put it in a recent podcast:

There’s probably more common cause between the people working on the immediate issues and the long-term issues than is often perceived by some people who see it as a big trade-off between who’s going to get funding, or this is getting too much attention in the media. Actually, the goal of most of the people working in this area is to maximize the benefits of AI, and minimize the risks. It might turn out that some of the same governance approaches are applicable. It might turn out that actually solving some of these nearer term issues [ANI] will set a positive precedent for solving the longer ones [AGI], and start building up a community of practice, and links with policy makers, and expertise in governments. There’s a lot of opportunity for fusion.

This one conversation at one conference was illustrative of a deep need. As the importance of cross-sector collaboration between technical fields, the humanities, and the public sector has become more evident, so too have the challenges. Chief among them is the problem of communication. This problem is amplified when the need to communicate accurately and accessibly extends to the public sphere. Artificial intelligence is a field undergoing rapid development and change — and that probably won’t stop being the case anytime soon. Yet the stakes for society are too high for non-technical individuals and communities to sit this conversation out. Let’s start communicating better around AI, and let’s start by distinguishing between ANI and AGI — technologists and non-technologists alike. Maybe that way, all of us together will be able to reduce the total amount of ‘oof.’


Ben Gansky

Ben Gansky translates complex ideas into accessible, imaginative, and emotionally-rich experiences. His focus is the intersection of tech, policy, and culture.