How to Build A.I. We Can Relate to

Writers are the key to creating a frictionless future worth having

Maria Farrell
Jun 25, 2018
The brainphone. Image credit: Elizabeth Mahoney.

My dog loves to play with his toy snake by repeatedly breaking its neck. He prefers traveling on the top of London buses so he can see what’s happening in the world, and he obsessively checks pee-mails and sex-messages from other dogs as we walk around the neighborhood.

His seriousness about all these deeply silly things makes me imagine how a superior intelligence might patronize pet humans, keeping us entertained in a captivity we were barely even aware of while chuckling at our antics.

There are three main ways we imagine encountering a truly novel intelligence: another known species in a different genus or kingdom (dolphin, octopus); alien intelligence; or native, sentient artificial intelligence. And while we worry and invent nightmares about killer machines, predatory aliens, and creatures turning on humans, we also dream of bridging the cosmic loneliness as the only entities we know of who can hold a proper conversation. “No one has ever loved anyone the way everyone wants to be loved,” but in some of our imaginings, A.I. can and does see us and hold us in mind in the precise ways we crave but can never satisfy.

Neal Stephenson’s novel The Diamond Age is set in a nanotechnology future branched off from Victorian tech and East India Company–style globalization. It features an interactive book, The Young Lady’s Illustrated Primer: A Propædeutic Enchiridion, in Which Is Told the Tale of Princess Nell and Her Various Friends, Kin, Associates, which falls into the hands of an impoverished four-year-old named Nell. The lady’s primer becomes Nell’s teacher, mother, friend, and confidante and ultimately guides the child from a vast, global underclass into becoming a person of great account. You read the novel, yearning to be read by it in turn as Nell’s book reads and loves her. A key subplot is how to deliver this kind of fruitful cherishing at scale to all the world’s children.

So many fictional “deep A.I.” are the family, friends, and lovers we can neither have nor be. The vast, native A.I. in Iain M. Banks’ series of Culture novels are wholly believable intelligences that effortlessly task-switch between interstellar travel on ships the size of planets, terraforming, poetry, multidimensional galactic warfare, and reminding the protagonist that it really is time for him to pee. Ian McDonald’s novel River of Gods, set in 2047, has sentient A.I. inhabit and extend the personas of Hindu gods while being ruthlessly hunted by “Krishna Cops.” Becky Chambers’ novel The Long Way to a Small, Angry Planet includes a ship’s A.I. called Lovelace, who truly loves and is loved. When something happens to her, it really feels like someone has died.

What they all have in common is that, for fictional purposes, these A.I. feel real—not only as real as the other characters in their stories, but latently true in the way of things we need but that don’t yet exist. They’re real enough to yearn for the way some people longed to live in James Cameron’s Avatar — with that almost-able-to-touch-it sense of something that isn’t here but really, really should be.

Could This A.I. Really Happen?

Unlike in fiction, native A.I. is not going to spontaneously emerge from lots and lots of processing. (The only things produced by petabytes of pointless computation are heat, carbon dioxide, and use-case-hunting blockchain applications.) Nineties-era cyberpunk aside, sentience will not be an emergent property of networks, however big and complex those networks become, just as you don’t create life by boiling the petri dish or calm a baby by shaking it. The “hard problem” of consciousness will not spontaneously solve itself some dark and stormy night down on the server farm.

The software required to kick-start sentient A.I. is beyond our current imagining, however generative we believe algorithms may become. And on the engineering side, even the kinds of A.I. we currently conceive of will require processor-cooling capabilities based on a whole new form of silicon. The speed and scope of computation ultimately needed may require parallel processing of such scale that it’s basically quantum computing. We’ll probably colonize Mars first. (Or, more realistically, we’ll do it with A.I. And given the impossibility of faster-than-light travel and how it all tends to go horribly wrong on generation ships, the only form of life we’ll ever get beyond our solar system will be machine life.)

Technology aside, what would we need to get us from populating menu trees to block-building entities that go deeper than mere digital assistant personae? Ask a fiction writer.

Catherine Brady’s Story Logic and the Craft of Fiction is a guide to building the character iceberg that lies below the waterline. Stories that stick with you succeed because they both communicate and generate meaning. Brady says they do this by maintaining a high ratio of subtext to text—about four to one. Writers must “learn to be precise in generating meaning that is not stable and never entirely leaves the realm of uncertainty.” Real, created characters only work when the meanings of what they say and do are mostly but not fully believable and are unstable within an intuitively determined range.

How utterly, utterly fascinating would it be to probabilistically generate coherent words, actions, symbols, and traits — to generate meaning — computationally and in real time? And for it to somehow work by joining one unconscious to another just as art transmits feeling “from one man’s heart to another’s”? The possibilities for new means of generation and expression that multiply meaning, instead of asset-stripping it, are already here, as Thomas McMullen described earlier in this series.
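Brady’s ratio can be caricatured in code. Here is a toy sketch (my illustration, not anything from the article or from Brady’s book): treat a character as a pool of latent traits, surface only a fraction of them as text, and leave the rest as the subtext a reader must infer. The trait list and the `surface` function are invented for the example.

```python
import random

SUBTEXT_RATIO = 4  # Brady's rough four-to-one ratio of subtext to text
random.seed(7)     # fixed seed so the demo is reproducible

# The character "iceberg": latent traits, most of which never reach the page.
latent_traits = [
    "grief over a lost sibling",
    "fear of being ordinary",
    "distrust of easy kindness",
    "secret pride in small skills",
    "longing to be truly seen",
]

def surface(traits, ratio=SUBTEXT_RATIO):
    """Surface roughly one trait in (ratio + 1); the rest stay as subtext."""
    n_text = max(1, len(traits) // (ratio + 1))
    shown = random.sample(traits, n_text)       # sampled, not scripted
    hidden = [t for t in traits if t not in shown]
    return shown, hidden

shown, hidden = surface(latent_traits)
print("text:   ", shown)    # what the character says or does on the page
print("subtext:", hidden)   # what the reader infers but is never told
```

The point of the sketch is the sampling: the surfaced trait is drawn probabilistically rather than fixed, a crude stand-in for meaning that “never entirely leaves the realm of uncertainty.”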

There’s no cheap or quick way to build an iceberg A.I. that thinks and feels by itself, and if we try to cut corners, we’ll fail. By “fail,” I mean we’ll be living in a world peopled by the mechanistic characters and clunky scripts of a Star Wars: Episode I/II–era George Lucas, or, as my brother once put it, inhabiting the wrong dystopia, peppered with “the ontological unease of a world in which the human and the abhuman, the real and the fake, blur together.”

Yikes. Pass the blue pill.

Why Are We Pessimistic About Our Future?

But what about an A.I. future that serves us, for a function of “us” that doesn’t massively privilege Alphabet’s shareholders or the Communist Party of China?

A year ago, I was working on some near-future science fiction shorts for a report on the next 25 years of the internet. The report’s compilers were struck by how pessimistic most in the global north were about the future, yet people in developing countries couldn’t get enough of connecting technologies and were still alive with what sociologist Kieran Healy calls technology’s magic and delight.

I wrote a fictive news article about an A.I. called Mishee, who’s recognizable within today’s tech but not achievable within our current market structures. She could only be created by someone outside of them who still shines with love for tech’s magic and delight:

Falling for Mishee™

Mishee; is she a neutral intermediary, a digital agent, or perhaps even a guardian angel? I put these questions to Mishee’s creator, Akwete Armah, the Ghanaian technologist and entrepreneur. We met for an early coffee on the roof terrace of Vida e Caffé in Labone.

“I think of Mishee as a best friend, or perhaps your wiser twin,” Akwete says with a slightly wistful smile. “Mishee knows your weaknesses, but she doesn’t play on them. She asks you simply how much you want to share—”

“Or how little,” I interrupt.

Mishee forces the platforms to negotiate for our personal data, and for many users, that means blocking its transfer. “There is no one-size-fits-all,” Akwete says patiently. “Some people like to share more. And we change as we move through life. Mishee makes the platforms listen.”

“She empowers users,” I say.

“Traditional digital assistants work to keep you in their ecosystem and buy things. Many of my friends resented their devices and distrusted the platforms,” Akwete says. “We had lost the feeling of happiness you get when technology just works. Mishee is a voice-operated interface for everything from your TV to social media. She talks to you like she’s human, and she’s on your side. Mishee gives us the promised smoothness of technology, without the hard sell.”

“You sound almost evangelical,” I tell Akwete. “How did an African invent something the whole world needed?”

“That’s easy,” she smiles. “People of the global north had fallen out of love with the internet. But we in Ghana still held its joy and its possibility. In the Ga language, ‘mishee’ means happiness, or even delight. I created Mishee to share our feeling of delight with the internet, and I think she has.”

Without thinking, I lean across the table toward Akwete. “I think I am a little bit in love with Mishee,” I say, my voice trembling unexpectedly. Akwete sits back, her hands demurely in her lap.

“You are not the first to feel this way,” she laughs, but kindly. “No one liked Big Brother. But Big Sister? We have a different kind of feeling for her.”

The singularity, to the extent that any emerges — unevenly distributed — will be intentional. If it is to give us what we most deeply yearn for, and not what coders and bean counters can quickly cook up to monetize attention and loss, then it needs to start from the toughest problems and questions we have, and not from revenue-based use-cases.

Jenny Judge said at the beginning of this series that there’s “no principled reason why A.I. developers should rest content with the satisfaction of our most transient desires.”

She’s right: “Imagine if the immense ingenuity of Silicon Valley were instead channeled to support focused, sustained, and shared attention — algorithms that would nudge us toward the hard human work that we already want to do, instead of pulling us away from it. That sort of frictionless future would, I think, be something worth having.”

As I’ve written in “How to Cope with the End of the World,” we can get there. We just have to fix capitalism first.

Imagine if, as well as using A.I. to truly augment and extend our brains, and not to entertain our attention spans to death while selling us stuff we don’t need, we put it to work building intelligences far beyond ours and modes of feeling fiction writers can only hint at? Imagine if, instead of having A.I. as pets — or even, ultimately, vice versa — we made wide and disciplined space for the development of a species-companion, a peer that supports us as we work to be our best selves?

Living in the machine might be dystopic, but it might also have the potential to respond to humanity’s yearning for a love few of us can put a name on and none of us is yet able to give. The first step is to imagine it.


Living in the Machine

About this Collection

Artificial intelligence and automation outsource even more of our cognitive functions to machines. What does this mean for art, for relationships - even for our connection to a higher being? What does it mean to be human in the age of the machine?

