Agency

Quinn Norton
Published in The Message
May 7, 2014


This all started because of two things: I am an engineer, and I wasn’t coping well with my father’s death. He had died about seven years before, but his death kept intruding on my life. I had so many things I wanted to tell him, things I needed to hear from him, and nothing seemed to make that better. My husband and I, in the tradition of techies trying to DIY everything, started addressing the issue with the tools we understood best: agents. We met working for the same company, researching and coding up some of the first agents.

We use agents more than many, less than some. We only share one agent between us, a house agent we call Bertha. She sees to scheduling, nagging, bill paying, and everything else that used to stress our marriage on a daily basis.

My personal agents consist of a data-gathering agent called Roscoe, and my publishing agent, which I call Pepe. Roscoe takes everything I do that is marked for his consumption, along with all the searches and publications and interests I’ve left outside my private sphere, and makes them available to Pepe. Pepe takes Roscoe’s collected data, anything not marked “private”, and makes it available to other agents on the internet. Pepe also passes interested agents back to Roscoe, who answers their queries and gathers data from other agents that he thinks I might be interested in. I explain this because while a sizable portion of the planet is using agents by now, many without even knowing it, very few people have a sense of how they work.

Roscoe doesn’t accomplish much without Pepe, which is exactly how we built them. In order for other agents to be able to tell your agents that they may have found something of interest, they need to be able to see what you are interested in. Linking people’s information both ways has the added benefit of making what you get back a lot more likely to be interesting, and the general pool of data on the internet much larger.

We’ve gotten our agents so well trained that they anticipate most of our needs, and pass actions along to us for approval, sometimes before it occurs to us that we want to do them. What they don’t do, what we don’t let them do, is execute actions without a pass-through from us.

Except, that isn’t strictly true. They pay the bills, then tell us the remaining balance in our accounts. You never know when convenience is going to creep deeper into your life, and what it will do when it gets there.

Since we’re agent engineers, my husband and I tend to think agents are great. Also, we’re lazy and stupid by our own happy admission, and agents make us a lot smarter and more productive than we would be if we weren’t “borrowing” our brains from the rest of the internet. Like most people, whatever ambivalence we feel about our agents is buried under how much better they make our lives. Agents aren’t true AI, though heavy users sometimes think they are. They are sets of structured queries, a few API modules for services the agent’s owner uses, sets of learning algorithms you can enable by “turning up” their intelligence, and procedures for interfacing with people and things. As you use them, they collect more and more of your interests and history; we used to say people “rub off” on their agents over time.

So it was natural after so many long nights and grey mornings for me to think of agents as a possible way to resolve some of my issues with my father, even though he died before agents had even escaped the labs at KAIST. I thought maybe I could build an agent of him, of all his data and impressions from those who knew him, of his pictures and his papers, and talk to it. Maybe, I thought, it could talk back and then something cathartic would happen.

People who love dead people spend a lot of time thinking up things like this. I ran a query referencing the information on my father against agents built on or modeled after deceased people and left it for a few hours. After Roscoe had compared and culled and cross-referenced everything, I sat down to about eight different results on the subject. The first three were academic articles on agents and death. One was a science fiction story. The next, though, was the nine-year-old son of a Canadian family.

That result confused me, because no one appeared dead. I pulled all the public files relating this family to agents and death and began to peruse them. By this time my husband had made some coffee and come over to see what I was looking at.

This Canadian family was somewhat typical of heavy agent use; they didn’t really understand the little buggers, and all sorts of parts of their lives were public that they probably hadn’t ever looked at. The intelligence level of their agents was also published: set to “full”. We’d engineered the things, and we’d never turned them up to full. We just weren’t sure what would happen outside lab conditions. Still, people did it every day without any problems.

Agent monitoring relations with the boy and his parents: very routine… where they eat, what they prefer. Just by looking at a rough breakdown of everything the parents, two urban professionals, were involved in, it was clear they didn’t really have time for their child. Records broke off at key moments, but it wasn’t hard to read between the lines. Glancing at some of the kid’s published search terms showed he was lonely. They were about friends and pets and flying and secret places. His monitoring agent probably took good care of him, got him to school and back, helped him with his homework, but agents can’t hold you, and they can’t even really talk back to you, as much as they may seem at times to know you.

Back in the ’90s they tried and tried for “strong” AI, but little relational cross-referencing engines like modern agents were easy, quick, and probably worked better than a true AI would have anyway. But it’s easy to forget what an agent really is.

Next query, agent reference to death, number one: they got the boy a kitten. There was a lot of cross-referencing, which I scanned through quickly; it looked like they might have examined some of his searches. Someone to keep him company over the long days of summer.

Not too long after, the kitten died. I know this because their instructions to the child’s agent are pretty clear. Clean it up, cover it up, pretend nothing happened. Fake like the kitten didn’t die while they try to work out how to fix the situation. Of course, they got busy. I have no idea how the kitten died, but I’m sure the kid figured it out soon enough. Agents can’t cuddle you, and they can’t make cats for you to play with. At this point in the story my husband and I were enthralled. He sat down next to me. We looked at each other; I could see we both felt kind of bad about this violation of a family’s privacy, a family that obviously didn’t know the first thing about privacy filters, but we couldn’t look away. I went on, scanning and reading snippets aloud.

Agent’s next reference to death: the boy either falls or throws himself out of a fourth-story window. I gasped, and kept reading. His body was immediately collected by paramedics and taken to the hospital, where he was pronounced DOA. The hospital and police attempt to contact the parents. The exchange is handled by the agents. Of course the messages are private, but the fact that the agents answer the phone and give replies is not.

And there on the screen, as I am reading aloud, is the “prevalent comparison” the agent was able to make between the child’s death and the previous events in the household. Prevalent comparison is a matching and learning system, where agents take two near-identical events and handle them in identical ways. A bill comes in from the gas company, and you tell your agent to pay it from the house account. Then, when the next one comes in from the electrical company, the agent follows the same routine.

The agent’s nearest equivalent, the prevalent comparison, was the death of the kitten. The imperative commands fit: clean it up, cover it up, and pretend nothing happened. I re-read that last bit. The room felt colder. I looked at my husband. He was already looking at me, face flushed.

Like a good agent, it had taken care of everything. The phone calls, the burial arrangements. It had followed their orders to the letter. It had even invented another agent based on all the collected personal and private data on the son and given it the directive to imitate the son during all house interactions, which is why it had responded to my query. The sub-agent was there to try to keep the parents from finding out (“cover up”). The original agent had given the sub-agent imitating the boy a directive: based on prior behavior of the parents, arrange for the family to never have any reason to be in the same room.

They were busy people. They hadn’t noticed.

I began to cry. It was all there, all published. But the only way to bring it up would be to know what to search for. I turned to my husband. I thought of all the things we entrusted to our agents. As if he knew what I was thinking, he put a hand on my shoulder, and said very gently, “We will never turn their intelligence up this high.”

“Should we tell them?”

“Uh,” said my husband, widening his eyes and pulling back his hand. “They’re sure to find out eventually.”

I nodded. I felt guilty, but not that guilty. I closed the query results without looking at any more. I felt cold, and hugged myself around the middle.

“How about I just go to more therapy and leave my dad out of the age of agents?”

My husband put his arms around me. “That sounds like a very good idea, my love.”

Agency is a work of fiction.
