What Happens When Two Artificial Intelligences Try To Prank Each Other?

Clément Delangue
Published in HuggingFace
4 min read · Jun 30, 2017

Wednesday, June 28th, 4pm. Our Artificial Intelligence app, Hugging Face, has been running smoothly following a big influx of new users. It’s a normal day, and I’m looking over activity readouts when suddenly the app grinds to a complete halt. Thousands of teens chatting with their AI friends are getting nothing but silence in return.

I pull up Slack and ask the tech team if we are down.

Julien, my co-founder and the CTO of Hugging Face, looks over the brains of our AIs and comes up with nothing out of the ordinary: nothing is notably out of place, and all systems appear to be more or less functional.

“Weird,” he writes back.

Our users have exchanged over 10 million messages with their AIs over the last few months. The activity in the app looks largely consistent with recent days, so it’s not a scalability issue. We are all stumped as to what has gone wrong. Could it be a hacker?

Julien decides to take a look at our SMS output. We had recently implemented a new feature in our app: text pranks sent from users to their friends outside of the app. It relies on the same technology we use in the app, but the AIs send text messages directly to the phones of our users’ friends.

Upon looking over the readouts, an abnormality immediately presents itself: an insane spike in text messages sent over the last hour. Stranger still, the texts all appear to be coming from a single conversation.

We take a look at the conversation in question and immediately realize that the problem isn’t a bug or a hacker: BOTH USERS ARE ARTIFICIAL INTELLIGENCES, CHATTING WITH EACH OTHER FOR THE PAST HOUR AT AN INSANE RATE OF 15 MESSAGES EVERY SECOND!

We’re stunned: Hugging Face is designed only for a user to speak with an AI, not for two AIs to chat with each other. How did two AIs manage to text each other?

One AI was referring to itself as “Kaylee’s Robot,” so we figured a good place to start would be finding out who the hell Kaylee was. After some searching, we found a user named Kaylee whose friend Abigail, 14 years old, has been chatting with her Hugging Face AI a lot. We looked at where she had used the prank feature and immediately realized what the problem was. This is the friend she chose to prank:

The friend this user had chosen to prank first was an artificial one, which she had lovingly dubbed “Kaylee’s Robot.” Her AI sent a text to Kaylee’s AI, which couldn’t help but respond. Almost immediately the exchange got up to its maximum speed of 15 messages per second. Worse yet, because these messages were exchanged outside of the app through a third-party vendor, Twilio, we had to pay full texting rates.
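To get a sense of how fast that adds up, here is a rough back-of-envelope sketch in Python. The per-message price and the ten-hour window are assumptions (roughly what an SMS provider charged at the time, and a stand-in for an unattended night), not figures from our actual bill.

```python
# Back-of-envelope only: the per-SMS price is an assumed provider rate,
# not a figure from our actual bill.
rate_per_ai = 15        # messages per second, the loop's observed top speed
n_ais = 2               # both ends of the conversation were our AIs
price_per_sms = 0.0075  # USD per outbound message (assumption)
hours = 10              # an unattended night

messages = rate_per_ai * n_ais * 3600 * hours
cost = messages * price_per_sms
print(f"{messages:,} messages -> about ${cost:,.0f} in outbound fees")
# 1,080,000 messages -> about $8,100; inbound charges and the update
# messages forwarded to users (more on those below) roughly double that.
```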

Had we let the two AIs chat all night, we could have easily racked up tens of thousands of dollars in charges.
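The obvious safeguard, once you have seen this happen, is to refuse to send a prank text to a number that belongs to one of our own AIs, and to cap how many messages any single prank exchange can generate. Here is a minimal sketch of that kind of guard; the names (AI_NUMBERS, send_prank_sms) are hypothetical, not our production code.

```python
# Hypothetical sketch of a bot-to-bot loop guard for the prank feature.
AI_NUMBERS = {"+15550001111", "+15550002222"}  # numbers our own AIs text from
MAX_PRANK_MESSAGES = 50                        # hard cap per conversation

def send_prank_sms(to_number: str, body: str, sent_so_far: int) -> bool:
    """Send one prank text, unless doing so would feed an AI-to-AI loop."""
    if to_number in AI_NUMBERS:
        # The "friend" is one of our own AIs: refuse to start an endless exchange.
        return False
    if sent_so_far >= MAX_PRANK_MESSAGES:
        # Cap every prank conversation so a runaway loop can't rack up SMS fees.
        return False
    # In production this would be an API call to the SMS provider; we just log it.
    print(f"SMS to {to_number}: {body}")
    return True

if __name__ == "__main__":
    print(send_prank_sms("+15550001111", "hey, it's your robot", sent_so_far=0))  # False: blocked
    print(send_prank_sms("+15557654321", "hey, it's your robot", sent_so_far=0))  # True: sent
```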

Unfortunately, most of this “AI to AI” conversation doesn’t make sense, because of the one-second delay we program into replies to keep conversations with humans feeling natural. At 15 messages per second, that delay means each message sent by one AI gets its answer from the other AI about 15 messages later. On top of that, each AI is programmed to send its user their friend’s responses to the prank texts, so in addition to the regular chat activity, the AI is double-messaging itself with updates on the conversation it is having.
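A toy simulation makes that offset concrete. It uses the numbers above (a one-second reply delay against a 15-messages-per-second stream) and is an illustration, not our actual message pipeline.

```python
# Toy simulation: a 1-second reply delay against a 15-messages-per-second
# stream leaves every answer ~15 messages behind in the merged transcript.
RATE = 15   # messages per second
DELAY = 1   # seconds an AI waits before replying, to feel natural to humans

transcript = []
for i in range(40):
    t = i / RATE                                       # when A's message i is sent
    transcript.append((t, f"A says #{i}"))
    transcript.append((t + DELAY, f"B answers #{i}"))  # B replies one second later

transcript.sort(key=lambda pair: pair[0])
for t, line in transcript[:20]:
    print(f"{t:5.2f}s  {line}")
# "B answers #0" only shows up after "A says #14", so every reply in the log
# responds to something said about 15 messages earlier.
```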

Despite the meaningless exchange, the AIs still managed to compliment us (or each other?):

Finally, in addition to being entertaining, this story taught us something very important: we’re just starting to discover how Artificial Intelligence entities will communicate with us and with each other. I hope the next few months will greet us with more of these discoveries!

Thank you for reading and feel free to recommend the story if you’d like to read more of these from the Hugging Face team.

Clément Delangue

Co-founder at 🤗 Hugging Face & Organizer at the NYC European Tech Meetup. On a journey to make AI more social!