The Frame

Michael Cook
6 min read · Oct 1, 2015


Sainsbury’s, a national supermarket chain in the UK, are running one of their ‘spice up your daily calorie consumption’ ad campaigns again, this time about small changes to everyday meals. Some of the suggestions make sense, while others sound a little more out there, intentionally I guess, to get people talking about them, which is exactly what I’m going to do right now. One of them in particular caught my eye as sounding a lot like the kind of thing a Twitterbot might spew out:

I tweeted a photo of the instant coffee example because it made me chuckle, but it also shows how important context is when it comes to new ideas and creativity. It’s a good discussion point for why we don’t trust or accept ideas from software so readily. A big national supermarket is showing you this, so while it might seem a little odd, you’re more than likely going to assume there’s some sense to it. If a Twitterbot were to come up with the same suggestion, however, we’d probably just laugh it off.

A few hours later, someone replied to my tweet, defending the caffeine-tastic recipe idea:

At first this completely satisfied me as an explanation. Clearly, Christopher knew about this tip already, and he totally deserves the benefit of the doubt (it’d be a very strange thing to lie about, in any case). But as I re-read the tweet I realised I didn’t really have any particular reason to believe the explanation, and the tweet itself doesn’t tell you why instant coffee works, just that it does. I was trusting Christopher for the same reason I trust Sainsbury’s, or BBC News, or my wife, or any number of other people: because they’re just that, people.

Smoke And Mirrors

Earlier this week I was at Videobrains, a monthly games event in London that hosts talks from anyone and everyone. I gave a talk titled Smoke And Mirrors In The Age Of AI for this month’s theme of deception. In it, I talked about how AI in games is generally designed to deceive us and give the illusion of intelligence, and how this carries over into our everyday interactions with technology, where we are encouraged to personify our devices (hey Siri!) and often do it whether we’re encouraged to or not (my phone hates me). It’s the alternative way to pass the Turing test: not by being more clever, but by getting really good at bluffing.

There’s nothing wrong with this approach to AI, for the record, but it’s more fragile than the ‘hard AI’ dream of science fiction, where software is truly intelligent and can, for instance, flexibly understand and use human language. It relies on a delicate balance between how appealing you make the illusion and how close you let the observer come to touching it. The more trickery you use, the more the user is encouraged to interact with your system and trust it, and the harder the fall when they realise it’s a fake. People can get upset and angry when their assumptions are shown to be false.

I jokingly retweeted Christopher’s explanation of the instant coffee tip when I saw it, and compared it to a practice in computational creativity (CC) called framing. What this boils down to is getting the software to explain what it’s done and describe the decisions it made, in order to convince the observer that it did something creative and intelligent. So instead of simply painting a picture and hanging it on the wall, the software writes a little paragraph to stick alongside it. Here’s what inspired this. Here’s why I used this style. Here’s what I think of the result. This practice has been advocated by several CC researchers for some years now; you can see ANGELINA’s attempt at framing in our Ludum Dare entries, for instance.
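To make the idea a little more concrete, here’s a rough sketch in Python. The painting generator and all its names are made up for illustration, and this isn’t how ANGELINA or any real CC system is actually built, but it shows the basic move: ship a framing statement alongside the artefact, built from the decisions the generator actually made.

```python
import random

# Illustrative only: a toy generator that produces an artefact plus a
# framing statement describing the decisions it made along the way.

STYLES = {
    "impressionist": "loose brushwork and a focus on light",
    "minimalist": "flat colour and as few shapes as possible",
}
SUBJECTS = ["a harbour at dusk", "an empty office", "a crowded market"]

def generate_painting():
    """Pick a subject and a style, and record the reasons as we go."""
    subject = random.choice(SUBJECTS)
    style = random.choice(list(STYLES))
    artefact = f"[{style} painting of {subject}]"

    # The framing text is assembled from the same choices the generator
    # just made, so in this sketch it happens to be truthful.
    framing = (
        f"I chose {subject} because I wanted an everyday scene. "
        f"I worked in a {style} style, using {STYLES[style]}, "
        f"and I'm fairly happy with how it turned out."
    )
    return artefact, framing

if __name__ == "__main__":
    art, statement = generate_painting()
    print(art)
    print(statement)
```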

The interesting thing about framing throughout CC is that, so far, every single use of it has been truthful. We’ve talked a lot about how framing could be fabricated, in theory, but no-one’s ever done it. I think it might partly be hesitance about the potential negative impact if anyone ever found out, but whatever the reason is, I think it’s worth exploring. Not because I think software should deceive people, but because I think software should have a way of creating a level playing field when it comes to being accepted and trusted in creative scenarios.

Lie Fidelity

Getting a piece of software to ‘lie’ about how it came up with a 140-character tweet is not a hugely consequential lie, after all, but it might help the software capture that elusive quality that both Sainsbury’s and Christopher enjoy so much: the benefit of the doubt. Even very small advantages, like giving the software weasel words to use in its explanations, can have a huge impact. Consider the tweets of Appreciation Bot, which leans heavily on weasel words to sound convincing. Here’s one pair:

Before I go on, I just want to stress that Appreciation Bot is not a genius work of AI or anything: it uses cut-up templates and ConceptNet, that’s all. But the templates are carefully designed to avoid being too specific. The most important idea, not the only one, for instance. Then we select a larger, general concept from ConceptNet, like a cupboard, and a fact we know for sure about it, like containment. We don’t get specific. Finally, the tweet concludes with a vague statement of opinion that can’t really be refuted.
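For flavour, here’s a toy version of that recipe in Python. It isn’t Appreciation Bot’s actual code, and the hand-written fact list stands in for ConceptNet, but it shows the shape of the trick: a hedged template, a general concept, a fact you can’t get wrong, and an opinion you can’t refute.

```python
import random

# Illustrative only: weasel-worded template filling in the spirit of
# Appreciation Bot. These hand-written (concept, theme, fact) triples
# stand in for the kind of general, hard-to-dispute relations you
# might pull from ConceptNet.
CONCEPTS = [
    ("a cupboard", "containment", "is used for keeping things in"),
    ("a window", "transparency", "lets you see through to the other side"),
    ("a clock", "time", "is there to remind us that time passes"),
]

# Note the hedges: 'perhaps the most important', 'makes me think of',
# and a closing opinion nobody can really argue with.
TEMPLATE = (
    "Perhaps the most important idea here is {theme}. "
    "It makes me think of {concept}, which {fact}. "
    "I don't think that's an accident."
)

def appreciate():
    concept, theme, fact = random.choice(CONCEPTS)
    return TEMPLATE.format(concept=concept, theme=theme, fact=fact)

print(appreciate())
```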

Appreciation Bot is a sort of two-in-one bot as far as computational creativity goes. The tweets it produces are both the artefact and the framing information simultaneously, so it’s a confusing example. But I think it’s a rare case of a computationally creative bot that intentionally bullshits people.

Show Your Working

Framing doesn’t have to be a lie, but it definitely can be. Whether it’s true or not, I think we should write AI that explains itself more often. Over the last 18 months I’ve really come around to the idea that AI is largely a sociological phenomenon: a label that we apply to certain kinds of technology, if we decide they deserve it. Your AI can be as smart or as stupid as you like, and it can learn how to play every Atari game under the sun, but if the public are unconvinced then it’s just a really nice piece of software.

I think it’s telling that some of the most successful linguistic bots work in mediums that rely on interpretation even when humans use them: euphemism (WikiSext), puns and satire (Two Headlines) and metaphor (Metaphor Magnet), to name just three. These are things that aren’t meant to be explained; you figure them out yourself, because that’s how the medium works. That’s why these bots don’t need framing: in these mediums, neither would humans.

Outside of these mediums, though, I think framing can really add value to bot statements (or to any other kind of AI) where it’s important that the user believes or trusts what they’re being shown or told. I think we need to stop considering framing as just a little bit of text with an artist’s statement on it, and start thinking of it as a broader, bigger concept: the idea that when an AI plays the Imitation Game, it should play to win.
