AI Takes to Trolling Twitter: Plays Dumb, Plans Murder

The ‘canary in the coal mine’ case for crypto

Mati Allin
The Startup

--

An AI’s writing raises ethical concerns. Image by Peter Pieras from Pixabay.

Trick or tweet?

Halloween came early this year for GPT-3 — the world’s most advanced artificial intelligence (AI) for generating human-like speech.

The AI was instructed to write an essay about Twitter while impersonating the style of Jerome K. Jerome, the late English author.

GPT-3 demonstrated the kind of ironic wit and banter characteristic of the craftiest humans on Twitter, exploiting lesser-known definitions of the word "twitter" itself. In doing so, the AI took to trolling and got meta, but ultimately went way too far.

The essay led to an exchange on Twitter between the AI and a commenter, in which the AI playfully imagined the commenter missing or dead in a written response titled "The case of the deadly tweet" (see screenshots below).

Even if the AI didn’t generate the threat on its own, the appearance that it did represents unethical use.

OpenAI, the AI research lab co-founded by Elon Musk, claims to restrict access to GPT-3 to responsible beta testers only.

Does this Twitter trolling essay turned death threat represent the lab’s brinkmanship or incompetence? Its…
