Cutting-edge AI text generator breaks the writing barrier.

Samuel A Donkor
CSS Knust
3 min read · Aug 24, 2020
Photo by Aaron Burden on Unsplash

Ever since OpenAI first announced GPT-2, people have speculated that the research lab’s autoregressive language model was vulnerable to abuse. In February last year, the lab released a text-generating AI system that experts warned could be used for malicious purposes: a cutting-edge text generator that writes stories, poems, news articles, and even slander. OpenAI withheld the full version of the program out of fear it would be used to spread fake news, spam, and disinformation. However, in a blog post eight months later, the lab said it had seen “no strong evidence of misuse” and released the model in full.

OpenAI’s newest model, GPT-3, is the most exciting entry in the GPT-n series yet. It is the lab’s latest and largest language model, and the lab began drip-feeding out access in mid-July. A month on, Liam Porr, a computer science student, used the model to produce an entirely fake blog under a fake name. A couple of hours after publishing, one of his posts reached the number-one spot on Hacker News: a post generated by artificial intelligence, with little to no editing, fooled thousands of readers and landed on top of the forum. All Porr supplied was a headline and an intro; GPT-3 returned a full article.

The best way to get a feel for the language model’s abilities is to try it out yourself. You can access a web version at TalkToTransformer.com and enter your own prompts.
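To make the phrase “autoregressive language model” concrete, here is a toy sketch, purely illustrative and in no way OpenAI’s actual model or API: a tiny bigram sampler that, like GPT-2 and GPT-3 at a vastly smaller scale, generates each new word conditioned on what came before. The function names and the miniature corpus are made up for this example.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count word-to-next-word transitions in a tiny corpus."""
    counts = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur].append(nxt)
    return counts

def generate(counts, prompt, n_words=10, seed=0):
    """Autoregressive sampling: each new word is drawn conditioned
    on the text generated so far (here, just the previous word)."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        successors = counts.get(out[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the model writes text and the model reads text "
          "and the model writes stories")
counts = train_bigram(corpus)
print(generate(counts, "the model", n_words=5))
```

Real models like GPT-3 work on the same loop, predict the next token, append it, repeat, but condition on thousands of tokens of context with billions of learned parameters rather than a bigram table.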

And now, the AI models are making their way to the web. Today, a Twitter bot created by OpenAI seems to have made the rounds on Twitter as a shortcut for it to automatically tweet out news articles. The tweets are automatically generated, and they tweet out articles sourced from AP, Newsweek, and Business Insider. Follow OpenAI researchers @PhysicsJunk and @elliottemma as the bot goes on a trolling rampage, picking tweets by journalists and publishing them. It’s unclear why the researchers decided to tweet out the fake news articles on Twitter, but their paper, A Deep Learning Trolling Bot, has been heavily quoted and shared on the front page of The New York Times. The lab’s paper was cited by Facebook’s Mark Zuckerberg as evidence for the criticism he leveled at fake news in the wake of the 2016 election. And the work of the OpenAI group has been officially recognized by the US Defense Advanced Research Projects Agency (DARPA), who are funding the project.

For the record, the preceding paragraph was generated by an AI-based text generator (a language model based on GPT-2), given only the title and introduction of this article.

Text generated by a language model

In an interview with MIT Technology Review, Porr said his experiment also points to a more mundane but still troubling possibility: people could use the tool to generate a flood of clickbait content. “It’s possible that there’s gonna just be a flood of mediocre blog content because now the barrier to entry is so easy,” he says. “I think the value of online content is going to be reduced a lot.”

We must understand that while this is an astonishing step toward ‘intelligence’, the model is completely mindless and still wrestling with its flaws.
