AI-Generated Content 101 — How AI-Generated Content Will Impact Writing

Katie Scott · Published in Every AnyOne · Jun 29, 2022

In Gulliver’s Travels, Jonathan Swift wrote about The Engine. This wonder of wires and paper could mechanically generate random strings of words, which were then used (by humans) to create sentences.

“… by his contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labor, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study,” Swift wrote.

The book was published in 1726.

[Image: The Engine — Jonathan Swift]

The Engine is often pointed to as the earliest reference to a word processor or computer, but it also reads like an accurate description of AI word generation.

In simplest terms, technology is designed to make life easier. Text generation — whether responses to technical queries online or complex academic papers — is time-consuming. It is no surprise, then, that deploying AI to generate natural language has been a huge area of investment, though initially only by the few companies that had the money to spend and the processing power to devote.


Great strides have been made since the early Markov chains used for predictive texting on smartphones. The technology has also moved from the academic realm to “the everyday”, with Google’s Search and Translate functions marking the shift. Key has been the creation of foundation models, or Large Language Models (LLMs), for companies to build their offerings upon. There’s BERT from Google (and PaLM, which it hasn’t released); RoBERTa from Facebook; and OpenAI’s GPT-3, which enjoyed investment from Microsoft. The datasets behind the largest language models now run to petabytes. Analytics India wrote: “…the entirety of Wikipedia (which consists of more than six million articles and 3.9 billion words) makes up only 0.6% of the input size for GPT-3.”
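To see how far the field has come, it helps to look at where it started. Below is a minimal sketch of a first-order Markov-chain next-word predictor, the kind of statistical trick behind early smartphone keyboards (a toy illustration; the corpus and function names are mine, not any vendor’s actual implementation):

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the words that followed it in the text.

    Repeats act as weights: the more often a word follows another,
    the more likely it is to be suggested.
    """
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def predict(chain, word):
    """Suggest a next word by sampling what has followed `word` before."""
    candidates = chain.get(word)
    if not candidates:
        return None  # word never seen in training; no suggestion
    return random.choice(candidates)

corpus = "the cat sat on the mat and the cat slept on the sofa"
chain = train(corpus)
print(predict(chain, "the"))  # e.g. "cat", "mat" or "sofa"
```

A model like this knows nothing beyond word-pair frequencies; the leap to LLMs is a leap from counting adjacent words to modelling context across entire documents.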

[Image: Computer-Generated Content on Windows 95]

What is predicted now is “a wave” of small but agile Language AI startups offering specialized services — Primer and AI21 Labs are among the many. Possible outputs include SEO-weighted marketing copy, AI-generated computer code, and simple answers for an IT helpdesk. Marketers are already deploying Viable, which can analyze data from surveys and provide actionable results, and Simplified, which creates social media content. By March 2021, more than 300 apps were using GPT-3.
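Many of these services are, at their core, thin layers over a foundation model’s API. As a rough sketch (assuming an OpenAI account and key; the model name, prompt, and settings here are illustrative, not a recommendation), generating a piece of marketing copy with the openai Python library’s completion endpoint looks something like this:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have an OpenAI key

# Ask a GPT-3 model for a short piece of marketing copy.
response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available in 2022
    prompt="Write a two-sentence product description for a reusable coffee cup.",
    max_tokens=60,
    temperature=0.7,  # some creativity, not pure randomness
)
print(response.choices[0].text.strip())
```

The startup’s value lies in everything wrapped around a call like that: prompt design, domain data, tone controls, and the editing workflow.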

However, as Rob Toews wrote for Forbes: “No sophisticated AI can exist without mastery of language,” and we are close to — but not yet at — mastery level. AI-generated text can currently drift off into nonsensical rubbish (including footnotes that reference nothing), make things up and, more insidiously, parrot biases, dangerous untruths, and politically motivated or damaging text. The AI can, in some circumstances, hold up a mirror to show us our human flaws. Students beware, therefore, and read your AI-generated essays thoroughly (even if you have deployed the likes of SpinBot and SpinnerChief), as plagiarism might be the smallest of your woes. The human scribes who decoded the symbols from The Engine are still needed to stand by and weave words into gold (and prevent a potential lawsuit). For these very reasons, in the scientific sphere, there are fears that deploying AI “might exacerbate distrust in science” and that the use of LLMs should be regulated.

As with all new technology, and as TechCrunch states, the rules and regulations will, however, be muddled through while uptake grows, not before. Wired detailed how Google’s guidelines to publishers around 2007 were to avoid “automatically generated content intended to manipulate search rankings”, but now it recognizes the value of AI-generated content that is “the best and most helpful”.

The datasets behind the language models continue to expand (and be updated to move beyond partial or historical snapshots) while the processing power needed to make them accessible shrinks. And people are having fun with the technology as innovation continues, whether critiquing AI-written poetry or watching AI-generated plays. And then there is the future. Smaller, specialized language sets could allow AI note-taking in medical consultations to become the norm (PhenoPad.ai does just this) or help autonomous cars read road signs.

[Image: Kermit the Frog taking notes]

Those working in PR and marketing are rubbing their hands with glee at the prospect of press releases, blog posts, and social media copy at the press of a button (copywriters less so). But — to settle their fears — this is going to be a future in which AI enables, not replaces. It will take on some of the time-consuming work you might not enjoy and leave you with the polishing and preening.

As the AI21 Labs team riffs, this is a future in which “…machines become thought partners”. The subtleties of human turns of phrase, of emotion-imbued verse, of centuries of cultural awareness brought to the fore in literature are not going to be replicated by AI anytime soon. But the bid to produce believable, accurate natural language will make life easier for so many people in some areas, and that is what technology is all about.


Katie Scott is a journalist with more than twenty years of experience writing on everything from tech startups to travel. A former News Editor of Wired.co.uk, she is now based in Sussex after a spell in Hong Kong.
