Robots Telling Stories — Will AI Improve Your Communications?

Simon King
Published in The Startup · Oct 9, 2020 · 8 min read
[Image caption: Robbie was struggling to adjust to working from home.]

In the mid-1700s you knew you were somebody if you had an exclusive audience with a robot. European nobility, according to the stereotypes, were usually fey, bored and yearning for novel distractions (when they weren’t at war with each other). Others tried their best, acting as patrons of the arts and sciences and fancying themselves refined polymaths. Both groups, for different reasons, adored an automaton.

The 18th-century craze for man-made creatures was one of many endeavours that drew science, art and religion together. Priests and scholars debated what it meant to create something that appeared to be alive. For those unfamiliar with automata, they were elaborate, complex, largely clockwork creations (apart from the fraudulent ones with people inside them) that imitated living things. Animals were popular (particularly de Vaucanson’s Digesting Duck), but naturally so were humans. Lifelike in appearance (though rarely to scale), these figures were built to show off their makers’ ingenuity and technical skill, sometimes to reflect on the nature of life, but also to amaze and gain favour with influential people. Some of these automata still exist today, and still amaze.

The Jaquet-Droz automata, created around 1770, are three child-like figures, each engaged in a creative pursuit: a musician, a draughtsman and a writer. The Writer has (present tense; it still works and can be seen in the Musée d’Art et d’Histoire in Neuchâtel, Switzerland) its clockwork input set, and it will then write lines of ‘handwritten’ text as programmed, dipping its feather quill in real ink as it goes.

Why the history lesson?

Two and a half centuries later, we still believe that, whilst our messages can be noted, displayed and conveyed in ever more impressive ways, it is the human that needs to create and input the original message. From moveable type to word processors to vlogs to podcasts, humans are simply putting their thoughts into words, with varying levels of sophistication, and those words are then consumed by a human audience in some way. The medium may change but the message still has a human origin.

In their book The Future of the Professions, Richard & Daniel Susskind explore the likelihood and implications of automating so-called white collar jobs. Far from only threatening the jobs traditionally replaced by automation, artificial intelligence is very likely to replace ‘thinking’ jobs. And relatively soon. The history of automation, from 19th century cotton mills to 1980s car plants, has been primarily one of replacing human labour. Machines that were stronger, more reliable, faster and more robust (and non-unionised) than humans meant the jobs of dozens could be done by one or two. As AI* has caught up with mechanical automation, a new tranche of jobs, previously thought largely immune to automation, is being threatened.

GPT-3 is a new system created by OpenAI. Trained on a vast corpus of text drawn from the internet, it takes a human-written prompt (what might be considered a brief) and generates original text to order, matching the information, language and style requested. It ‘wrote’ a recent essay for The Guardian, justifying its benign intent (well, it would say that, wouldn’t it). It’s worth noting that the piece still needed sub-editing, but not in any way that would have differed particularly from a human-authored piece.
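To make the ‘brief in, copy out’ idea concrete, here is a minimal sketch of how a developer might have asked GPT-3 for text via OpenAI’s Python library as it existed around 2020. The prompt, parameter values and placeholder API key are purely illustrative, and the interface has since evolved.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # illustrative placeholder, not a real key

# The "brief": a short, human-written instruction describing the copy we want.
brief = (
    "Write a 100-word introduction for a company newsletter announcing "
    "a new flexible-working policy. Tone: warm, plain English, no jargon."
)

# Ask the model to complete the brief. Engine name and parameters reflect
# the API of the time and are chosen for illustration only.
response = openai.Completion.create(
    engine="davinci",
    prompt=brief,
    max_tokens=200,   # upper bound on the length of the generated copy
    temperature=0.7,  # higher values give more varied, 'creative' output
)

# The generated copy comes back as plain text, ready for a human sub-editor.
print(response.choices[0].text.strip())
```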

Language and its creative use is one of those skills considered uniquely human. The areas of the brain most associated with forming language (rather than merely expressing it), chiefly Wernicke’s and Broca’s areas, have no clear equivalent in other animals. It’s the sort of thing that sets us apart from our animal cousins. The origins and complexities of language have kept anthropologists and neuroscientists busy for years. It’s also the focus of many a tech project seeking the next level of AI capability and human-like behaviour. The ability to understand language, and to converse effectively, is one of the key measures of successful AI. The standard that AI researchers and engineers still target, 70 years on, is the Turing Test. In layperson’s terms, a true artificial intelligence is one a human can interact with without being able to tell whether they are dealing with a machine or another human.

There will be other GPT-3s to come, more complex, perhaps more specific in their purpose. Technology’s progress might seem rapid, but it’s gradual enough that you only notice it when you look back. It’s also, like evolution, not a straight line. It’s so hard to predict what new technologies will actually end up doing that even their makers don’t know. Humans, the mass population of consumers and users, ultimately shape its course. In 2007, the new iPhone didn’t change the world. Over the course of years, it was the apps, the tools, the things that made use of this powerful rectangle people carried about that changed the world. The change was a combination of technology, the behaviour of people, and yes, their manipulation and exploitation too. It is, however, the application, not the technology itself, that changes things.

Don’t Panic.

What if, just around the corner, there’s an AI that can write copy that is human-like and that fulfils the fundamentals of good communication: concise, clear, considerate, convincing? One that can comb through terabytes of emails, marketing messages, external conversations about a company or issue, even spoken exchanges, and create content accordingly. What’s more, it won’t question the task it’s given, it won’t make financial or ethical demands, and it will be more reliable than a human. The implications are massive, not least for those in areas like PR and journalism (and by extension wider society), where copy leans more towards prose than poetry.

In business, this type of AI has the potential to radically change communications. And like almost every application of AI, there’s hope and there’s threat. Removing the human element raises concerns. AI is only as good as the information it’s given. If you give it biased, unethical input, it won’t necessarily see it as such and will produce biased and unethical, albeit well-written, copy. Then again, most communications professionals will be familiar with the pains of trying to edit the random thoughts of a leader into something coherent. Or reducing the complexities of some arcane process or role into something digestible to non-experts. Or livening up some uninspiring, or even depressing, announcement. AI could be here to remove that suffering.

Doesn’t this mean that communications professionals are just the next in line for the AI guillotine? Will they go the way of lamplighters, switchboard operators and aircraft listeners?

Over the last 200 years automation has focused primarily on occupations sometimes referred to as the Three Ds: dirty, dangerous and dull. Whilst few offices could be considered dirty, they can be dull (yes, really), and office-based jobs can be dangerous to one’s mental health. AI is undoubtedly set to reduce, and possibly replace, many process-driven roles at which it is simply better: more accurate and quicker. That includes things like accounting, drafting or checking legal documents, and now, perhaps, writing copy. The very reasonable argument goes that removing these tiresome processes from human responsibility frees the lawyer, accountant or marketer to focus on more creative, problem-solving tasks. The complex things we humans are (for now) much better at.

Some reading this will already have embraced an element of automation of the writing process, through platforms such as Grammarly or the autocomplete function in the likes of Google Docs. This is a part of the gradual process that has taken us from typewriters to spelling and grammar checks to voice-to-text systems. And just as shorthand typists or dictionary salespeople had to adjust, so will communicators.

How long before you input a jumble of information, define some parameters, and have a press release or company statement ready in seconds? How long before you don’t have to input anything at all; just request the output because the AI monitors all the relevant data throughout an organisation anyway? At first this might be the preserve of skilled, trained users, like Photoshop. But how quickly have we all learned to adjust photos on our phones in a way few could a decade ago? Those filters? Those lighting adjustments? That’s AI (of a type) at work, reading the image, ‘understanding’ how light or movement works. Will everyone have access to their verbal Photoshop to tweak and improve their written content without actually knowing what they’re doing?
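As a speculative illustration of that ‘jumble of information in, statement out’ workflow, the sketch below imagines a small helper that assembles raw facts and a few house-style parameters into a prompt for a text-generating model. The function names, parameters and the generate() stand-in are hypothetical, not any real product’s API.

```python
from typing import List

def build_press_release_prompt(facts: List[str], tone: str, word_limit: int) -> str:
    """Turn a jumble of facts plus a few parameters into a single brief."""
    bullet_points = "\n".join(f"- {fact}" for fact in facts)
    return (
        f"Write a press release of at most {word_limit} words.\n"
        f"Tone: {tone}.\n"
        f"Base it only on these facts:\n{bullet_points}\n"
    )

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    raise NotImplementedError("Wire this up to whatever text generator you use.")

# Illustrative use: raw notes in, finished-sounding copy out.
notes = [
    "Acme Ltd opens a new office in Leeds on 1 March",
    "40 new jobs, mostly in customer support",
    "Quote from the CEO about investing in the region",
]
prompt = build_press_release_prompt(notes, tone="confident but plain", word_limit=300)
# draft = generate(prompt)
```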

This is not about speculating on the trajectory of AI or the redundancy of jobs. Rather, it’s about what happens to a role that focuses on getting messages on behalf of one entity heard and acted on by another. Yes, communications professionals would probably be well advised to brush up on their tech skills. They also need to look at what they do, not how they do it. Communication is being disrupted. Not just the platforms but the fundamental nature of communication.

Time to reflect.

In business, AI is already taking on things like culture and engagement ‘temperature checks’, talent allocation and risk analysis. Now it’s about to write as well. This means that communicators need to think hard about what they do, in order to prepare for a time when they don’t have to do these process-driven, analytical tasks. A time when they’re freed from focusing on deliverables, on measurable outputs rather than the harder-to-define but much more important outcomes. A time when they can focus on people, collaboration and problem solving.

Not just AI itself, but the ‘threat’ of AI, is an opportunity to rethink communications. To stop thinking about communications as a function and start looking at communications as a service. A service open to all, to use in whatever way the users want. A flexible, adaptive, integrated service. A service that can help de-centralise decision-making, enable voices to be heard, and join people and expertise together. This is what communication should be about, not writing content for the intranet or summarising the CEO’s thoughts for the company magazine.

In the 1700s, thinkers and religious leaders argued about men (as it inevitably was) playing god. To what degree should humans attempt to replicate the work of the almighty? Were the secrets of life divine secrets, and would knowledge of them be the path to certain doom for our species? Now, as then, people will spend many years arguing about whether AI is the path to self-destruction or an unshackled utopia. In the meantime, those in communications can look at their small part of the world, and reflect on how they can stay ahead of a clever robot that can write.

© 2020 Simon King.

Simon King is a manager at JLA, the speaker agency. He is the author of the book Predictability — Our Search for Certainty in an Uncertain World and occasionally writes about communication, leadership and culture.

[* author’s note: AI as used here conflates a variety of types of artificial intelligence, as well as machine learning and complex algorithms. Some systems referred to as AI are relatively simplistic, little more than an output from a processed input, yet they are sometimes lumped together with things like Artificial General Intelligence, which is very different. The author wishes it to be known that he’s aware this can cause confusion and frustration, but the focus of this article is more about human roles than the specifics of AI and technology.]
