Deepfaking Humanity

Brian Kelly
Published in meaningful
Apr 27, 2023 · 4 min read
Photo by Serge Le Strat on Unsplash

What’s all the buzz about ChatGPT? No, really, I’m asking.

The first time I saw copy generated by ChatGPT, I was not amazed.

My partner at Meaningful asked it to write a short promo for one of our Be Meaningful Podcast episodes. The uninspired result leaned on generalities and clichés to feign substance but, in the end, used 150 words to say virtually nothing.

All veneer. No substance.

My conclusion on this experiment, and pretty much everything else I’ve seen, is that AI is essentially a deepfake of humanity.

And there’s a reason for this. The technology.

AI writes poorly because it runs on predictive analytics, which calculates the most probable outcome.

In the same way predictive analytics reliably tells you what the business outcome will be for a specific set of actions, ChatGPT scours the library of human vernacular to serve up the most probable way of saying something.

Which is a problem if you want to stand out.

Saying something the way most people are likely to say it may score points for accuracy or efficiency, but it fails at doing something that only real humans can do: say something in a compelling or unexpected way.

Like Shakespeare. Or Hemingway. Or Tim Delaney.
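
(For the technically curious, here's a toy sketch of that "most probable next word" idea. It's my own illustration with a made-up three-sentence corpus, not how ChatGPT is actually built, but it shows why always picking the likeliest continuation produces the most expected, least surprising copy.)

```python
from collections import Counter, defaultdict

# Toy stand-in for "the library of human vernacular": three short sentences.
corpus = (
    "our podcast explores meaningful stories . "
    "our podcast shares meaningful conversations . "
    "our brand tells meaningful stories ."
).split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_probable_copy(start, length=5):
    """Always pick the single likeliest next word (greedy decoding)."""
    words = [start]
    for _ in range(length):
        options = next_word_counts.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(most_probable_copy("our"))
# Prints "our podcast explores meaningful stories ." -- the safest,
# most expected phrasing, never the surprising one.
```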

My experience of being underwhelmed with ChatGPT was only reinforced during a recent airing of 60 Minutes that featured a story on AI. I sat there watching Scott Pelley tear up as he read Bard’s expository attempt on the prompt “For sale: baby shoes, never worn.”

What Bard spat out was little more than a litany of obvious inferences anyone would draw from those six words: a woman was pregnant… bought the shoes for the baby… but unfortunately the infant died… and the grieving mother was now selling the shoes with a heavy heart.

By explaining the obvious, AI had stripped a haunting story of its power, repaving it with bad writing that failed to stir emotion or ignite imagination. Bard had made a personal tragedy sound like someone washing dishes.

Similar to what ChatGPT did with our podcast. And what it will do to your email marketing or website copy. Which might not matter if your marketing is nothing more than throwing chum in the water and seeing what you net. (Something, sadly, that most marketing has devolved to in this data-driven age. Why sweat the copy if you can continuously replace it with something else until the needle moves? This explains why, despite all the automated marketing platforms and data analytics, click-through rates are on par with direct marketing from the 1980s. That’s not digital intelligence. It’s just seeing how shitty you’re doing in real time. But I digress…)

Reducing work? Or payroll?

Writing critique aside, the critical question is not what ChatGPT or Bard can do, but why it will be used. Like all automation, AI can be used to take repetitive and menial tasks off our plate so we’re able to focus our time and energy on higher value activities, like customer relations or cultivating innovative approaches to market needs.

Or it can be used to replace us.

Which catapults us into the worthwhile examination of Shareholder vs Stakeholder Capitalism… something I wrote about a few months back.

When I was Director of Corporate Marketing for a SaaS company in the healthcare space, automation was a huge part of the value proposition. Which scared people, because they interpreted it as a license to reduce staff. So we went to great pains to communicate that technology was not an occupational threat, but actually freed staff to engage in higher-value work that brought personal reward and grew the business.

Understanding that people found themselves spending most of their time doing things they hated, but had to do just to remain in business, we penned the pithy “Don’t hate, Automate,” a line that repositioned automation from a threat to an ally.

AI loves to do the things we hate. (Clinicient messaging for a 2016 industry conference)

This mantra instantly resonated with a universal gripe and animated every person we met to conjure up all the wonderful ways they would redeem the time for patient care. Which is why they got into the business in the first place.

You can’t automate creativity.

Let me conclude with this sage advice: let technology do what it does well — be objective and fast — and let people do what they do well — be subjective and creative.

This is the reason AI should be used. Not to do our work for us. But to do the mundane work that keeps us from doing our best work.

The same day I published this story, the New York Times ran an article on ChatGPT that only affirms everything I wrote. If you have proof to the contrary, please share it with me… brian@bemeaningful.co


Brian Kelly, meaningful
I help brands find meaning in a world that’s looking for it.