The Author and Me and ChatGPT

Leigh Engel
Kellogg Business Journal
6 min read · May 2, 2024

Confronting the Disappointment of “wait, did an AI write this?”

During my favorite product management class last term, a guest speaker shared that she was writing a new book about product development in which generative AI was her “co-author.”

The speaker was undeniably impressive. Her CV spanned investment banking, consulting, and product management, where she had launched features at two brand-name companies that I recognized instantly. She was charismatic, too, and held the attention of our evening class for 45 minutes.

And yet, I was surprised by the wave of disappointment I felt when she described ChatGPT’s prominent role in her upcoming book. My reaction was connected to a question that irks me every time I ask it: “wait, is this thing I’m reading from a human or an AI?” As an AI product designer, I am well aware of the proliferation of artificially generated works in the mainstream. But as a frequent reader of news articles, blogs, and cover letters — media where I expect original, personal thinking — discovering AI involvement ignites an undeniable sense of rancor in me every time. Why?

Much has been written about the moment when humans can no longer discern whether what we are consuming comes from a person or a machine. And many of those writings are quick to reference the Turing test.

Will Oremus and Cade Metz do so in their respective articles: Google’s AI passed a famous test — and showed how the test is broken (Washington Post) and How Smart Are the Robots Getting? (NYT) both frame the invention of large language models as epochal in the history of computing.

Both authors emphasize that while today’s AIs can generate convincingly human prose, that ability reflects imitation rather than intelligence. And both suggest we need a new test, which seems to be the position of many who invoke Turing in the how-should-we-qualify-gen-AI conversation.

I don’t disagree; it’s in the beat after that I tend to find myself alone. Instead of imagining how this new test should be composed (as both of the aforementioned authors do), I find myself dwelling on the significance of the fact that we need a new one in the first place.

We are all asking ourselves, “real or not real?” more often.

AI was not the first media invention to breed questions of veracity. As 19th-century yellow journalism or the 2016 US election show, humans — particularly politically motivated ones — have been able to propagate disinformation at scale for well over a century. But the uptick in widespread distrust seems particularly potent today.

In America alone: 50% of people believe that news organizations deliberately mislead them (Gallup), faith in government has plummeted from 77% to 22% over the last 60 years (The Atlantic), and every successive generation since the Baby Boomers has reported lower levels of social trust (Politico).

Jedediah Britton-Purdy sums up our skeptical climate:

“…our physical, ecological, climatic world and our communicative world of words and images are now uncertain, shifting, shimmering uncannily between what is solid and reliable and what is ontological and epistemic quicksand. Never mind whether you can trust what other people will do. Can you trust them to be other people at all?” (The Atlantic)

“Other people.” No wonder my connection to a writer with an “AI co-author” grows tenuous. I am distracted by the knowledge that I’m not even from the same species as…it.

I mentioned earlier that I read a lot of cover letters. One of my primary responsibilities as an MBA career peer advisor is to review these for students who, if they are meeting with me, tend to be seeking product management jobs in AI.

While cover letters are often viewed as superfluous in tech, I openly encourage writing them because doing so demonstrates what is arguably the most important quality in a product manager: persuasive communication. So, the first time I realized that a letter I was 15 minutes into editing came from ChatGPT, I was nonplussed. I didn’t know what to think. On one hand, fluency with the very technology the applicant was trying to build was compelling. On the other, his claims of independent thinking, also in the note, felt undermined. And most salient, I remember feeling that the intimate, if one-sided, dialogue between him and me had been interrupted. I couldn’t get back into it as I kept reading.

By Turing test standards this cover letter didn’t pass, but that wasn’t the point. It didn’t matter that it was excessive adjective use that prompted me to ask the student, “did you use chat[GPT] to do this?” The doubt was there. The thread was cut.

In writing this, I’m still torn as to how much I endorse using generative AI to write personal statements. The student in this example was trying to write the best letter he could. The fact that he was meeting with me at all demonstrated thoughtful effort. I don’t fault him. Plus, in the months since, I’ve read loads more cover letters that leveraged ChatGPT — the most compelling of which came from classmates of mine for whom English is a second language. ChatGPT has been a game changer for them, and once they meet with me, a native English speaker who can catch the occasional unnatural tendencies of AI writing, I believe the combination of resources leaves them better off than they would have been otherwise.

I just can’t shake how much I resent it as a reader.

I was an English major in college, and I’ve often contemplated why “AI-generated writing” bothers me.

After some self-reflection, I don’t actually think I’m upset that people can now do more easily what I invested so much time into practicing: writing well ;). But the experience of reading any of it feels somehow cheapened.

I’ve long felt a connection to what I read and the people who write it. I have this memory of reading The Federalist Papers while at UVa and being stunned that those 85 essays just poured out of Madison, Jay, and Hamilton. Hamilton’s “The Consequences of Hostilities Between the States” has no footnotes. His “source” was familiarity with the subject. And while I’m not advocating for the abandonment of references in political opinion today, I remember being struck by how clearly I could picture the essay being written. Focused. Thoughtful. Uninterrupted (almost excessively so, given the number of run-on sentences). These are the qualities I most fear losing to an AI co-author.

Also at UVa, I had a professor who claimed that “no thought is real until it is written down.” Writing a thought down forces further consideration; the words have to pass through your own BS filter to make it to the page. I think that inspires confidence in both writer and reader. Maybe Metz’s or Oremus’s permutation of the Turing test will reestablish this confidence. Maybe we’ve entered an era of skeptical readership. Maybe generative AI will elevate the work of authors we wouldn’t have heard from otherwise — like my English-as-a-second-language peers. That’s my most optimistic hope. But as we’ve done for ages, we’ll have to read to find out.

You can reach Leigh, MMM ’24, on her LinkedIn.

Sources:

  • Google’s AI passed a famous test — and showed how the test is broken by Will Oremus (Washington Post)
  • How Smart are the Robots Getting? by Cade Metz (NYT)
  • American Views 2022: Part 2, Trust, Media and Democracy (Gallup and the Knight Foundation)
  • We’ve Been Thinking About America’s Trust Collapse All Wrong by Jedediah Britton-Purdy (The Atlantic)
  • He Diagnosed America’s Trust Problem. Here’s Why He’s Hopeful Now by Ian Ward (Politico)
  • The Federalist Papers by Alexander Hamilton, John Jay and James Madison
