In Defense of the Blank Page
Why Generative AI Should Not Replace Writer’s Block
Generative AI is often considered a cure for Writer’s Block and for staring endlessly at the “blank page.” It is no such thing. If words always come on a whim, their worth will be questionable.
By Vincent J. Carchidi
Writing, Thinking, and Automating
During an interview in 2023 with a company based in Washington, D.C., I was asked the following, slightly paraphrased question: “We do our best to keep up with the artificial intelligence [AI] revolution here, so how have you been making use of AI tools in your work?”
I immediately thought this was a strange question. I was not being asked, “Do you use AI tools in your work?” Instead, the hiring manager assumed that I use AI tools in my work and, furthermore, that my use of these tools should be quizzed, despite their having little, if any, relevance to the job itself. In fairness, both my public policy work and academic scholarship concern emerging technologies. My sense, however, was that this question was asked of all candidates and not at all tailored to my background (you know the feeling).
The hiring manager’s question reflects a broader disposition toward AI tools like ChatGPT, namely that, at a minimum, they can at last free individuals from the terror of the blank page. In some corners, this freedom from the mundane process of beginning a written work has an air not of voluntary enjoyment but of obligation, a sense that one must use generative AI to this end. ‘Sure,’ the disposition seems to go, ‘generative AI isn’t as intelligent as we expected, but it can improve our lives and enhance our productivity in less revolutionary ways.’ Generative AI as a “writing assistant” has emerged as one such all-too-reasonable use-case.
The sentiment can be found close to AI’s home. Yann LeCun, Meta’s Chief AI Scientist, posted on X in 2022 that the (ill-fated) Large Language Model (LLM) “Galactica” “is to paper writing as driving assistance is to driving. It won’t write papers automatically for you, but it will greatly reduce your cognitive load while you write them.” He posted again in 2023, once his LLM fatigue had set in, writing that, although “LLMs are still making sh*t up, [t]hat’s fine if you use them as writing assistants.”
Some are more bullish on generative AI’s writing assistance than LeCun. Innovation and entrepreneurship scholar Ethan Mollick of Wharton recently asked in a blog post whether his latest book will be “the last one I write that features mostly my own writing.” (One naturally wonders why his name would be associated with it, but we put this pressing matter aside.) Mollick details interesting use-cases of generative AI. He explains how he prompts LLM-powered chatbots to adopt different personalities, one of which is “Ozymandias,” who is tasked by Mollick with helping him write a book chapter: “Your job is to offer critical feedback to help improve the book. You speak in a pompous, self-important voice but are very helpful and focused on simplifying things.”
This “Cyborg” writing method, as Mollick calls it, is certainly interesting. AI is meant to become a partner in one’s writing, adhering to the personalized requests of the author, who ultimately chooses which words do and do not make it into the final product. What, though, is changing in the writing process as a result of leveraging this “co-intelligence”?
Jane Rosenzweig, Director of Harvard College’s Writing Center, argues that generative AI as a writing assistant misses the point of writing. The promise of freedom “from the drudgery of writing so that we can use that time to do more important work,” she says, presents a problem: “In many cases, writing is the important work.” She further observes that writing is the “process of getting something onto the page [that] helps us figure out what we think — about a topic, a problem or an idea. If we turn to AI to do the writing, we’re not going to be doing the thinking either.” Rosenzweig goes on to illustrate how adopting generative AI writing assistance is understandable but quickly turns into a retreat of the human author from the actual exercise of writing to, ironically, the work of assisting the AI as the human pieces together its outputs.
Rosenzweig is on to something here, but I want to go further — what does writing have to do with thinking? And what does that have to do with generative AI and the blank page?
Deeper Into the Writer’s Psyche
One often hears of a writer having a “voice.” What’s meant by this is likely intuitive to anyone who writes professionally or for pleasure (or both). Still, it helps to unpack it.
Having a “voice” in writing can mean several things (having a “voice” in political writing, for example, may have more to do with reach and coverage than with what interests us here). One relevant interpretation defines a “voice” in writing as being recognizable to the reader as an author who is actively and thoughtfully engaging with the topic at hand from their own subjective perspective. The writer who has a voice espouses a point of view: the perspective of someone with an experience appropriately conveyed through words on a page. In contrast, the writing in instruction manuals for new furniture tends not to have a “voice” in this sense, as it simply describes an assembly process rather than expressing a personal judgment or experience.
To have one’s “voice” reflect off the page into another’s mind as they read the words is not to speak without purpose, as if writing were an exercise in tedious grammatical combinations. The writer’s “voice” reflects their considered opinions and speculations; it tells the reader that they are engaging with the thoughts of a mindful human being. There is an assertiveness to writing, not in the sense that good writing is a series of bold proclamations, but in the sense that the combinations of words are the chosen product of someone who has “something to say.”
Generative AI does none of this. To suggest it does is to misunderstand the technology. The excessive politeness of ChatGPT is only the most visible case: all LLMs with a “conversational” style have been fine-tuned in such a way as to make them superficially polite and ready to appease. They tell us what we want to hear, not what we should hear. Indeed, evolution and intelligence researcher David Krakauer hit the nail on the head in observing that LLMs like GPT-4 “don’t have to survive in the world. They have to survive by convincing us they’re interesting to read.” This insight goes beyond the superficial politeness of some chatbots to include their entire artificial species — even X’s “Grok,” designed to be “anti-woke,” uses its words in a way that conforms to the expectations of the human end-user.
Having the thoughtless AI assistant generate words for someone staring at the blank page is no sin. The problem with relying on generative AI in writing is that self-confidence is as much a part of good writing as, say, proper grammar. Having a voice means knowing what one is about — knowing who one is, what one believes, and where one wants to position oneself in a broader community of individuals who have similarly carved out their niches. Self-confidence is a prerequisite for the active and thoughtfully engaged voice one finds in good writing. Generative AI, in contrast, is “boring.” It has no self-confidence because it has no self. It offers no flair, no spice, nothing that may cause your thoughts to churn the way another human’s words can.
Rosenzweig’s pushback against AI in writing makes sense against this background: “I didn’t use an AI assistant because I was not interested in finding out what an algorithm would predict someone could say about this topic. I wanted to figure out what was troubling me about it.” This helps us see what is flawed about the “Cyborg” or “co-intelligence” approach promoted by Mollick: when you work with another human being on a shared project of mutual interest, you often have to put aside preconceptions or opinions about the topic to avoid unnecessary friction. Doing this properly amounts to possessing a very specific and very useful skill, one that grants you the ability to “see” further into the landscape than you previously could and opens up a world of thought and belief that was previously inaccessible. To “see” further, in this sense, is a choice that individuals make — it cannot be made for them.
The process of writing with another person is no different. Generative AI writing assistants, by their very nature, are incapable of replicating the human-to-human experience. They are capable of “combinatorial creativity” which, as Giorgio Franceschelli and Mirco Musolesi put it, “involves making unfamiliar combinations of familiar ideas.” Remember when ChatGPT was making waves for writing about losing one’s sock in the laundry room in the style of the Declaration of Independence? That’s a prime example of combinatorial creativity — it takes the elements of formal eighteenth-century English and the cliché of losing one’s sock in the laundry and puts them together in a new way.
Of course, a human had to tell ChatGPT to do this; that is, a human had to decide which concepts they wanted to see combined. This illustrates the unfortunately sharp limits of generative AI: these systems not only fail to engage with the boundaries of conceptual spaces and beyond, but they also have no autonomy in selecting and applying concepts. A human writer must juggle all of this while ensuring that they do not lose their voice amid multiple and often competing responsibilities.
The Value of Frustrated Writing
Frustration is a flashing indicator light on the writer’s figurative dashboard. It signals a choice: write the piece based on your current impression of the topic, or accept that the frustration of not yet having accomplished what you set out to do is a sign that your own thoughts on the matter are insufficiently clear. The value of this frustration is frequently lost on those promoting generative AI in writing because they mistakenly believe that thoughts worth publicizing are ready-made and that the real work is finding a way to put them on a page.
Rigorous thoughts, however, are not ready-made, and good writing entails an acceptance of this fact; an acceptance, that is, of the frustrating process of articulating, reframing, and refining fragments of thoughts through words.
While it is commonplace for some — like the aforementioned interviewer — to accept that generative AI is not up to the task of scholarly writing, it became somewhat cliché in 2023 to treat other, more creative types of writing as territory the technology had already conquered.
The pernicious thing about conceiving of generative AI as a writing assistant is the belief that the process of adopting terms (jargon) that denote specific meanings in academic writing is radically different from the process of writing about one’s considered opinions and speculations. Emotions, attachments and regrets, trials and tribulations, elation or depression, and all manner of other experiences require careful deliberation to put into words that adequately convey one’s thoughts to another. Scholarly writing and creative writing both aim at the refinement of thoughts and concepts, differing only in the purposes of this refinement.
I do, to be sure, use writing software. Grammarly is perhaps the best spelling- and grammar-checking software I have come across, and I use it routinely. It catches mistakes I would likely miss otherwise, even with careful proofreading. Grammarly overstays its welcome, though, when it tries to simplify or remove phrasings that I have deliberately chosen (right now it is, annoyingly, highlighting “perhaps” and “to be sure” in the preceding sentences). Sometimes its re-phrasings are helpful. More often, they water down my voice into something easy — but good writing reflects good thinking, and good thinking isn’t easy!
What I mean by this is quite literal — when I use the phrase, for example, “to be sure,” I am not merely preparing the reader for a qualification to my argument; rather, I am preparing myself for a qualification that I may not even be aware of yet. What the reader sees on the page is the end result of my own wandering around in the dark until I find the words I need. Much like Michael Scott, I often begin writing a sentence with no real idea of where it is heading, yet the words I choose carry a certain faith that there are thoughts in my head worth extracting in rather precise ways. Words are their vehicles.
Does this always work out? No! Much of what I have written is in a downtrodden and generically named “documents” folder on my computer whose older entries are rarely visited. This, however, is where the value of frustration in writing is truly evident. Each of my unpublished writings was an attempt to capture and structure a series of thoughts that I consider to be the product of deliberation and reflection. To have anything published in a reputable outlet, no matter the field it occupies, is to accomplish this feat. Editors often get flak for being a persnickety bunch — and this is sometimes true — but good editors know that the best writing is as much in the details as the higher-level structure. Thus, to have a good editor accept one’s writing is tantamount to having one’s thoughts recognized as coherent and productive by someone who has the skill of identifying them. Writing is at once a personal and a social activity. (Grammarly, by the way, does not like my use of “persnickety.” Too bad for Grammarly.)
Now, let me be clear: editors often look at my written thoughts and politely say “no.” It’s not my favorite reaction. In fact, it can be downright frustrating. This frustration is felt most keenly when the article has already been bounced between outlets. “How else,” I might think to myself, “can I even write this?” Having already put (what I believed to be) my full effort into the piece, facing the prospect of yet another go-around is frustrating and burdensome. Yet, stubborn as I am (and I am), the process of once again facing down the body of text that previously proved insufficient has a habit of yielding clarity. Relying on generative AI as a writing assistant or a co-intelligence enables the writer simply to avoid facing this frustration, leaving them in the dark.
Conclusion
Writing is an assertive act. Good writing reflects the confidence of the writer who is aware of how they stand in relation to others, actively and thoughtfully engaging with the thoughts and beliefs of others in a manner of their own choosing. A writer chooses what to think about and how to think about it. Tinkering with language in this way is not only practical but fulfilling. As philosopher James McGilvray observed, “It is remarkable that everyone routinely uses language creatively, and gets satisfaction from doing so.” His remark is echoed by AI expert Alberto Romero’s contention that it is a “pleasure” to put words on a blank page.
None of this means that generative AI does not have other uses. I have found it fascinating to toy around with in its own right. I also, professionally speaking, feel obligated to see for myself how well the latest and greatest accessible models operate. The issue with AI in writing today is specific to generative AI — these systems do not have minds like ours, and show no promise of reaching such heights. It is perfectly conceivable that an intelligent system will one day be able to fulfill the role of a “co-intelligence” in writing, as Mollick put it, but I do not believe that day is here. It would thus be a shame if generative AI were taken as a “solution” to a problem — the frustration of the blank page — that should continue to exist.