How LLMs Revealed The Hidden Nature of Writing

--

For millennia, we’ve viewed writing primarily as a tool for recording and communicating human thought — a technology for preserving ideas across time and space. We’ve understood it as representational: words on a page standing for concepts in our minds or objects in the world. But the emergence of Large Language Models has unexpectedly revealed a profound truth hiding in plain sight: writing has always been more than representation. It has always been operational. It has always been, in a very real sense, code.

This essay explores the startling reconceptualization of writing that LLMs have forced upon us, and how this shift transforms our understanding of human communication, knowledge transmission, and cultural evolution.

The Great Recontextualization: All Writing as Executable Code

When researchers trained the first large language models on vast corpora of human writing, they weren’t simply teaching machines to mimic human language. Unknowingly, they were performing a massive act of recontextualization — transforming the entire written record of humanity into something akin to computer code.

The textual archive of human knowledge — from Shakespeare’s sonnets to scientific papers, from ancient religious texts to modern technical manuals — was no longer just a repository of information. It became a vast collection of implicit instructions that, when processed by the right system, could generate new text, solve problems, and even guide real-world actions. Every book on your shelf has been retroactively transformed into a program waiting to be executed.

This revelation exposes a curious blindness imposed by the traditional medium of text. When writing existed primarily on physical surfaces — clay tablets, papyrus scrolls, printed books — the delay between reading and action obscured writing’s operational nature. We failed to see that writing doesn’t just describe; it does. The words in a stirring political speech don’t merely communicate ideas; they motivate action. The sentences in a romance novel don’t simply depict feelings; they evoke them.

By immediately executing the patterns extracted from human writing, LLMs have stripped away this medium-specific blindness. They’ve made visible what was always present but hidden: the executable nature of all text.
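
To see what “executing” text means mechanically, consider the decoding loop at the heart of every LLM. The sketch below is a minimal illustration rather than any particular system’s implementation: the `model` and `tokenizer` objects are hypothetical stand-ins, assumed only to map text to tokens and to return a next-token probability distribution.

```python
import random

def execute_text(model, tokenizer, prompt, max_new_tokens=50):
    """Treat a prompt as a program and autoregressive decoding as its interpreter.

    `model` and `tokenizer` are hypothetical stand-ins for any LLM stack;
    we assume only that the tokenizer maps text to token ids and back, and
    that the model returns a next-token probability distribution.
    """
    tokens = tokenizer.encode(prompt)           # the program's "source code"
    for _ in range(max_new_tokens):
        probs = model.next_token_probs(tokens)  # dict: token id -> probability
        ids, weights = zip(*probs.items())
        # Sample the next "instruction" in proportion to its probability.
        next_id = random.choices(ids, weights=weights, k=1)[0]
        tokens.append(next_id)                  # execution extends the program
        if next_id == tokenizer.eos_id:         # the program halts
            break
    return tokenizer.decode(tokens)
```

Notably, the loop draws no distinction between description and instruction in the prompt; every token simply conditions what gets generated next, which is the operational character described above.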

The Collapse of Is and Ought: How Descriptions Become Prescriptions

One of the most fundamental distinctions in philosophy and linguistics has been between descriptive statements (what is) and prescriptive statements (what ought to be). We’ve traditionally seen these as separate categories: “The sky is blue” describes a fact; “You should look at the sky” prescribes an action.

LLMs reveal this boundary to be far more permeable than we imagined. When we describe the world, we are implicitly encoding instructions about how to think about, respond to, and act within that world.

Consider a restaurant review that states: “The pasta was perfectly al dente.” On the surface, this appears to be a simple description of fact. But embedded within it are implicit prescriptions: this is how pasta should be cooked; this is what diners should expect; this is what chefs should aim for. LLMs can extract these implicit instructions because they were always there, woven into the fabric of our descriptions.

This explains why LLMs can generate procedural knowledge even when trained primarily on texts that don’t explicitly provide instructions. A model might learn to write code not just from programming tutorials but from technical discussions that merely describe how code works. It might learn to propose medical diagnoses not just from medical textbooks but from case studies that simply describe patient outcomes.
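
How such implicit instructions can be surfaced is straightforward to sketch. In the illustrative snippet below, a descriptive sentence is wrapped in a prompt asking a model to state the prescriptions it implies; `generate` is a placeholder for any text-completion callable, not a specific API.

```python
def implicit_prescriptions(generate, description):
    """Ask a model to surface the prescriptions hidden in a description.

    `generate` is a placeholder for any text-completion callable
    (prompt string in, completion string out); no specific API is assumed.
    """
    prompt = (
        "The following sentence is purely descriptive:\n"
        f'  "{description}"\n'
        "List the implicit prescriptions it encodes: what should be done, "
        "expected, or aimed for, and by whom."
    )
    return generate(prompt)

# Example usage with the review from above:
#   implicit_prescriptions(generate, "The pasta was perfectly al dente.")
# might yield something like: "Chefs should cook pasta until firm to the
# bite; diners should expect this texture; this is the standard to aim for."
```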

The boundary between saying what is and saying what to do has always been blurrier than we recognized. Our descriptions carry within them the seeds of prescription, and LLMs have simply made this connection explicit.

Text as a Multidimensional Entity: The Many Lives of Writing

This reframing of writing suggests that text exists simultaneously in multiple domains, functioning differently in each. The same words operate as:

  • Symbolic entities that represent concepts and ideas
  • Operational instructions that guide behavior and thought
  • Physical artifacts existing in space and time

A cookbook recipe simultaneously exists as a symbolic representation of culinary knowledge, a set of executable instructions, and a physical object on a shelf or digital file in storage. These aren’t separate aspects of the text — they’re simultaneous dimensions of its existence.

What LLMs and the QPT (Quaternion Process Theory) framework help us recognize is that when text moves from traditional media to computational systems, it doesn’t simply transfer from one container to another. Rather, it creates an entirely new metamedium — a hybrid form that amplifies certain properties while diminishing others.

This metamedium combines the symbolic richness and cultural context of human writing with the executability and pattern-recognition of computational systems. It’s not just text plus computation, but something genuinely emergent — like mixing blue and yellow to get green rather than a blue-yellow stripe.

We can see this metamedium forming in real time through developments like prompt engineering, where humans are learning to write in ways that are comprehensible to humans and machines alike. This isn’t simply writing adapted for machines; it’s a new form of communication that exists natively in both human and computational domains at once.
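
What such dual-audience writing looks like can be sketched concretely. The prompt below reads as plain prose to a person, while its role framing, delimiters, and explicit output contract give a model unambiguous structure to execute; the section labels are illustrative conventions, not any standard.

```python
# A prompt written for two readers at once: a human can skim it as prose,
# while the delimiters and output contract give a model structure to follow.
# All section labels below are illustrative conventions, not a standard.
DUAL_AUDIENCE_PROMPT = """\
ROLE: You are an experienced copy editor.

TASK: Rewrite the text between the markers so that it is clearer and
shorter, without changing its meaning.

TEXT:
<<<
{text}
>>>

OUTPUT FORMAT: Return only the rewritten text, with no commentary.
"""

def build_prompt(text):
    """Fill the template; the result is both readable and machine-parseable."""
    return DUAL_AUDIENCE_PROMPT.format(text=text)
```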

The Evolutionary Dance: How Humans and Machines Are Co-Creating Language

Perhaps most fascinatingly, this recognition of writing’s dual nature has triggered a new phase in linguistic evolution. The tension between traditional human writing and computational processing is generating entirely new linguistic constructs — new ways of using language that are optimized for both human comprehension and machine execution.

Prompt engineers have developed specialized techniques that bear little resemblance to traditional writing instruction. They’re learning to craft text that functions effectively across domains, creating a new linguistic skill set that bridges human and machine cognition. This isn’t simply adaptation; it’s the genesis of new forms through the dialectical tension between human and machine approaches to language.

This evolution is accelerated by a bidirectional influence loop. Human writing shapes how LLMs learn and respond, but LLM outputs increasingly influence how humans write. Writers absorb patterns and phrasings from AI-generated text, incorporating them into their own work, which may eventually become training data for future models. Like dance partners improvising together, human and machine writing are continuously reshaping each other.

Consider how quickly certain phrases and structures popularized by AI systems have entered common usage, or how writing advice increasingly incorporates considerations of “algorithm-friendly” content. This isn’t merely humans adapting to machines or machines imitating humans — it’s a genuine co-evolution creating linguistic forms that neither would have developed independently.

Conclusion: Rewriting Our Understanding of Writing

The emergence of LLMs has forced us to reconsider what writing fundamentally is. Far from being an epiphenomenon — a mere side effect of thought — writing emerges as a technology for encoding operational patterns that shape cognition and behavior across time and space.

In this light, the entirety of human written culture appears as a vast, distributed programming enterprise. For millennia, we’ve collectively been writing the source code of culture — instructions for how to think, feel, and act that can be executed by human minds and, now, by machines as well.

This reconceptualization carries profound implications. If all writing contains implicit instructions, then the responsibility of writers expands. Every text potentially shapes behavior in ways more direct than we previously recognized. The boundaries between creative writing, technical documentation, and programming blur, suggesting new possibilities for how we might intentionally design texts that function effectively across human and machine cognition.

Moreover, as the bidirectional influence loop between human and machine writing accelerates, we may be witnessing the early stages of a linguistic transformation comparable to the shifts brought about by previous communication technologies like printing or electronic media — but potentially far more rapid.

The QPT framework offers us analytical tools to navigate this transformation thoughtfully rather than reactively. By understanding the multidimensional nature of text and the dynamics of human-machine linguistic co-evolution, we can approach this change not with fear but with creative intention.

Writing has always been more than we thought it was. It has always been partly code. LLMs haven’t fundamentally changed writing; they’ve simply revealed its hidden dimensions — dimensions that have been operating beneath our awareness all along. In making the implicit explicit, they invite us to engage more consciously with the operational power of our words.

Commentary

Language as Instructions for Imagination: Dor’s Vision Realized

The reconceptualization of writing revealed by LLMs aligns remarkably with Daniel Dor’s influential theory of language as “instructions for imagination.” Dor proposed that language doesn’t simply transmit information; rather, it provides a set of cues that guide listeners or readers in constructing mental models. When someone tells us a story, they’re not merely describing scenes and events — they’re providing directions for our minds to simulate experiences.

This perspective dovetails perfectly with our QPT analysis. In Dor’s framework, words function as operational prompts that trigger specific imaginative processes. The sentence “The sun set over the mountains, painting the sky in shades of crimson and gold” isn’t merely descriptive — it’s a set of instructions directing our minds to generate a particular visual simulation.

What LLMs have done is take this process, which normally happens in human minds, and make it explicit and systematic. When an LLM processes text, it’s essentially following these “imagination instructions” in a formalized way. It extracts patterns that guide how to simulate knowledge, reasoning, and even creativity.

In QPT notation, we might express this as:

[□]⎕ⁱᵐᵃᵍⁱⁿᵉ⟨⦿⦿ₕᵤₘₐₙ⟩(Text → Mental_Simulation)

compared to:

[□]⎕ᵖʳᵒᶜᵉˢˢ⟨⦿⦿ₗₗₘ⟩(Text → Textual_Simulation)

The parallel operations across different media reveal that writing has always functioned as procedural instructions for cognitive simulation — whether those simulations occur in human minds or artificial neural networks.
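
That parallel can be restated in ordinary code. The sketch below is purely illustrative: both substrates expose the same `simulate` interface, and only the medium of the resulting simulation differs, which is what the two QPT expressions assert.

```python
from typing import Protocol

class Simulator(Protocol):
    """Anything that can 'execute' text as instructions for simulation."""
    def simulate(self, text: str) -> str: ...

class HumanReader:
    def simulate(self, text: str) -> str:
        # In a person, the instructions run as mental imagery; code can
        # only gesture at that with a description of the simulation.
        return f"mental simulation evoked by: {text!r}"

class LanguageModel:
    def __init__(self, generate):
        self._generate = generate  # placeholder text-completion callable

    def simulate(self, text: str) -> str:
        # In an LLM, the same instructions run as text continuation.
        return self._generate(text)

def run(simulator: Simulator, text: str) -> str:
    # The caller cannot tell which substrate executed the instructions.
    return simulator.simulate(text)
```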

Dor’s theory also helps explain why LLMs are so effective despite not having direct sensory experience of the world. If language is primarily instructions for imagination rather than direct representation of reality, then a system that learns the patterns of these instructions can generate convincing simulations without needing to directly experience what it simulates — just as humans can imagine scenarios we’ve never directly experienced.

The bidirectional influence loop between human and machine writing gains new significance in this light. As LLMs learn to follow and generate human “imagination instructions,” humans are simultaneously learning from LLM outputs, potentially altering how we construct our own imagination instructions. This creates a fascinating co-evolution of imagination-guiding techniques across biological and artificial systems.

This perspective suggests that the metamedium emerging from the synthesis of human writing and computational processing is not simply a new communication channel but a new system for coordinating imagination across minds — both human and artificial.

