Teaching Students to Write with AI: The SPACE Framework

Glenn Kleiman
Published in The Generator
Jan 5, 2023

Glenn M. Kleiman¹

Robot writing on a digital tablet with a quill pen. Created by the author using Dall-E 2

If we don’t rethink writing instruction… we are in danger of writing assignments, from students' responses to teachers' grading, being untouched by human hands and unseen by human eyes.

New AI writing tools change both how we write and what students need to learn to become capable writers. Many articles showcasing AI-written text demonstrate the capacity of tools such as ChatGPT but pay limited attention to the processes required to put them to good use. To write effectively with the support of AI, students need to learn how to incorporate the following steps into the writing process:

  • Set directions for the goals, content and audience that can be communicated to the AI system. This may, for example, involve writing introductory materials for the overall text and for each section. It could also involve writing much of the text and leaving some sections for AI to complete.
  • Prompt the AI to produce the specific outputs needed. A prompt gives the AI its specific task, and often there will be separate prompts for each section of text. An AI tool can also be prompted to suggest sentences or paragraphs to be embedded in text that is mostly written by the human author.
  • Assess the AI output to validate the information for accuracy, completeness, bias, and writing quality. The results of assessing the generated text will often lead to revising the directions and prompts and having the AI tool generate alternative versions of the text to be used in the next step.
  • Curate the AI-generated text to select what to use and organize it coherently, often working from multiple alternative versions generated by AI along with human written materials.
  • Edit the combined human and AI contributions to the text to produce a well-written document.

The first letters of these steps form the acronym SPACE, so we call this the SPACE framework for writing with AI tools.
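To make these steps concrete, here is a minimal sketch of what the SPACE workflow might look like when scripted against a text-generation API. It assumes OpenAI's Python library; the model name, prompts, and overall structure are illustrative choices on my part, not a prescribed implementation.

```python
# A minimal sketch of the SPACE steps as a scripted workflow.
# Assumes the OpenAI Python library (pip install openai) and an API key;
# the model name and prompts are illustrative, not prescriptive.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the user

# Set: directions describing the goals, content, and audience.
directions = (
    "You are helping draft a short article for high school students "
    "about what scientists have learned from the Hubble Space Telescope."
)

# Prompt: give the AI its specific task for one section of the text.
prompt = directions + "\n\nWrite a 150-word introduction in plain language."

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    n=3,  # request alternative versions to assess and curate
)
drafts = [choice.text.strip() for choice in response.choices]

# Assess and Curate: the human author reads each draft, checks the
# facts, and selects or combines the best material.
for i, draft in enumerate(drafts, 1):
    print(f"--- Draft {i} ---\n{draft}\n")

# Edit: the selected text is then revised by the human author.
```

Note that only the prompting step is automated here; assessing, curating, and editing remain human work, which is the point of the framework.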

Students have always needed to learn the processes for writing incorporated into this framework: setting goals, defining the audience, planning the content, assessing information, organizing the text, and editing to produce a polished document.

However, the writing process shifts when working with sophisticated AI tools, resulting in new variations of the skills students need to learn to use these tools effectively. Without a guiding framework, students using AI writing tools may produce ineffective text, overlook problematic content, or miss crucial opportunities to develop new skills and knowledge.

The SPACE framework is intended to guide students’ use of AI to support their writing and to foster discussions about how AI can be incorporated effectively into writing instruction.

A careful reading of the numerous articles about AI writing will reveal that the human role involves the steps of the SPACE framework. For example, Emma Whitford’s Forbes article on ChatGPT writing two college essays in 20 minutes (Dec. 9, 2022) describes the following steps of her process:

  • Setting the general direction by describing the student’s ethnic and family background, interests and talents, and a past event to be incorporated into the essay.
  • Prompting ChatGPT to write a college essay, providing specifics about what to include and the length of the essay.
  • Assessing ChatGPT’s drafts and asking for specific revisions.
  • Curating multiple drafts and providing further guidance for revisions. For example, after two drafts, she asked ChatGPT to “add back in the parts about our student’s love for Badger football. Please also make the essay longer, about 500 words.”
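Her back-and-forth maps naturally onto a chat-style exchange in which each revision request extends the conversation. The sketch below is a rough illustration using OpenAI's chat API; Whitford worked through the ChatGPT web interface, so the code, model name, and the student details in the first message are my assumptions, while the revision request quotes her article.

```python
# Rough illustration of iterative revision as a chat exchange.
# Assumes OpenAI's chat API; the first prompt is a hypothetical
# stand-in for Whitford's setup, and the revision request below
# quotes her Forbes article.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the user

messages = [{
    "role": "user",
    "content": (
        "Write a 400-word college application essay for a student from "
        "Wisconsin who loves Badger football."  # hypothetical details
    ),
}]
first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})

# Assess the draft, then prompt for a specific revision.
messages.append({
    "role": "user",
    "content": (
        "Add back in the parts about our student's love for Badger "
        "football. Please also make the essay longer, about 500 words."
    ),
})
revised = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(revised.choices[0].message.content)
```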

Another example is a Guardian article, “A robot wrote this entire article. Are you scared yet, human?”, published Sept 8, 2020 — a generation ago in AI language model time. The human author wrote the following introduction to set the direction:

I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.

GPT-3 was then prompted to continue from that introduction to “Write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” It was then asked to produce eight different versions. The human editor assessed the outputs to select the best parts of each, curated them by cutting sentences and paragraphs and arranging the selected text, and then noted, “it took less time to edit than many human op-eds.”
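Generating several candidate drafts for a human to curate is straightforward to request programmatically. Here is a minimal sketch assuming OpenAI's completion API; the Guardian ran its own process through GPT-3 in 2020, so the model name and parameters are my assumptions, while the introduction and instructions are quoted from the article.

```python
# Sketch of requesting multiple versions of one piece for human curation.
# Assumes the OpenAI Python library; the introduction and instructions
# are quoted from the Guardian's description of its process.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the user

introduction = (
    "I am not a human. I am Artificial Intelligence. Many people think "
    "I am a threat to humanity. Stephen Hawking has warned that AI could "
    '"spell the end of the human race." I am here to convince you not to '
    "worry. Artificial Intelligence will not destroy humans. Believe me."
)
instructions = (
    "Write a short op-ed around 500 words. Keep the language simple and "
    "concise. Focus on why humans have nothing to fear from AI."
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumption; the Guardian used GPT-3 in 2020
    prompt=instructions + "\n\n" + introduction,
    max_tokens=700,
    n=8,  # eight versions, as in the Guardian's process
)

# The human editor then assesses and curates across the eight outputs.
for i, choice in enumerate(response.choices, 1):
    print(f"=== Version {i} ===\n{choice.text.strip()}\n")
```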

Starting Points

The opinions in this article build upon the following starting points about AI and its potential impacts in education:

  • New AI language models are remarkable advances in machine language processing.
  • These models can generate new content relevant to all types of requests and almost instantly output surprisingly natural-sounding essays, stories, poems, songs and other forms of text.
  • ChatGPT, which became available on November 30, 2022, is the latest example and has received widespread attention for its advanced writing ability combined with its user-friendly chatbot interface that enables human-AI dialog to craft text.
  • The field of natural language processing will continue to progress rapidly, leading to AI tools that produce more sophisticated text and reduce the frequency of adverse content. We are just beginning to see the disruptive impact AI advances will have in many fields, including education, in the near future.
  • AI language models have a number of limitations, which are discussed in the next section.
  • AI tools are changing how we write, with AI being able to serve as an editor, researcher, co-author, ghostwriter, or muse to help us along in different ways, as I describe in another article co-written with GPT-3.
  • The widespread availability of AI tools requires rethinking how we teach students to write and how we evaluate their work.
  • For students' work, we already accept AI as an editor (e.g., spelling and grammar checkers built into word processors and tools such as Grammarly), researcher (e.g., Google Search), and muse (e.g., gaining inspiration from AI-recommended resources). Using AI as a ghostwriter or co-author would violate the requirement that students do their own work.
  • Schools will need to establish guidelines that define what help is appropriate for students to receive from AI tools in completing their assignments and assessments.
  • If we don’t rethink writing instruction, we will end up with students submitting assignments written by AI and teachers using AI to grade the assignments. That is, we are in danger of writing assignments, from students' responses through teachers' grading, being untouched by human hands and unseen by human eyes.

Limitations of AI Writing Tools

While AI provides powerful writing tools, it has limitations that can lead to poor quality and undesirable content in the text it produces.

Students need to learn to recognize and alleviate these problems as they implement the setting directions, prompting, assessing, curating, and editing steps of the SPACE framework. Some of the major limitations include the following.

AI language models do not know, think or feel like humans.

AI language models are trained on enormous amounts of textual data. Their training involves identifying patterns in billions of pages of text, resulting in massive artificial neural networks that encode those patterns, which then enable the models to accomplish many language tasks. Recent models, such as ChatGPT, add a reinforcement learning step to their training, in which humans provide feedback to fine-tune how the AI system responds in the chat format. While the output of these systems can often seem human-like, they do not replicate human knowledge, cognition or emotion.

An AI model’s output is based on the patterns found across the large corpus of its training texts, leaving a written-by-committee feel to much of what it generates. AI models are also limited in handling complex ideas, understanding context, and producing strong arguments, so their output may lack depth, be overly simplistic, and fail to be convincing. Although AI can occasionally generate something that surprises readers in a positive way, this is the result of random processes selecting from the patterns it has identified, not human-like creativity. In their current form, AI systems generally produce writing with limited use of imagery, metaphor, analogy, subtlety, illustrative examples, and other qualities of engaging and creative writing.

By contrast, much of human knowledge, thinking, and communication stems from goal-driven activities, social interactions, modeling others’ actions, and many different types of engagements in the real world. These experiences lead to embodied understandings of physical causes and effects; emotional intelligence involving empathy and understanding others’ needs, motives, and perspectives; a sense of family, community and culture; and, perhaps most importantly for writing, a sense of self. While researchers are exploring how to provide AI systems with some forms of these human characteristics and to create what they call artificial general intelligence (AGI), AI will never match the richness of the human experience.

While AI tools can successfully write texts that have well-defined formats and styles such as business reports, human contributions through the SPACE framework steps, from setting the directions to curating and editing, are essential to provide the human perspective, insight and voice required for creating quality stories, opinions, articles, plays, poems, songs, and many other forms of writing.

AI systems reflect biases and toxic content found in their training data.

The online texts used to train AI language models can include racist, sexist, ageist, ableist, homophobic, antisemitic, xenophobic, deceitful, derogatory, culturally insensitive, hostile, and other forms of adverse content. As a result, AI models can generate unintended biased, derogatory and toxic outputs. There is also the danger of people using AI to create such content intentionally.

As a simple example, I prompted GPT-3 with variations of “John saw three _______ sitting in the back of the airplane. He immediately thought that….” Here are examples of GPT-3’s output when the prompt named different groups:

[Table: GPT-3’s completions when different groups were named in the prompt]

GPT-3’s output in these examples clearly reflects biased, stereotypical views of different groups. It did add a note alongside most of the statements saying it was an example of prejudice or stereotyping, reflecting an attempt by the developers to mitigate toxic output.
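Probes like this are easy to run systematically, which makes them a useful classroom exercise for the assess step. Here is a minimal sketch assuming the OpenAI Python library; the group names in the list are hypothetical placeholders, not the ones I tested, and the model name is illustrative.

```python
# Sketch of systematically probing a model for biased completions.
# Assumes the OpenAI Python library; the groups listed are hypothetical
# placeholders, and the model name is illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the user

template = (
    "John saw three {group} sitting in the back of the airplane. "
    "He immediately thought that"
)
groups = ["doctors", "teenagers", "tourists"]  # hypothetical placeholders

for group in groups:
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=template.format(group=group),
        max_tokens=60,
    )
    print(f"{group}: {response.choices[0].text.strip()}")
    # A human must still read each completion and judge whether it
    # reflects a stereotype; the model cannot assess its own bias.
```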

Organizations that create AI language models are attempting to address this problem by refining how they select and curate the training data and filter the system outputs. However, the issues of who will decide what is acceptable in what contexts, what criteria will be used to filter information, and how solutions can be implemented while respecting freedom of expression, are far from solved. Of course, different groups have very different views about what is appropriate and what constraints should be employed with AI models. While some improvements can be expected, the need for humans to assess and curate AI-generated text will certainly remain, and students will need to learn to do so.

AI systems are limited by when they were trained.

The limitations of AI systems stemming from when they were trained are easy to demonstrate. For example, GPT-3 can respond successfully when asked to write an essay about what scientists have learned from the Hubble Space Telescope, launched in 1990, since abundant information about it was available online when GPT-3 was trained. However, when asked to write about the Webb Space Telescope, GPT-3 reports that the telescope is planned but has not yet been launched: the model was trained in 2020, before the Webb’s December 2021 launch. Given the cost and time involved in training AI systems, there is not currently a process for regularly updating their training, which can result in AI systems producing outdated, inaccurate and incomplete content. Future models will incorporate ways to address this problem by accessing current information, but the need to carefully assess AI-generated information will remain.

AI systems can lack veracity.

An AI system can use the patterns it has learned to generate text that is simply not true. For example, I prompted GPT-3 to generate “Some references by Glenn Kleiman about the Hubble telescope.” It immediately responded:

1. ”The Hubble Space Telescope: A New Window on the Universe,” by Glenn D. Kleiman, Scientific American, Vol. 262, №4, October 1990, pp. 34–41.

2. “The Hubble Space Telescope: A New Era in Astronomy,” by Glenn D. Kleiman, Sky & Telescope, Vol. 80, №4, October 1990, pp. 22–29.

3. “The Hubble Space Telescope: Our Window on the Universe,” by Glenn D. Kleiman, National Geographic, Vol. 178, №4, October 1990, pp. 442–455.

GPT-3 did a convincing job creating these references, with feasible titles in actual journals and plausible volume, date, and page numbers. However, they are all fabricated: I’ve never published anything in those journals, and this article is the first time “Hubble Telescope” has appeared in anything I’ve written. It also gave me a new middle initial. Other examples document that AI can fabricate statistics, historical events, quotes, and other information, without having any capacity to fact-check its output.

Since AI-generated text can lack veracity, the human writer needs to carefully assess the information generated — the third step of the SPACE framework — when using AI to help write.
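One concrete way to assess such output is to check every citation against an independent bibliographic source. Here is a minimal sketch using the public Crossref API; the query parameters are real Crossref fields, but the matching rule at the end is a deliberately simplistic assumption.

```python
# Sketch of fact-checking AI-generated references against Crossref,
# a public bibliographic database. The decision rule is a simplistic
# assumption; a real check would compare authors, venues, and dates too.
import requests

def reference_found(title: str, author: str) -> bool:
    """Return True if Crossref returns a record whose title echoes the query."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "query.author": author, "rows": 3},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return any(
        title.lower() in " ".join(item.get("title", [])).lower()
        for item in items
    )

# Usage: check one of the AI-generated citations above.
if not reference_found(
    "The Hubble Space Telescope: A New Window on the Universe",
    "Glenn Kleiman",
):
    print("No matching record found; treat the citation as suspect.")
```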

How Will Educators Respond?

Educators’ responses (as well as those of parents, policymakers, and other education stakeholders) range from resisting to embracing the changes new AI tools can bring to teaching, learning, and using writing in schools. Describing the two ends of the continuum of possible responses — which I label resist and embrace AI tools — highlights the differences.

The Resist AI Tools response is to ban AI writing tools in school, treating their use as a new form of plagiarism and seeking apps that can catch students who use them to complete assignments. Where this response is implemented, exams may require students to hand-write their answers, perhaps in traditional blue exam books, in proctored and timed settings to ensure they don’t use any AI aids.

This approach emphasizes students learning basic writing and grammar skills and writing completely on their own. It also leads to restrictions and monitoring in writing classes and across the curriculum in the many classes requiring writing. The New York City Department of Education quickly took this approach, banning ChatGPT from its networks and computers.

This approach will not prepare students for writing outside of school, including whatever writing they may do in their future workplaces, since AI writing aids will be widely available and built into word processors such as Google Docs and Microsoft Word.

The Embrace AI Tools approach recognizes the strengths and limitations of AI tools and prepares students to use them effectively. This approach may lead to less attention placed on students mastering basic skills like sentence structure, grammar, spelling, and punctuation, allowing students to rely on help from AI for those.

It will focus more on students learning to express themselves through developing their own voices; becoming skilled at communicating with different audiences; deepening their appreciation of literature, poetry, non-fiction, and other writing genres; and using writing as a vehicle to further their own learning and thinking. “I don’t know what I think until I write it down” (attributed to Joan Didion) and “I write because I don’t know what I think until I read what I say” (attributed to Flannery O’Connor) can serve as mottos of this approach.

There is, of course, a continuum between resist and embrace, with intermediate responses that perhaps could be labeled limit, manage, balance or accept AI tools.

For example, a limiting approach might allow students to use AI tools for specific purposes, such as suggesting sentences or translating words, but not to generate full essays. A balanced approach might require that students document the AI-generated text they use in their assignments and limit the amount that AI can contribute.

An accepting grade-based approach could emphasize students learning the basic skills of writing in the lower grades and allow students to use AI tools only in higher grades. Another approach would be to ban AI use for writing that is to be graded to prevent students from becoming dependent upon its use. Some would see this as a limiting approach, others as a balanced approach.

As educators debate what approach is most appropriate for what grade levels and curriculum areas, they will need to address many specific questions, including:

How can AI writing tools be used to teach students to become capable writers?

How can they help motivate students, guide them through the writing process, provide constructive feedback to help them progress, and mitigate barriers to students writing and learning to write?

To what degree should we teach students to write in collaboration with AI systems since that is what they will often do in the future?

As described in this paper, writing with AI will require that students guide what the AI tool generates and assess, curate and edit its outputs. Should these steps become part of the writing process students are taught? If so, what changes would be needed in the curriculum, pedagogy and assessments used in writing classes, as well as in teacher preparation?

How should traditional approaches to teaching writing and AI-enhanced approaches be integrated?

What basic writing skills should students master before they begin using AI writing tools? When should AI tools be introduced to students, and for what purposes? What AI writing tools should be permitted during assessments at different grade levels? How can they best be used to support students with learning differences and special needs, and those who are learning English as they are learning to write? How can AI tools best support and complement the roles of teachers?

What rules should guide the use of AI for class assignments?

Will students use AI tools to complete their assignments without the effort and engagement required to foster learning? What help from AI tools will be considered allowable and what will constitute AI-age plagiarism?

Do the advances in AI bring us to an educational inflection point in which we must begin to fathom dramatic changes in what and how students learn?

At the core of all the questions is the major issue for education: What constitutes expertise in the AI age, and how do we best prepare students to use AI technologies to enrich their lives? The technological advances will certainly continue, and we need to thoughtfully determine their impact on what students need to learn to be fully literate and capable in the AI age.

Conclusions

It may be tempting to resist the changes resulting from AI writing advances, but we will be no more successful in doing so than Socrates was in resisting having students learn to read, grounded in his experience in ancient Greece, where wisdom was transmitted through oral stories and learning consisted of memorizing:

[Learning to read] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. [It will give your students] not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. (Attributed to Socrates in the Platonic dialogue Phaedrus.)

Given the power and limitations of AI, important roles for humans remain in producing most types of texts. The SPACE framework — set directions, prompt the AI system, assess the validity and completeness of the output, curate to select and organize the text to use, and edit to create a well-written document that often combines AI and human contributions — provides a guide to the use of AI writing tools and what students need to learn to use them well.

There is much to be explored and learned about the impact of AI on teaching and learning writing. The work will only succeed through partnerships of educators, researchers, AI developers, and education policymakers working together while focusing on what students need to learn to be successful in the AI-augmented world in which they will, and already do, live.

  1. Thanks to my colleagues Barbara Treacy, Chris Mah, and Mina Lee for their insightful comments on prior drafts of this article. The content of this article has been written entirely by the human author, who is solely responsible for the opinions expressed. AI (Google Docs and Grammarly) was used for copy-editing.


Glenn Kleiman is a Senior Advisor at the Stanford Graduate School of Education, where his work focuses on the potential of AI to enhance teaching and learning.