Using a Table Trick to Fix Inconsistent Terms in ChatGPT 4 (and more)

Peter Dresslar
Hawai‘i Center for AI
6 min read · Mar 17, 2024


When using AI to draft lengthy documents, inconsistent terminology can often creep in. In this post, we’ll explore a simple but effective trick for getting clearer AI feedback on document consistency: asking for a document report as a table. We’ll see how this technique helped wrangle a 40-page contract and how it can be applied to various AI writing assistants.

Aloha! I thought I’d share a particularly successful set of ChatGPT 4 prompts that worked for me to clean up some inconsistent terminology I ran into in a contract I was recently drafting. I don’t know how often I’ll do this, but I could imagine some of our recent and upcoming workshop participants finding the approach useful…

Breaking out the prose of a document into chunks you want to generate is not a novel idea, but we frequently discuss the approach during Hawai‘i Center for AI’s workshops on addressing business needs with AI. We generally work in ChatGPT during these workshops, since ChatGPT is arguably the best and most stable environment, as of this writing in 2024, for novice and moderately experienced business users who are focused on general work tasks.

The idea with breaking documents into subsections is straightforward: ChatGPT tends to be most successful generating content in 3–5 paragraph passages, which usually translates to a subsection in a standard work artifact. Knowing this, a very effective strategy with ChatGPT is to start with an outline and work subsection to subsection on completing a draft.

(ChatGPT, being famously verbose by default, occasionally pushes the 5 paragraph boundary. I suspect a better business guideline for a subsection would be 2–4 paragraphs on average. MIT says 1 or more paragraphs, in fact. I could go on, but I’m being a bit Chatty myself.)
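If you prefer to drive this outline-and-subsection workflow from a script rather than the chat window, the loop might look something like the sketch below. This is a minimal illustration only: the outline text, system prompt, and model name are placeholder assumptions, not anything from the actual contract project.

```python
# Illustrative sketch: drafting a document one subsection at a time,
# keeping the running conversation so the model can see earlier sections.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

outline = [
    "1. Purpose and Scope",
    "2. Contracted Period and Milestones",
    "3. Deliverables",
]

messages = [
    {"role": "system",
     "content": "You are drafting a services contract. Write 2-4 paragraphs per subsection."}
]

draft_sections = []
for heading in outline:
    messages.append(
        {"role": "user", "content": f"Draft the subsection '{heading}' now."}
    )
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    text = response.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    draft_sections.append(f"{heading}\n\n{text}")

print("\n\n".join(draft_sections))
```

Keeping the assistant’s earlier replies in the running message list is what lets later subsections stay roughly consistent with earlier ones; it is also why terminology starts to drift the moment you begin editing outside the conversation.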

Regardless of your precise approach to “chunking” out your work, it’s typical to run into problems with key terminology. Suddenly, the thing that ChatGPT named one way is something that you named a different way, and both terms drift through the document like competing schools of fish.

This is largely because you are almost certain to rewrite sections of prose or rename key terms to match what you are most comfortable communicating. Meanwhile, ChatGPT moves on to the next few sections, possibly unaware of your renaming. This kind of drift in naming conventions within the piece is especially hard to spot as the author of the document; conversely, I think it often jumps out to the reader as unprofessional.

A forty-page contract that was using several different terms to communicate “start” and “finish”

Over the course of a few days, I had used ChatGPT to generate section after section of a large and important contract for our organization. While ChatGPT was extremely helpful in outlining, drafting, and comparing the elements of this contract, it was far from perfect; each subsection needed to be drafted, rewritten, verified, and drafted again, all while communicating with several stakeholders.

By the time we approached completion of our first draft, I had a nagging feeling we had a problem. Our contract lifecycle was somewhat complicated and had evolved from what our vendor had communicated, to the point where we were addressing the same milestones in several different ways throughout the body of the document.

Of course, you can feed a forty-page document into ChatGPT (the paid version) and ask for an assessment. I tried that:

Four, could you please take a look through this document and let me know if there are any problems?

This will generally return some very high-level discussion of the overall contents of the document — perhaps your contract is missing an Indemnity clause, or your story is missing an ending. The prompt didn’t come close to identifying our dates issue, though.

Four, are we using different terms in the contract for when we expect the project to start and stop? We want to be consistent.

Now we are narrowing in on the problem. ChatGPT is very likely to communicate at this point a “make sure” response. (I don’t know if the “make sure” response is a documented AI phenomenon, but it should be!) Something like: “Yes, I see the Contract Start Date and Implementation Commencement being used interchangeably. Make sure to review your document to harmonize your terms and clear up any confusion for the reader.”

When you really need to get something done, use a table

Chat, this is the state of the entire contract (I am uploading it with this prompt). I am concerned that we are addressing the contract time periods inconsistently. We should always use "Contracted Period" to address the whole duration. Can you identify all the instances where time and durations are used in the draft IN A TABLE, indicating what needs to change (if anything) for that specific instance to be consistent?

Jackpot!

A table with the location of the reference, the actual reference, and a suggested fix. So, so helpful.

By asking for a table, and then by specifying what belongs in the table, we not only get a more usable organizational tool, but we force ChatGPT to be concrete about what precisely needs to be updated. By comparison, asking for the same information with the exact same prompt minus the words “in a table” returns a less-useful bulleted list, including a too-vague “General Observation: The contract and SOW make appropriate references to the defined time periods except in a few cases…”
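For readers who work through the API rather than the chat interface, the same table demand can be sent programmatically. The sketch below is an assumption-laden illustration: the file name and model name are placeholders, and in ChatGPT itself the contract was simply uploaded alongside the prompt.

```python
# Minimal sketch of the "ask for a table" prompt sent via the OpenAI API.
# Assumes the contract draft is available as a local text file.
from openai import OpenAI

client = OpenAI()

contract_text = open("contract_draft.txt", encoding="utf-8").read()

prompt = (
    "I am concerned that we are addressing the contract time periods "
    "inconsistently. We should always use \"Contracted Period\" to address "
    "the whole duration. Identify all the instances where time and durations "
    "are used in the draft IN A TABLE, with columns for the location of the "
    "reference, the exact wording, and what needs to change (if anything) "
    "for that instance to be consistent.\n\n--- CONTRACT ---\n" + contract_text
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # typically comes back as a Markdown table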

Once we have our notes from the table taken care of, it’s straightforward to ask ChatGPT again to take a look at our lengthy document:

All right, in this version I adjusted a bunch of language and fixed a few timing terms. Do you think I have handled the terminology around dates and durations with consistency?

Once again, a table in response — even without asking for one. ChatGPT is on the case now.

A table from ChatGPT confirming we’ve reached our goal.

This is an extremely useful tool from ChatGPT! We’ve accomplished what we set out to — fixing our contract language — and generated a far more professional document.

Zooming out to a wider view of working with ChatGPT, asking for tables in output can often be a successful approach, not only for generating content for external consumption, but also for getting organized internally. Even if you’re just trying to organize the bit of chaos generated by working as a human-AI team.

Enterprising readers (the kind that might have made it this far into the piece) might wonder whether this is a ChatGPT-specific approach, or whether other AI-based chat tools respond well to the same technique.

Without doing too broad a survey, I can report that at least one other tool — Claude Opus — responds well to a table demand.

Claude, this is the state of the entire contract (I am uploading it with this prompt). I am concerned that we are addressing the contract time periods inconsistently. We should always use "Contracted Period" to address the whole duration. Can you identify all the instances where time and durations are used in the draft, IN A TABLE, indicating what needs to change (if anything) for that specific instance to be consistent?

Claude Opus not only gives us a table, but looks great doing it.

However, in the case of Claude (at least for Opus; I did not check with the free Sonnet version since uploads are so limited), we don’t *need* to ask for a table, since Claude is somewhat more terse and to the point than ChatGPT. Here is the output from the same prompt without the demand for a table:

Claude Opus getting to work with precision and grace.

I think the table still has its merits, but in the case of Claude, either approach seems to work.
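If you wanted to script the Claude side of this comparison as well, the same prompt translates directly to Anthropic’s API. As before, this is only a hedged sketch: the file name is a placeholder, and the model identifier is the Claude 3 Opus name current at the time of writing.

```python
# Sketch of sending the identical table demand to Claude 3 Opus via Anthropic's API.
# Assumes the `anthropic` Python package and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

contract_text = open("contract_draft.txt", encoding="utf-8").read()

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=2048,  # bounds the length of the reply
    messages=[{
        "role": "user",
        "content": (
            "We should always use \"Contracted Period\" to address the whole "
            "duration. Identify all the instances where time and durations are "
            "used in the draft, IN A TABLE, indicating what needs to change "
            "(if anything) for each instance to be consistent.\n\n" + contract_text
        ),
    }],
)
print(message.content[0].text)
```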

In writing this article, I can imagine how a chess instructor might feel in authoring a piece about a particularly useful pawn sacrifice. Hopefully we’ve leveled up your AI “game” just a bit! Of course, if you have some similar experiences with ChatGPT or some of the other tools we didn’t try today, we’d be happy to hear from you in the comments.
