ChatGPT May Tell You What You Want to Hear, Not What Is True, But There’s More to the Story

Jennifer Marsh
5 min read · May 29, 2023


When I was an associate, I was never great at persuasive writing. A partner would spout off what they wanted to argue, and I didn’t always think that argument was well supported by the law. I tended to be overly truthful or literal and took few liberties with what I believed to be the true legal state of things. I would often think to myself, do you want me to write what you want to argue, or do you want me to write the truth (as persuasively as possible)?

I was reminded of this thought this past weekend as I saw all the Twitter posts on ChatGPT, because ChatGPT can be a little like that partner in that it will spout off the best argument it can, whether it is grounded in the truth or not. But, there is much more to the story. And to truly understand generative artificial intelligence (AI) and how it can fit into legal research products, you need to first understand how legal research products work now, how that may color your expectations, how generative AI works, and how both can or will work in combination.

Lawyers Already Use AI, Whether They Know It or Not.

Online legal research products have existed for my entire legal career, and I graduated from law school in 1999. While as an attorney you might think of these products as one part of your process, the products themselves are built from enormous amounts of data and content, which are subject to many underlying processes and subprocesses. These processes extract, classify, summarize, and so on, so that you, as a user, can better find what you need. Many of these underlying processes started out as manual human processes, but over the years, more and more have become either automated (with code) or handled by machine learning. Thus, if you understand that machine learning is a type of “AI,” you have already been using products that leverage “AI” for some time, whether you realized it or not.
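To make that concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of one such invisible subprocess: classifying documents by practice area so they can be filtered and found later. The documents, labels, and model choice are all invented for illustration; real products train far more sophisticated models on far more data.

```python
# A minimal, hypothetical sketch of one "invisible" subprocess:
# classifying documents by practice area so users can filter results.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real product would train on large labeled corpora.
documents = [
    "The patent claims were held invalid as obvious.",
    "The court granted summary judgment on the breach of contract claim.",
    "The defendant moved to suppress evidence from the warrantless search.",
]
practice_areas = ["ip", "contracts", "criminal"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(documents, practice_areas)

# Predict a practice-area label for a new, unseen document.
print(classifier.predict(["The asserted claims read on the prior art."]))
```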

In addition to the underlying processes that prepare the data and content, the product as a whole is navigable and searchable to better fit into YOUR process and how you want to use it. And, that search engine has also gone through many changes over the years. You once probably used only Boolean or terms-and-connectors searching, but now you may use more natural language searching to find that needle in a haystack. Again, if you have marveled at how much better search engines have gotten over the years, you have been using “AI” for some time, whether you were aware of it or not.
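As a rough illustration of the difference, here is a hypothetical Python sketch contrasting the two styles. The Boolean search either matches or it doesn’t; the natural-language-style search ranks every document by similarity, so near-misses still surface. (TF-IDF here is a stand-in for the far more sophisticated ranking real products use; the documents and query are invented.)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Adverse possession requires open and notorious use of the land.",
    "The easement was extinguished by abandonment.",
]

# Boolean / terms-and-connectors: a document either matches or it doesn't.
def boolean_search(docs, required_terms):
    return [d for d in docs if all(t in d.lower() for t in required_terms)]

# Natural-language style: rank every document by similarity to the query,
# so near-misses still surface instead of disappearing entirely.
def ranked_search(docs, query):
    vectors = TfidfVectorizer().fit_transform(docs + [query])
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
    return sorted(zip(scores, docs), reverse=True)

print(boolean_search(documents, ["adverse", "possession"]))
print(ranked_search(documents, "possession and use of land"))
```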

Your Expectations Are Based on Current Legal Research Products, but ChatGPT Is NOT a Legal Research Product.

How research products work may not be perceptible to you, but it might nevertheless be coloring your expectations. When you use a legal research product, you search for a case that says X. You craft your search, either Boolean or natural language, and see the results. And, when you do so, you can tell whether the search engine found what you were looking for or not. This is the electronic version of asking an associate to go to the library and find a case that says X or write a memo explaining a legal principle to you; undoubtedly, you expect that the associate or the product will tell you what you need to know truthfully.

Something similar may also happen when you use a citator or brief analyzer product. You want to know whether a particular case (or cases) is good law, or simply that the case exists and what it says. In the past, you might have sent an associate to the library to find, verify, and Shepardize these cases for you. Now, you expect that technology will do that far more efficiently (but also correctly).

Thus, your current experience with legal research products is that you can use them to find what you need to know, both efficiently and correctly. You may believe that some products do a better job than others, but your expectation is probably that the product will provide you with the truthful content, data, and information you seek. This is all good, except you cannot carry those expectations over to ChatGPT.

Generative AI Works Differently and Should Not Be Subject to the Same Expectations.

Generative AI, and the chatbot you may have played around with known as ChatGPT, do not work like the legal research products to which you have now become accustomed. These tools do not retrieve the “right” answer for you; they write what they “think” you want them to say. This is much closer to a partner pressuring an associate to write a certain argument, whether the law supports it or not, except that ChatGPT will not have a crisis of conscience about it.

Generative AI is literal and predictive. It generates the words most statistically likely to follow your prompt, whether or not those words are grounded in the truth. And ChatGPT is the most basic form of this tool. Thus, if you ask it to summarize case A v. B, and it does not know that case, it may summarize it anyway based on what it predicts you want to hear (or write).
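You can see this behavior in miniature with a small open model. The sketch below (Python, using the Hugging Face transformers library and GPT-2, a far weaker model than the one behind ChatGPT) asks the model to continue an invented citation. “Smith v. Jones” is a made-up case name chosen for illustration; the model has no way to look it up, so it simply continues with statistically plausible words.

```python
# Illustration: a small open model continuing an invented citation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# "Smith v. Jones" is a made-up case. The model cannot look anything up;
# it simply predicts plausible next words, one token at a time.
prompt = "In Smith v. Jones, the court held that"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
# The continuation will read like a real holding, but it is prediction,
# not retrieval: the "holding" is confabulated.
```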

Generative AI Has a Place in Legal Research, But Maybe Not How You Think.

Now, while I would strongly caution against using ChatGPT directly for legal research, the same cannot be said for legal research products that leverage generative AI as one of their many processes and subprocesses, because those products can provide context and rules where ChatGPT has none. The generative AI process will be just one piece of a complicated puzzle, not the whole puzzle in and of itself.

So, think about it this way: while you may not be able to trust ChatGPT to give you a summary of case A v. B, you hopefully can trust a product using generative AI to summarize a case to which it has access. I have been playing around with the OpenAI API over the last couple of months, and case summarization is one of the tasks I experimented with. In doing so, I never asked the API to summarize a case without first passing the case to the API and giving it rules (in my prompt) about how I wanted it summarized. I did this because generative AI is good at summarizing if it knows what to summarize and how, but it can’t necessarily retrieve the case on its own first.
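Here is a simplified sketch of that pattern in Python, using the OpenAI library’s current client interface. The file name, model, and rules below are placeholders, not my actual prompts; the point is that the case text is provided to the model, together with explicit rules, rather than asking the model to find the case itself.

```python
# Hypothetical sketch: summarize a case the model is *given*, under rules.
# Uses the OpenAI Python library's 1.x client; names are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_text = open("a_v_b_opinion.txt").read()  # hypothetical opinion file

rules = (
    "Summarize the following judicial opinion in three sentences. "
    "State only what appears in the text. Identify the parties, the "
    "issue, and the holding. If something is not in the text, say so."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": rules},
        {"role": "user", "content": case_text},
    ],
)
print(response.choices[0].message.content)
```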

Don’t Trust Blindly, But Be Cautiously Optimistic.

Attorneys and legal professionals want legal research products to help them with what they need to know (or find) and, in time, they will also rely on these products for what they want to say. But it is imperative that you understand this distinction. It is not about distrusting “AI” (a broad term encompassing many things, including machine learning, natural language processing, AND generative AI) or thinking that ChatGPT is the only way to interact with generative AI.

Instead, when you evaluate these products, either now or in the future, ask: How is generative AI used within this product? What kind of data was it trained on? Is the AI given data for context (and how)? What rules are provided to keep it within bounds? If we think about generative AI in this context, we will learn to leverage the best of what this technology has to offer, while also understanding how the risks associated with it have been mitigated. And that is the end of the story, for now…

P.S. Always check your cases before submitting a filing to the court, whether the case was provided to you by an associate or an online tool!


Jennifer Marsh

Legal Tech product, analytics, and operations leader with 16+ years of experience in AI, data, and machine learning. Former practicing IP attorney.