The Fate of the Lawyer Who Cited Fake Cases from ChatGPT
It says a lot about how society (and the law) will treat AI’s mistakes
When attorney Thad Guyer filed a legal brief in an employment law case, it seemed like the kind of thing a lawyer would do thousands of times over their career.
Only this time was different. Guyer was about to land himself in hot water and risk his entire career.
Guyer had used ChatGPT to help him and his client prepare the brief. Not understanding how these models work, he gave it only a cursory read-through and never thoroughly checked its citations.
The brief cited a variety of legal cases, and many of them turned out to be misquoted or totally irrelevant: blatant hallucinations dreamed up by the LLM.
The response was immediate and serious. Guyer faced sanctions from the court, and even the possibility of disbarment, which would end his career as a lawyer.
Now, we’ve finally learned Guyer’s fate. A federal judge, Thomas Cullen, reviewed the case and ruled on whether sanctions should proceed.