Every few weeks, there is a new story about a lawyer facing trouble after using ChatGPT for legal research. Judges have called out these professionals for submitting documents with what they describe as “bogus AI-generated research.” The pattern is familiar: a lawyer under pressure turns to a large language model (LLM) like ChatGPT, the model invents fake case law, and no one notices until a judge or opposing counsel points it out. In some cases, such as a 2023 aviation lawsuit, lawyers have been fined for submitting filings that contained fabricated citations.
So why does this keep happening?
Much of it comes down to time. Many legal professionals are overwhelmed with caseloads, and AI offers a quick way to manage the load. Legal research platforms like Westlaw and LexisNexis now have AI features built in. For busy lawyers, these tools can feel like a helpful research assistant. Most lawyers aren’t using ChatGPT to write entire documents, but many turn to it for help with research and drafting summaries. The issue is that not all lawyers fully understand how the technology works. One attorney fined in 2023 admitted he thought ChatGPT was just a better version of Google; he didn’t realise it could make up cases altogether.
Cases of AI mistakes in court are growing
In a recent case, lawyers representing journalist Tim Burke used ChatGPT’s “deep research” tool to help write a legal filing. The document contained several false quotes and citations. Florida judge Kathryn Kimball Mizelle found nine fake references and struck the filing from the court record. She allowed the lawyers to resubmit, and one of them, Mark Rasch, accepted full responsibility for the errors, explaining that he had used ChatGPT alongside Westlaw’s AI tools.
This isn’t an isolated incident. Lawyers representing AI firm Anthropic admitted to using the company’s own AI system, Claude, in a copyright case; the filing contained an incorrect citation that listed the wrong authors. In another case, an expert defending a Minnesota law on deepfakes used ChatGPT to help with citations, which introduced two errors and a misattributed author.
These mistakes matter. In California, a judge nearly ruled in favour of an argument built on fake case law, only discovering later that the citations were entirely made up. “I read their brief, was persuaded by the authorities they cited, and looked them up—only to find they didn’t exist,” wrote Judge Michael Wilner.
Legal experts like Andrew Perlman, dean of Suffolk University Law School, say these cases are serious but not the whole picture. Many lawyers use AI tools without incident. Perlman believes AI can help lawyers sort through large amounts of information, review documents, and brainstorm legal arguments. But he warns that AI should not replace legal judgment: it is meant to support lawyers, not do their jobs for them.
AI isn’t going away, so lawyers need to use it wisely
A 2024 survey by Thomson Reuters found that 63% of lawyers had used AI tools and 12% used them regularly. Many said they rely on AI to summarise case law and find sample documents. Half of those surveyed said that exploring ways to use AI in their work was a top priority. One respondent noted that a lawyer’s value lies in being a “trusted advisor,” not just someone who drafts documents.
Still, the risks are clear. Lawyers must always double-check the information AI gives them. Perlman points out that even before tools like ChatGPT came along, time-pressed lawyers sometimes cited cases that didn’t actually support their arguments. At least in those instances the cases were real, even if irrelevant.
One of the bigger problems now is overtrust. ChatGPT’s output looks polished and confident, which can mislead users into believing it is correct. Election lawyer and Arizona state representative Alexander Kolodin said he treats ChatGPT like a junior associate: he has used it to draft laws and amendments, but he always checks the citations carefully. “You wouldn’t send out an associate’s work without checking it,” he said. “Same goes for AI.”
Kolodin uses both ChatGPT’s advanced tools and the AI features in LexisNexis. In his experience, LexisNexis is more likely to produce errors than ChatGPT, although both have improved.
The issue has become important enough that in 2024 the American Bar Association (ABA) issued its first guidance on lawyers’ use of AI. It stressed that lawyers must maintain technological competence and understand both the benefits and the risks of generative AI. The guidance also urges lawyers to think carefully before sharing sensitive case information with AI systems and to consider whether to tell clients when AI is used.
Perlman believes AI will continue to change the legal world. “At some point, we’ll worry less about lawyers using AI and more about those who don’t,” he said. But others, like Judge Wilner, remain sceptical. He wrote, “No reasonably competent attorney should outsource research and writing to this technology without checking it first.”