
Lawyers turn to AI tools to save time, but at a cost

Under time pressure and with technology increasingly woven into legal work, lawyers continue to use AI tools like ChatGPT despite the risk of fake case law.

Every few weeks, there is a new story about a lawyer facing trouble after using ChatGPT for legal research. Judges have called out these professionals for submitting documents with what they describe as “bogus AI-generated research.” The pattern is familiar: a lawyer under pressure turns to a large language model (LLM) like ChatGPT, it invents fake case law, and no one notices until a judge or an opposing lawyer points it out. In some cases, such as a 2023 aviation lawsuit, the lawyers involved were fined for submitting filings built on these fabricated citations.

So why does this keep happening?

Much of it comes down to time. Many legal professionals are overwhelmed with cases, and AI is a quick way to manage the load. Legal tools like Westlaw and LexisNexis now have AI features built in. For busy lawyers, these tools can feel like a helpful research assistant. Most lawyers aren’t using ChatGPT to write entire documents, but many are turning to it for help with research and drafting summaries. The issue is that not all lawyers fully understand how AI works. One attorney fined in 2023 admitted he thought ChatGPT was just a better version of Google. He didn’t realise it could make up cases altogether.

Cases of AI mistakes in court are growing

In a recent case, lawyers representing journalist Tim Burke used ChatGPT’s “deep research” tool to write a legal filing. That document contained several false quotes and citations. Florida judge Kathryn Kimball Mizelle found nine fake references in the document and removed it from the court record. She allowed the lawyers to resubmit, and one of them, Mark Rasch, accepted full responsibility for the errors. He explained that he used ChatGPT and Westlaw’s AI tools for help.

This isn’t an isolated incident. Lawyers working for AI firm Anthropic admitted to using their own AI system, Claude, in a copyright case. The filing had an incorrect citation and listed the wrong authors. In another case, a Minnesota expert supporting a law on deepfakes used ChatGPT for help with citations, which led to two errors and one misattributed author.

These mistakes matter. In California, a judge nearly ruled in favour of an argument based on fake case law before discovering that the citations were entirely made up. “I read their brief, was persuaded by the authorities they cited, and looked them up—only to find they didn’t exist,” wrote Judge Michael Wilner.

Legal experts like Andrew Perlman, dean of Suffolk University Law School, say that while these cases are serious, they are not the whole picture. Many lawyers use AI tools successfully without issues. Perlman believes AI can help sort through large amounts of information, review documents, or brainstorm legal arguments. However, he warns that AI should not replace legal judgment. It’s meant to support lawyers, not do their jobs for them.

AI isn’t going away, so lawyers need to use it wisely

A 2024 survey by Thomson Reuters found that 63% of lawyers had used AI tools, and 12% used them regularly. Many said they rely on AI to summarise case law and find sample documents. Half of the lawyers surveyed said exploring how to use AI more in their work was a top priority. One respondent noted that a lawyer’s value is as a “trusted advisor,” not just someone who drafts documents.

Still, the risks are clear. Lawyers must always double-check the information AI gives them. Perlman points out that even before tools like ChatGPT came along, lawyers sometimes submitted incorrect citations due to time pressures. At least in those cases, the citations were real, even if irrelevant.

One of the bigger problems now is overtrust. ChatGPT’s output looks polished and confident, which can mislead users into believing it is correct. Election lawyer and Arizona state representative Alexander Kolodin said he treats ChatGPT like a junior associate. He has used it to draft laws and amendments, but he always checks the citations carefully. “You wouldn’t send out an associate’s work without checking it,” he said. “Same goes for AI.”

Kolodin uses both ChatGPT’s advanced tools and the AI features in LexisNexis. In his experience, LexisNexis is more prone to errors than ChatGPT, although both have improved.

The issue has become so important that in 2024, the American Bar Association (ABA) issued its first guidance on lawyers using AI. It stressed that lawyers must remain competent in technology and understand the benefits and risks of generative AI. The guidance also recommends lawyers think carefully about sharing sensitive case information with AI systems and whether they should tell their clients when AI is used.

Perlman believes AI will continue to change the legal world. “At some point, we’ll worry less about lawyers using AI and more about those who don’t,” he said. But others, like Judge Wilner, remain sceptical. He wrote, “No reasonably competent attorney should outsource research and writing to this technology without checking it first.”
