Why do lawyers keep using ChatGPT?

By News Room | June 1, 2025 | 8 min read

Every few weeks, it seems like there’s a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?

The answer mostly comes down to time crunches, and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he thought ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator — one that could give you either correct information or convincingly phrased nonsense.

Andrew Perlman, the dean of Suffolk University Law School, argues that many lawyers are using AI tools without incident, and that those who get caught with fake citations are outliers. “I think that what we’re seeing now — although these problems of hallucination are real, and lawyers have to take it very seriously and be careful about it — doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said.

In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they’ve used AI in the past, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms or sample language for orders.” The attorneys surveyed by Thomson Reuters see it as a time-saving tool, and half of those surveyed said “exploring the potential for implementing AI” at work is their highest priority. “The role of a good lawyer is as a ‘trusted advisor’ not as a producer of documents,” one respondent said.

But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.

In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After discovering that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle, of Florida’s middle district, ordered the motion to be stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately let Burke’s lawyers, Mark Rasch and Michael Maddux, submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he used the “deep research” feature on ChatGPT pro, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.

Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s Claude AI to help write an expert witness declaration submitted as part of the copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he used ChatGPT to help organize citations in a declaration he submitted in support of a Minnesota law regulating deepfake use. Hancock’s filing included “two citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed authors for another citation.

These documents do, in fact, matter — at least in the eyes of judges. In one recent example, a California judge presiding over a suit against State Farm was initially swayed by arguments in a brief, only to find that the case law it cited was completely made up. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” Judge Michael Wilner wrote.

Perlman said there are several less risky ways lawyers use generative AI in their work, including finding information in large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or possible opposing views. “I think in almost every task, there are ways in which generative AI can be useful — not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but in order to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” Perlman said.

But like anyone using AI tools, lawyers who rely on them to help with legal research and writing need to be careful to check the work they produce, Perlman said. Part of the problem is that attorneys often find themselves short on time — an issue he says existed before LLMs came into the picture. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue that they claimed to be addressing,” Perlman said. “It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations, they don’t properly check them; they don’t really see if the case has been overturned or overruled.” (That said, the cases do at least typically exist.)

Another, more insidious problem is the fact that attorneys — like others who use LLMs to help with research and writing — are too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be so well crafted,” Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT as a junior-level associate. He’s also used ChatGPT to help write legislation. In 2024, he included AI-generated text in part of a bill on deepfakes, having the LLM provide the “baseline definition” of what deepfakes are; then, “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main Democratic cosponsor but otherwise wanted it to be “an Easter egg” in the bill. The bill passed into law.

Kolodin — who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the result of the 2020 election — has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he just checks the citations to make sure they’re real.

“You don’t just typically send out a junior associate’s work product without checking the citations,” said Kolodin. “It’s not just machines that hallucinate; a junior associate could read the case wrong, it doesn’t really stand for the proposition cited anyway, whatever. You still have to cite-check it, but you have to do that with an associate anyway, unless they were pretty experienced.”

Kolodin said he uses both ChatGPT’s pro “deep research” tool and the LexisNexis AI tool. Like Westlaw, LexisNexis is a legal research platform primarily used by attorneys. In Kolodin’s experience, LexisNexis’s AI has a higher hallucination rate than ChatGPT, whose hallucination rate he says has “gone down substantially over the past year.”

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on attorneys’ use of LLMs and other AI tools.

Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance advises lawyers to “acquire a general understanding of the benefits and risks of the GAI tools” they use — or, in other words, to not assume that an LLM is a “super search engine.” Attorneys should also weigh the confidentiality risks of inputting information relating to their cases into LLMs and consider whether to tell their clients about their use of LLMs and other AI tools, it states.

Perlman is bullish on lawyers’ use of AI. “I do think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think that at some point, we will stop worrying about the competence of lawyers who use these tools and start worrying about the competence of lawyers who don’t.”

Others, including one of the judges who sanctioned lawyers for submitting a filing full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material.”
