With the continued improvements in artificial intelligence, people frequently ask me if I’m worried about being replaced by a computer. The answer is yes. Today’s AI can produce incredible stuff. It’s no giant leap to think that people may use it to create their own wills – without having to endure corny jokes.
Someday lawyers may be replaced by computers. But not just yet.
One of the first classes you take in law school is ‘Legal Research and Writing,’ which teaches you how to research and cite cases. Our professor first taught us to research strictly using books. If we started using computers, he explained, we’d “never learn how to do it for real.”
He was right. It took a fraction of the time to find cases using a computer. Even so, our professor warned us to always double-check our findings with the books.
My niece just finished her second year of law school, where she now learns to research exclusively on the computer. But, as my professor would point out, can you trust a computer?
Apparently not. Peter LoDuca is an attorney with Levidow, Levidow & Oberman, P.C., a personal injury firm in New York City. He was retained by Robert Mata to sue Avianca Airlines, Colombia’s biggest airline, for negligence after Mata was allegedly injured when a metal serving cart struck his knee on a flight from El Salvador to New York City.
After the airline moved to dismiss the suit because the statute of limitations had expired, LoDuca enlisted the help of his partner, Steven A. Schwartz, to draft a brief opposing the motion. Instead of hitting the books like I would, or researching online like my niece, Schwartz used ChatGPT.
After ChatGPT produced several cases that were right on point and supported his client’s position, Schwartz asked it, “What is your source?” The program indicated the cases were real and could be found on the online legal research services Westlaw and LexisNexis.
Shortly after Schwartz filed his brief, however, attorneys for the airline asserted that the cases Schwartz cited were made up. In one example, a nonexistent case called Varghese v. China Southern Airlines Co., Ltd., the chatbot appeared to reference a real case, Zicherman v. Korean Air Lines Co., Ltd., but it got the details wrong, including the date – the AI claimed the case was decided in 2008, when it was actually a 1996 decision.
In an affidavit, Schwartz admitted that he had used OpenAI’s chatbot for his research. He indicated that he was “unaware of the possibility that its content could be false.” He now “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”
“The court is presented with an unprecedented circumstance,” said Judge P. Kevin Castel, who is overseeing the dispute. “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
Because Schwartz isn’t admitted to practice in the Southern District of New York, LoDuca is the attorney of record on the case. Accordingly, both attorneys will have to appear before Castel to face possible sanctions. Like a high school student whose lab partner got caught cheating, LoDuca, in his affidavit, asserts he was not involved in the malfeasance and “had no reason to doubt the authenticity of the case law” fabricated in the document. But he says he has worked with Schwartz for 25 years and cannot recall him ever seeking to “mislead” a court.
Maybe Castel will throw the book at Schwartz. And suggest he use it.