One Year After ChatGPT’s Debut, Judges Contemplate the Impact of GenAI Use

Since ChatGPT’s public debut a year ago, lawyers and judges have been evaluating generative artificial intelligence’s impact not only on legal questions in copyright, employment, and other areas of law, but on legal proceedings themselves. As the legal community grapples with the technology’s uses, judges across the country have issued standing orders outlining how attorneys should use AI in filings, and courts are continuing to evaluate where the technology should fit in the practice of law. The US Court of Appeals for the Fifth Circuit recently said it was considering requiring attorneys to certify the accuracy of any generative AI materials filed with the court. But not all judges endorse standing orders or certification requirements as the most effective mechanism to curb generative AI misuse.

A handful of judges issued standing orders clarifying how the technology could be used in their courtrooms after a lawyer submitted a filing with citations fabricated by ChatGPT in May 2023 in Mata v. Avianca Inc., a personal injury lawsuit against an airline. The plaintiff’s attorneys were fined $5,000 as sanctions. The mishap highlighted generative AI’s “hallucination” problem, in which the technology presents incorrect information as fact, and the particular dangers it poses for legal practitioners.

Patchwork of Approaches in AI Standing Orders
Judge Brantley D. Starr, of the US District Court for the Northern District of Texas, was the first to issue an order, on May 30, requiring attorneys to certify the accuracy of their filings. Starr said he’d been drafting his order before news broke of the Avianca filing. A continuing education conference for judges in the Fifth Circuit had alerted him to the technology’s bias problems and the possibility that generative AI might produce fake facts. But the Avianca incident made Starr realize he needed to change his order.

“I realized my flawed language in there would let anyone who uses ChatGPT to side check with ChatGPT, and that wasn’t really my intent,” Starr said. He revised the order to require generative AI users to check language used in filings against traditional databases.

Starr said he viewed the order as a way to prevent a violation that merits a sanction. “I have the pound of cure in my ability to sanction lawyers. I would rather have the ounce of prevention, so that’s my goal,” he said. He added that he thought his order had a shelf life as the technology improves and attorneys become more familiar with its capabilities.

More than a dozen other judges have followed with their own standing orders governing generative AI use in their courtrooms. The orders range in scope from merely warning attorneys about generative AI’s pitfalls without prohibiting its use, to requiring attorneys to disclose how generative AI was used, to prohibiting AI use outside of search engines. Most recently, the four judges of the US District Court for the District of Hawaii issued a general order on Nov. 14 on the use of “unverified sources” in AI-generated filings that requires counsel to declare how a filing was drafted and that its accuracy has been confirmed. “The scope of the required declaration is that required by Rule 11,” the order says.

Former Judge Paul W. Grimm, who retired after 26 years on the bench of the US District Court for the District of Maryland, says this patchwork of generative AI orders isn’t as clear as it could be. “If a judge just sits down and shoots out an order, no one’s had a chance to look at that and say, wait a minute, judge, do you really need this?” Grimm said. “I just inherently prefer going to a local rules approach because that way you publish them and people have a chance to see them.”

Along with co-authors Maura Grossman and Daniel Brown, Grimm argued in a paper published in the most recent issue of Duke Law School’s journal Judicature that courts should adopt local rules instead. Such an approach would allow other judges and attorneys to comment on a proposed rule, they say, which would surface unintended adverse consequences and better address questions of scope. “Individualized standing orders are unnecessary, create unintended confusion, impose unnecessary burden and cost, and deter the legitimate use of GenAI applications that could increase productivity and access to justice,” Grimm and his co-authors write in the article.

But Katherine Forrest, who served for seven years as a judge on the US District Court for the Southern District of New York and is now a partner at Paul Weiss, says the lack of uniformity in the orders isn’t a problem. “Lawyers are used to looking at and grappling with a whole host of standing orders,” she said.

Forrest says it’s important to view the orders in the context of how different sectors reacted to generative AI’s emergence overall. “All at once, there became a series of activities in the commercial sphere, the academic sphere, and the courts where there was an attempt to sort of freeze frame,” Forrest said. She noted that generative AI use was brushing up against rules about unfair practices in a variety of sectors, and that courts wanted litigants to understand how confidentiality and accuracy issues bear on lawyers’ professional obligations.

“The public has had to go through its own learning process where it has come to terms with what some of the accuracy issues can be,” Forrest said. Even though these tools could greatly impact legal practice and potentially widen access to justice, right now they’re “not ready for prime time” because of concerns about accuracy and confidentiality, she said.

GenAI’s Data Privacy Problems
Confidentiality and data privacy concerns prompted Judge Stephen Alexander Vaden, of the US Court of International Trade, to issue his order. It asks litigants who have used generative AI to disclose which portions of text were generated with those tools and to certify that confidential information wasn’t disclosed. “My goal wasn’t to be a Luddite. And my goal also was not to burden litigants before me with telling me or swearing that they didn’t use something,” Vaden said. “What I wanted was a notice regime and something that acted in a solemn function.”

Vaden said he wanted to warn litigants that entering confidential business information into an AI tool could implicate certain legal privileges. Given that companies like Apple and Samsung have restricted employees’ use of generative AI for work, Vaden said he feels vindicated in his decision to issue the order.

In the months since, Vaden said, he hasn’t received any notices of generative AI use in his courtroom, but he believes attorneys are following his rule. Though attorneys haven’t said they’re using AI in their filings, Vaden noted that other judges, lawyers, and elected officials have privately thanked him for issuing the order and alerting them to potential issues with the technology.

But some judges are skeptical that standing orders are the right way to delineate how generative AI should be used in the courtroom.

Orders May Not Be Best Tool
Judge Xavier Rodriguez, of the US District Court for the Western District of Texas, said judges shouldn’t issue these kinds of orders, for a few reasons. In September, he wrote a comprehensive review of AI’s impact on the practice of law for the Sedona Conference, a nonprofit research and educational institute focused on law and policy.

Rodriguez said some of the orders have an anti-technology tone. Since judges are going to have to deal with AI in evidentiary and discovery issues, “we shouldn’t be sending out that signal,” he said. He also noted that a number of orders conflated generative AI with artificial intelligence, which is a much broader field.

“If we don’t fully understand what our orders really say, we probably shouldn’t be entering those kinds of orders,” Rodriguez said. He said that state bars and ethics committees were better suited for determining how generative AI should be used in legal practice.

The Florida Bar issued a proposed advisory opinion on generative AI use on Nov. 13, and the California Bar approved guidelines for lawyers’ AI use on Nov. 16. The New Jersey and Texas state bars, as well as the American Bar Association, have created task forces studying how AI will impact the legal profession.

Despite the technology’s current pitfalls, most of the judges expressed optimism about generative AI’s ability to expand access to justice…
