Tuesday, November 5, 2024

Attorney-Client Privilege at Risk from Use of Generative AI

One of generative AI’s most powerful features—the ability to learn from the questions humans ask—makes it a minefield for attorneys trying to use the technology while protecting confidential and privileged client information.

Privilege covers certain confidential communications between attorney and client when the client is seeking legal advice. Privilege can be waived if information is disclosed to an outside party—meaning the communication might not be protected from discovery by the other side during litigation, for example.

Public-facing generative AI models, such as the free version of ChatGPT that runs on GPT-3.5, pose a tangible threat to confidential information: the models could repeat information from one user’s query to the next user who asks about something similar.

“The problem is, the next person that comes along might be your opposing counsel,” said Steve Delchin, a senior attorney at Squire Patton Boggs. A canny opposing counsel might even be able to wheedle the information out of the model.

Confidentiality “is the number one duty that is implicated when lawyers use generative AI,” Delchin said.

But the trouble doesn’t begin only when confidential information ends up in the wrong hands, lawyers said. The mere risk that a third party could see it means putting that information into public-facing tools may be a breach of legal ethics. It’s as if the lawyer left sensitive documents on a park bench: it’s possible no one will find them, but the act of leaving them in public is itself the problem.

Caution about these risks has filtered to the top of the American legal system. Chief Justice John Roberts raised the point in his year-end report on the federal judiciary, writing that “some legal scholars have raised concerns about whether entering confidential information into an AI tool might compromise later attempts to invoke legal privileges.”

‘Smaller walls’

AI developers have recognized the need for businesses, including law firms, to wall off their data. They offer enterprise models that train only on a business’s own data, where inputs don’t feed back into the public models. But those offerings are not a guaranteed solution to confidentiality and privilege risks.

For example, a law firm might set up a generative AI model that trains only on data and queries generated within the firm, keeping that information sealed inside the firm’s model rather than feeding it back into the ocean of information the public-facing models draw on.
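
Some firms add a scrubbing layer on top of that, so client identifiers never reach any model, internal or external, in the first place. The snippet below is a minimal sketch of that idea under stated assumptions: the patterns, names, and `redact` helper are illustrative placeholders, not any vendor’s tooling, and a real deployment would rely on a vetted de-identification pipeline.

```python
import re

# Hypothetical patterns for illustration only; real de-identification
# needs far more than two regular expressions.
CLIENT_TERMS = re.compile(r"\b(Acme Corp|Jane Doe)\b", re.IGNORECASE)
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace client identifiers before text leaves the firm's systems."""
    text = CLIENT_TERMS.sub("[CLIENT]", text)
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

prompt = "Draft an indemnification clause for Acme Corp (SSN 123-45-6789)."
print(redact(prompt))
# Draft an indemnification clause for [CLIENT] (SSN [REDACTED-SSN]).
```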

Even so, it’s possible that allowing anyone within the firm to access that information could amount to a waiver of attorney-client privilege, said Nick Peterson, of counsel at Wiley.

A law firm using its own model, trained on its own data, to draft a brief that’s intended to be public-facing shouldn’t be an issue, said James McPhillips, a partner in Clifford Chance’s global technology group.

But if an attorney asks the model to produce a clause in a contract, “theoretically, is there a concern that it’s going to produce for one client a contract provision that came from another client, and is that a problem?” he asked. “Maybe, maybe not.”

AI platform builders are already responding to this problem, said Megan Ma, assistant director of the Stanford Program in Law, Science, and Technology and the Stanford Center for Legal Informatics. OpenAI—the maker of ChatGPT—recently announced features in its enterprise model that allow teams within a company to wall off their data.

“It seems to me that there is a direction toward increasing customization, personalization, and being able to build smaller walls” around data, Ma said, which will help mitigate these risks. “An emerging practice is going to be, how far are you layering the way you store your data?”
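
One concrete reading of those “smaller walls” is strict per-matter segregation: whatever retrieval layer feeds the model only ever sees documents filed under the requesting matter, so a provision drafted for one client cannot surface for another. Below is a minimal sketch of that design, assuming a toy in-memory store; the `MatterStore` class and matter IDs are hypothetical, not any product’s API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    matter_id: str  # the client matter the document belongs to
    text: str

class MatterStore:
    """Toy per-matter store: retrieval never crosses a matter boundary."""

    def __init__(self) -> None:
        self._docs: dict[str, list[Document]] = {}

    def add(self, doc: Document) -> None:
        self._docs.setdefault(doc.matter_id, []).append(doc)

    def retrieve(self, matter_id: str, query: str) -> list[Document]:
        # Only documents filed under the requesting matter are searched,
        # so cross-client leakage is ruled out by construction.
        pool = self._docs.get(matter_id, [])
        return [d for d in pool if query.lower() in d.text.lower()]

store = MatterStore()
store.add(Document("matter-A", "Indemnification clause drafted for client A"))
store.add(Document("matter-B", "Indemnification clause drafted for client B"))
print(store.retrieve("matter-A", "indemnification"))  # only matter-A's clause
```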

Beware chatbots

Firms that set up client-facing chatbots on their website—for example, a personal injury firm using a chatbot instead of a human attorney to ask new clients for details of their cases—should be wary, said Ken Withers, deputy executive director of the Sedona Conference, a legal research institute.

If the firm’s chatbot isn’t sealed off and data typed into it feeds back into a larger public model, it could compromise confidentiality and privilege.

“There’s a real question as to whether it really is in confidence, or whether the parties have a reasonable expectation of privacy in that conversation,” he said.
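
A firm that does deploy an intake bot can enforce that seal in code as well as policy. Here is a minimal hypothetical sketch, where the `enterprise_isolated` flag and the `call_firm_model` stand-in are assumptions for illustration, not any vendor’s API:

```python
def call_firm_model(transcript: str) -> str:
    # Stand-in for the firm's sealed model endpoint (hypothetical).
    return f"[model response to {len(transcript)} chars of intake]"

def forward_to_model(transcript: str, *, enterprise_isolated: bool) -> str:
    # Fail closed: a transcript that feeds back into a public model could
    # compromise both confidentiality and privilege.
    if not enterprise_isolated:
        raise RuntimeError("refusing to forward intake to a non-isolated model")
    return call_firm_model(transcript)

print(forward_to_model("Client slipped on ice outside the store ...",
                       enterprise_isolated=True))
```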

The set-up can also raise a “bizarre question” about the technology’s sentience, Withers said: Is the chatbot “simply a mode of communication, or is the chatbot acting more like a party to the communication” by asking follow-up questions or even offering advice?

If the latter, the client may have had a conversation they expected to be protected by attorney-client privilege—but they were speaking with a robot. Opposing counsel in discovery could request the chatbot transcript, arguing the AI isn’t an attorney, so the conversation isn’t covered by attorney-client privilege.

That argument isn’t certain to succeed, because privilege should apply if the client thought the conversation was privileged. But that won’t stop lawyers from trying to make the argument, Withers said.

Vetting vendors

Legal technology vendors regularly announce they’re baking more generative AI capabilities into their products—heightening confidentiality and privilege risks for lawyers, whose “duty of supervision” covers technology tools.

In conversations about AI and privilege, “I expect one of the biggest questions we’ll be seeing is: If you use a platform to do something, what controls do you have or what restrictions can you put on the platform so your data is segregated?” said Ron Hedges, a former US magistrate judge and member of the New York State Bar Association AI task force, and principal of Ronald J. Hedges LLC.

Attorneys need to question vendors before signing up, Delchin said: What type of confidential information is going into the tool? How is it stored? Who has access to that information? Does the company intend to gain an ownership interest in the data uploaded to its tool? What safeguards does the vendor have in place to preserve confidentiality? Where does liability fall if there’s a security incident? Will the firm have access to client data if it fires the AI vendor, or if the company goes under? And will the vendor feed the client’s information, in anonymized or aggregate form, back into its system to improve it?

Litigation anticipated

Questions about generative AI’s implications for legal work will likely be litigated, said McPhillips, the Clifford Chance partner. That process will probably begin in the next year or two, as more lawyers start using generative AI tools.

“We’ll have to look at a number of use cases, case-by-case examples, to help formulate guidance on how this works,” he said. “My personal opinion, or the Chief Justice’s personal opinion or any others, is really just a personal opinion at the moment until courts catch up and give their opinions on it.”

In the meantime, lawyers should avoid putting confidential information into public-facing models and inform clients when they’re using generative AI, he said.

Cloud computing, email, and even the telephone—all technologies that move attorney-client communications farther away from a face-to-face conversation—provoked lawyers’ concerns about confidentiality and privilege when they debuted. Those concerns have all been assuaged over time.

“As the legal profession has learned more about how AI works and the security around especially the commercial license versions of these, my expectation is that there should not be confidentiality or privilege concerns” that would be different than cloud email or similar technologies, McPhillips said.
