Disclosing the use of AI in law

Two Canadian courts have issued AI practice directives. But is it really necessary?

As generative artificial intelligence programs such as ChatGPT gain widespread attention, lawyers have tried using these programs as research tools, sometimes with disastrous outcomes. Two lawyers in the United States were recently fined for submitting ChatGPT-generated case citations in court that did not exist. In response, superior courts in Manitoba and the Yukon have issued practice directives requiring lawyers to disclose the use of AI tools in their submissions.

However, Amy Salyzyn, a law professor at the University of Ottawa, is concerned the directives are overly broad.

"It's wonderful to see courts engaging with these questions of technology and how technology might affect what happens in court cases, and thinking about best practices," Salyzyn says. But there are practical concerns that relate to the vagueness of the directives, she adds. "Artificial intelligence is a term of art—it's not a precise technical definition and people don't always agree on what falls under it. Even in some pretty common definitions, there are all sorts of technology that lawyers are using that would fall under the definition, whether it's commonly used legal research databases, grammar-correcting programs, and someone even used an example of using Google to look up a court address."

All of these can be described as forms of AI, meaning a plain reading of the directives would require lawyers to disclose their use. According to Salyzyn, it is also unclear what a lawyer is supposed to disclose: is it the use of the tool itself, how it works, or what functions it performs? "That gets a lot trickier because lawyers may not have that expertise and could get themselves into providing explanations that may not exactly align with what the technology is actually doing."

Canadian-trained lawyer Maroussia Lévesque, currently an S.J.D. candidate at Harvard Law School researching AI governance, says that disclosure is a good first step and that U.S. judges have not looked kindly upon reliance on AI-generated references.

"It's not too early to start thinking about how to address the advent of ChatGPT and other generative tools in the legal field," Lévesque says. "It may not be the place of judges to decide this A to Z, and that we need a broader conversation about the implications. But I think it is a wise first step for them to at least require disclosure so that both the opposing counsel and the judge can know what they're dealing with."

Lévesque agrees that requiring disclosure of any tool that "relies on AI for the preparation of materials" or "legal research" is a bit broad, adding that we need to differentiate between front-end and back-end legal technology.

"Back-end legal tech is anything from e-discovery to predictive analytics that help lawyers do their job, so it doesn't replace them, but it helps them," Lévesque says. "There are products on the market already which can give insight into a judge's past motion rate that will be one among many data points that a lawyer can use to make decisions about their strategy. These tools are already in use."

But Lévesque points out that ChatGPT and similar turnkey tools can effortlessly produce legal advice and references, by drafting a brief, for example. Services like DoNotPay in the U.S. have ambitious plans to argue cases at the Supreme Court by feeding arguments into the earbud of a physically present "dummy lawyer." "This is their concern, and from that sense, the language is a little bit broad because it seems to catch both front-end and back-end uses of AI," says Lévesque.

Salyzyn still wonders what exactly the problem is that the court directives are supposed to be addressing.

"Is it the fake case problem that we've seen be a problem elsewhere?" Salyzyn asks. "If that is the concern, I would have to question why the directive is necessary. Lawyers already have a professional obligation to make sure that the information they submit to the court is not deceptive or misleading."

Moreover, the courts must do their own due diligence on cited case law and confirm that the lawyer is relying on it appropriately.

"A lawyer may use an AI tool to come up with a topic sentence in a factum or help to locate evidence in a transcript, or maybe generate a table for displaying evidence in a certain a manner, and it's not clear to me why this is the court's business," Salyzyn says.

"If there is such an alarm about fake cases, I wonder if a better approach is a notice to the profession that they are aware that this risk is out there, and potentially putting lawyers on notice that they won't accept using one of these tools an excuse to present these cases," Salyzyn says.

Lévesque suggests that the language of the directives be changed to include "legal research or submissions that consist in the practice of law," to recognize that the real problem is when the tools perform acts that only lawyers are supposed to perform.

But she adds that it's okay to err on the side of caution by putting the onus on lawyers to be fully transparent.

"You leave it up to the judge's discretion to separate the wheat from the chaff, and they will understand likely that it's okay for lawyers to use e-discovery," Lévesque says. "This doesn't prohibit the use of these tools; it just puts their use on the table because we don't fully understand the stakes. Judges will make informed, context-specific calls about the weight to give the submissions."