Why some lawyers are turning off the internet to use AI
When you’re unsure whether commercial platforms are private enough, running your own model may be the answer
Lawyers at firms large and small, in every field across the country, are encountering the same problem. They’ve begun to incorporate AI into their personal lives, discovering its benefits for things like searching and summarizing documents and information, and they want to make use of this powerful technology in their practice.
But they don’t have access to an AI platform they can trust with client information or Crown disclosure. They’re not even sure that a paid service like Protégé, from LexisNexis, would be safe to use.
We all know—or at least we should—that you can’t upload sensitive client data to ChatGPT or other large commercial models not designed for law.
But aside from this, what can you do?
Do law societies condone the use of client files on Lexis or Westlaw’s AI? Does the Crown—provincial or federal—permit defence counsel to upload disclosure to proprietary AI? What about other government agencies?
In the face of this uncertainty, many lawyers I’ve spoken to—Crown, defence, and private counsel—simply abstain from using generative AI on their material.
This means forgoing the use of AI for a host of things that would be enormously beneficial, such as locating something buried in hundreds of pages of disclosure, summarizing witness statements and reports, organizing exhibits, and more.
A lawyer should still personally review disclosure. But AI could be indispensable for making use of it once it has been reviewed.
Neither law societies nor the Crown rules out the use of AI for these purposes, and tools like Protégé may be a solution. But if you’re still unsure, there’s a nuclear option that courts and governments are turning to, one every lawyer has within reach: running your own model, offline.
Before going there, it helps to understand what we can and can’t do, why hosting your own AI may be the best option, and what this entails.
Law society directives and Crown policy on privacy and AI
Most of Canada’s law societies have issued directives on the use of client material with AI, and they all draw the same lines. First, tell the client about using AI and obtain their informed consent. Second, don’t put client material into ChatGPT or other platforms without robust privacy protections, or where you can’t be certain the data won’t be used for model training.
Beyond this, however, all of the directives I’ve come across, including those from British Columbia, Alberta, and Ontario, suggest that “internal or proprietary AI platforms” with strong security protections could be acceptable so long as counsel exercises “extreme caution.”
What defence counsel can do with Crown disclosure is less clear. The boilerplate in B.C. from both the federal and provincial Crown is silent on AI. Defence lawyers are only told they must “keep the material in a secure fashion,” and not copy or “provide access” to anyone not under their supervision.
Is proprietary AI private enough?
Of course, nobody will go on record to say that Protégé or Westlaw’s CoCounsel meets these standards, and I’m certainly not saying that here!
But Lexis, for example, does seek to assure users that its platform is reliably private. The company says it doesn’t use customer data to train AI models, encrypts user data in transit and at rest, and purges conversations “after 90 days or until the user deletes (whichever occurs first).” Documents are also deleted after 10 minutes of inactivity or by deleting a “session conversation thread.”
A third-party audit of Protégé’s security is conducted annually, yet questions remain. Does “no data is used for training” mean only core models or all models? Do “session conversation threads” include everything uploaded? How is deletion verified? And where is the data stored in the meantime?
Government isn’t taking chances
The federal government’s AI policy casts doubt on whether Protégé or any similar product would be considered private enough. Public servants may only upload material into an AI system “controlled or configured by the government when the appropriate privacy and security controls are in place.”
And if an AI service stores data outside Canada, which most do, government lawyers face a further hurdle—under federal and provincial privacy law—in having to conduct assessments or obtain assurances about confidentiality.
In light of all of this, the solution for both prosecutors and courts is becoming clear: build and manage their own AI.
The Commissioner for Federal Judicial Affairs has set out a roadmap that points in this direction. I’ve heard anecdotally about projects underway at courts and Crown offices in two provinces to do the same.
The nuclear option: running your own model
Lawyers who want to use AI on client material or Crown disclosure, and to be close to certain it will remain private and secure, arrive at the same answer: running a language model on their own computer, offline.
As with most things, there are pros and cons.
The pros: It’s easy and free to try. You can be up and running in 20 minutes with a chatbot that reads documents, summarizes them, pinpoints passages, redrafts text, and more, leaving no trace anywhere but on your hard drive. And once deleted, it’s gone.
The cons: It’s not as good as the AI you’ve become accustomed to. However, the better your machine, the larger the model you can run and the more it can accomplish.
I’ve begun experimenting with this by downloading LM Studio on my four-year-old MacBook Pro and running various open-source models (with ChatGPT walking me through the setup). My computer can’t run models large enough to perform optical character recognition (OCR), so it can’t read scanned documents, which make up most Crown disclosure. It can, however, read text files and perform basic tasks with them. It doesn’t quite do this well enough to provide a real solution, but it’s promising enough to make me want a newer, faster computer to accomplish more.
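For readers who want to peek under the hood: LM Studio can also run a local server that mimics the OpenAI chat API, so a short script can feed a text file to the model with nothing leaving your machine. The sketch below is illustrative only; it assumes the local server is running on LM Studio’s default port (1234) with a model already loaded, and the file name is a placeholder.

```python
import requests

# LM Studio's local server speaks the OpenAI-compatible chat-completions API.
# The request goes to localhost only; nothing leaves your machine.
ENDPOINT = "http://localhost:1234/v1/chat/completions"

# Hypothetical file name, for illustration only.
with open("witness_statement.txt", encoding="utf-8") as f:
    statement = f.read()

resp = requests.post(
    ENDPOINT,
    json={
        # LM Studio routes the request to whichever model you've loaded.
        "model": "local-model",
        "messages": [
            {"role": "system",
             "content": "Answer only from the supplied text."},
            {"role": "user",
             "content": "Summarize the key points of this statement:\n\n" + statement},
        ],
        "temperature": 0.2,
    },
    timeout=300,
)

print(resp.json()["choices"][0]["message"]["content"])
```

None of this is necessary to use the chatbot interface, but it shows why the offline approach is attractive: the only network address involved is your own computer.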
I convey all of this not as advice but to point to where things are headed. When I upgrade my laptop in a year or two, I’ll double or triple the RAM to run a much bigger model with OCR. I estimate I’ll need to spend roughly $3,000 to get there.
Bottom line: You may decide that a tool like Protégé is reliable enough. But if not, there’s another option that gives you full control over your data. It won’t be perfect, and you may need to invest in better hardware. But it could be effective and worthwhile.