Do not delegate the thinking

What a judge sees when counsel outsources their professional judgment to a machine

Justice Steven Hinkley

I have seen something happen, and I will describe it in general terms because the specifics matter less than the pattern.

A lawyer submits a brief. The research looks thorough. The citations are formatted correctly. The argument flows almost too smoothly. And then I check a case reference, and it does not exist. It’s not a typo or a parallel citation problem; the case was never decided. The court it supposedly came from has no record of it. The lawyer has cited a ghost.

This is not a hypothetical drawn from an American headline. This is happening in Canadian courts. And from where I sit in Alberta, the question is no longer whether artificial intelligence will change legal practice; it’s whether the profession will take seriously what it owes the court when it uses these tools.

The tool isn’t the problem

I should be clear about where I stand. I am not against AI. I have written about its potential in the Canadian justice system and have taught sessions on its responsible use. I believe it can make law more accessible, more efficient, and in some ways more fair. The technology itself is not the problem.

The problem is what happens when a lawyer treats AI-generated outputs the way they would treat a memo from a trusted senior associate. They read it, and if it sounds right, they file it. They skip verification and hand over their professional judgment to a tool that has no professional judgment of its own.

A large language model does not know what a case stands for, nor does it understand its ratio decidendi. It also doesn’t distinguish between a binding authority and an obiter comment from a lower court in another jurisdiction. What it does, and does well, is produce text that looks and reads like competent legal writing. The surface is polished. Underneath, there may be no substance at all.

Supervision is not optional

Every lawyer in Alberta owes a duty of competence. The Rules of Professional Conduct are explicit: a lawyer must perform work to the standard of a competent lawyer. Using a research tool does not change this obligation. It does not matter whether the tool is Westlaw, a summer student, or ChatGPT. The lawyer who signs the brief is responsible for everything in it.

I call this the duty of supervision, and I think it deserves more attention than it is getting.

When a lawyer delegates research to a junior, they review the work, check the citations, and read the cases. They satisfy themselves that the analysis holds up. No competent lawyer would file a memo from an articling student without reading it first. Lawyers must hold AI to the same standard, and in practice to a higher one, because the failure mode is different. A junior who cannot find a case on point will say so. A language model will invent one and present it with confidence.

What I expect from counsel

From the bench, I don’t need to know every detail of how a brief was prepared, but I do need to know I can rely on what is in front of me. When a citation turns out to be fabricated, the damage goes beyond that single case. It slows the proceeding, erodes the trust between bench and bar, and creates a record problem. It also raises a question I should not have to ask: Did counsel actually read this before they filed it?

Here is what I expect. Every case cited in a submission should exist. Every proposition it is cited for should be accurate. If AI was used in preparing the brief, I want counsel to be able to say honestly that they verified the output. Not by assuming it was correct or because it looked right, but by checking.

This is not an unreasonable expectation. It is the same expectation the profession has always had. What has changed is the source of the error.

The access to justice dimension

This matters to me because of where I sit. Grande Prairie serves a large geographic area, and many of the people who come before my court do not have lawyers. While some work with duty counsel, others are self-represented and navigating a system they have never encountered before.

AI could be transformative for these litigants. Plain-language tools can help a self-represented party understand the court process. Document assembly can make it possible to file materials without a lawyer. Translation and summarization tools can bridge the gap between legal and everyday language. I want these tools to exist and to work well.

But risk runs in this direction too. If AI tools produce unreliable legal information and people rely on that information without knowing how to check it, the harm falls hardest on those who can least afford it. A lawyer who files a hallucinated citation gets a stern conversation and potentially an adverse costs award. The duty of supervision catches the error, or at least assigns responsibility for it.

A self-represented litigant has no such duty and no infrastructure behind them. They have no professional obligation to verify what the tool produced, no training to spot a fabricated case citation, and no insurer if things go wrong. They trust the tool because they have nothing else, and if the tool is wrong, they bear the consequences alone. They lose the case, their home, or custody of their children.

The profession’s duty of supervision matters. But it only governs the profession. For the people most likely to turn to AI tools, and most vulnerable to their failures, no equivalent safeguard exists. That gap is where the real work needs to happen.

What comes next

Courts across Canada are beginning to address this. Some have introduced AI disclosure requirements, while others are developing practice directions. The Canadian Judicial Council is considering guidance. The CBA’s 2026 resolution on AI’s impact on the legal profession is a welcome step.

The rules we need are not complicated. Verify what you file. Disclose what you used. Understand the limits of the tool. Do not delegate the thinking.

None of this should feel new. The duty of competence has always required lawyers to take responsibility for their work product. AI does not create a new obligation. It reveals whether lawyers were taking the old one seriously.