
The missing defendant in Canada's fictitious case law crisis

The legal system has sanctioned lawyers for filing fake cases and penalized self-represented litigants for relying on them. But it hasn’t asked a single question of the companies whose products fabricate law and profit from the result.


Canadian courts and tribunals face a fictitious case law problem driven by AI, and they have spent the past two years doing something about it.

Judges have flagged fictitious citations from the bench. Courts have issued practice directions requiring parties to disclose when they use artificial intelligence. Lawyers who submit fake cases have faced costs orders and fines. Self-represented litigants have had claims dismissed over non-existent citations. The legal profession has hosted panels, published guides, and sounded the alarm, over and over again.

And the problem is getting worse.

What the data shows

Between January 1, 2024, and April 13, 2026, Canadian courts and tribunals flagged at least 249 fictitious case citations in 138 decisions, spanning 45 different courts and tribunals nationwide. In 102 (74 per cent) of those decisions, courts found or presumed that AI tools had generated the fake cases. The trajectory tells its own story: seven decisions in 2024, 80 in 2025, and 51 to date in 2026. Every institutional response so far has targeted the humans in the system. None of it has slowed the curve.

The system has put everyone on notice, except the AI companies whose products created the problem in the first place.

How AI fabricates law

Most publicly available generative AI tools do not search a legal database when a user asks for case law. They cannot: they either have no access to legal databases, or the databases have safeguards in place to block them.

Instead, they predict what a case citation should look like, based on patterns in their training data. The tool assembles a court name, a year, a citation format, and a legal proposition that sounds right. The result looks like a real case and reads like one, but no court ever decided it.

Here is a thought experiment. Ask yourself: what is the leading case on spousal support in Ontario? If you do not know the answer and cannot check, what is the most likely citation? Probably something like Smith v. Smith, 2018 ONSC 1234. Smith is a very common surname in Canada. ONSC produces more decisions than any other court. You just built a plausible-sounding case out of pattern recognition, not knowledge, and that is exactly what generative AI does.

The difference is that you know you guessed. The AI does not. It does not flag uncertainty. It does not warn the user that the citation might not exist. If the user pushes back, the tool apologizes and generates something else with the same assurance.

Now consider who is using these tools. In 113 (82 per cent) of the 138 decisions in the dataset, the person who submitted fictitious cases was navigating the legal system without a lawyer. Many of these people turned to AI for the same reason they represent themselves: they cannot afford professional help. For someone without legal training, a confident and organized list of case citations from a chatbot carries real authority. There is no obvious signal that anything has gone wrong. And how can they even tell? Unlike a lawyer, who might recognize a suspicious citation on instinct, a self-represented litigant has no frame of reference to question what the tool produces.

AI companies know this happens. Their response has been a small disclaimer at the bottom of the screen: "AI can make mistakes." That is the digital equivalent of fine print on a product that is injecting fabricated law into a justice system.

The problem is escalating

The data captures only what judges and adjudicators catch and write about. Fictitious citations that slip past the bench, past opposing counsel, and past the parties themselves never show up in this research. The figure of 249 is a floor, not a ceiling.

And the problem may no longer be limited to parties’ submissions. In March 2026, La Presse reported on a Quebec Superior Court decision that appears to contain fictitious case citations in the court’s reasons. No one has definitively established whether AI was responsible. But the implication is hard to ignore: if AI-generated fabrications can reach a court's written decisions, the problem has outgrown the safety nets the system has built to contain it.

The missing defendant

The legal system has sanctioned lawyers for filing fake cases. It has penalized self-represented litigants who relied on AI without knowing what it would produce. It has asked judges to catch what the parties missed. It has issued rule after rule, directive after directive, all aimed at the humans in the chain.

It has not, as far as anyone can tell, asked a single question of the companies whose products fabricate law, ship it to millions of users without meaningful safeguards, and profit from the result.

At some point, the system has to stop asking humans to catch the fictitious cases the machines create. We have to start asking the makers of those machines why they keep creating them.