
Proving AI's misrepresentations

What does the Air Canada chatbot case tell us about AI liability?


For those keeping tabs on the bleeding edge between information technology and the law, there were two takeaways from the recent tribunal ruling on Air Canada's rogue chatbot.

First, when the rules finally get written on AI and corporate liability, they probably won't be written in small claims court.

Second, most people hate chatbots, so maybe the best publicity in such cases is no publicity.

"Would Air Canada have been better off merely settling with this guy?" asked Barry Sookman of McCarthy Tetrault, an intellectual property and tech law specialist. "We don't know that Air Canada didn't try."

Quick recap: Jake Moffatt was told by the airline's online chatbot that he could still receive a bereavement discount within 90 days after his flight. When he tried to claim reimbursement, Air Canada told him the bot's advice was wrong.

When the case went to small claims, the airline argued that the chatbot was a separate entity (even though it was part of the company's website) and that the airline couldn't be held responsible for the bot's faulty advice. The Civil Resolution Tribunal shot Air Canada down and ordered a partial refund.

Of course, the story has traveled far and wide online since then. Partly, that's because many lawyers were hoping it might resolve some of the unanswered questions about artificial intelligence and liability.

Partly, it's because some observers assumed the chatbot was AI-driven. It wasn't — an Air Canada spokesperson said it was "developed before generative AI capabilities … became available." The tribunal never heard evidence about the chatbot's nature, and Air Canada didn't discuss its programming in its defence. (This didn't stop the U.S.-based online news outlet The Hill from referring to Air Canada's "AI chatbot," but that's another issue.)

So for the lawyers who follow these things, the Air Canada chatbot case was an exercise in frustration. At best, it suggested a thought experiment about how a company might go about defending itself if one of its AI tools started talking nonsense to its customers.

"It's a case that touches on a lot of questions we've been asking ourselves about AI and corporate liability but only hints at answers," says Kirsten Thompson, a partner and national lead of Dentons' privacy and cybersecurity group.

If the case had been about a rogue AI chatbot (and if Air Canada had felt inclined to mount a more robust defence over a measly $483 refund), the airline might have named the company that developed the AI as a third party.

"The big unanswered question here is about the duty of care, and whether an AI company owes that to a consumer in the absence of a contract," says Meghan Bridges, a partner at Lenczner Slaght who works in commercial litigation.

"Say this was about an AI product and the company does some digging and comes back to court saying the AI product was defective. It names the AI developer as a third party so that if they lose, they can lay a claim against the company. Then the plaintiff would seek to add the AI company to the claim."

Would someone like Moffatt have a cause of action against an AI developer in the absence of a contract? Liability in negligence requires a duty of care between the parties. Would the developer owe one to a customer who never bought their product?

That's not a question Canadian courts have had an opportunity to ask or answer yet. But as more and more companies adopt customer-facing AI platforms, says Bridges, it's "only a matter of time before this issue arises."

An AI developer named as a third party could (and probably would) defend itself by arguing that the purchaser was at fault. AIs, like people, have to be trained to do their jobs.

"It's a little like a case of an employer suing a staffing agency for sending it someone who caused problems — the employer blames the agency for sending a problematic temp and the agency blames the employer's training," says Thompson.

The developer could argue the purchaser mishandled the training phase; a court would have to decide whether or how to assign liability between developer and purchaser.

"If you make an AI chatbot available to your customers, if you train it and test it, and something goes wrong, where does the negligence lie?" Sookman says. "The courts have said companies are responsible for the actions of their pre-programmed machines. Would that apply in the case of an AI? Would following established standards negate negligence liability?"

Thompson says these questions tend to crop up in law whenever a transformative new technology emerges. "At first, buyers don't really understand what they're buying," she says. "Over time, companies push back against the suppliers of the tech and the issue enters the customer's terms of service.

"There's usually something in the terms of service on websites about how something isn't meant to be taken as legal advice, for example. But it remains to be seen how the courts will deal with that."

Sookman says it also remains to be seen whether consumer protection laws would apply to a case of a consumer being misled by a wonky AI platform.

"Sometimes AI systems hallucinate — they make mistakes," he says. "If the customer knew they were dealing with an AI, it goes to reliance, which you have to prove in order to prove negligent misrepresentation. Is it reasonable to rely on the advice of an AI when you know they can be unreliable? There are an awful lot of 'ifs' in that question.

"The terms of consumer protection laws that apply to sales of products won't always apply to a service. So it's possible the customer wouldn't be able to claim negligence. But that's a question that still hasn't been addressed."

In short, as far as AI and legal liability is concerned, we're all still stuck on hold — waiting to talk to a human being.