Flagging harmful content

Just how is the government proposing to tackle it?

On the final day of the spring sitting of the House of Commons, the Liberal government released a consultation document related to its plans to tackle online hate. At the same time, it tabled Bill C-36, which proposed to re-establish Section 13 of the Canadian Human Rights Act, create new criminal offences around hate propaganda, and add new members to the Canadian Human Rights Tribunal. While the intention had originally been to introduce both in a single piece of legislation, the government chose instead to separate them, no doubt because of the controversy the consultation would draw.

A lot will depend on the outcome of the election, but the consultation document proposes the creation of a Digital Safety Commission of Canada to support three bodies responsible for dealing with harmful online content: the Digital Safety Commissioner of Canada, which would administer the legislated framework; the Digital Recourse Council of Canada, which would act as an appeal body; and an Advisory Board that would provide expert advice to both about their processes and decision-making, without involving itself in specific content-moderation decisions by either.

The proposed system would target five categories of harmful content: terrorist content; content that incites violence; hate speech; the non-consensual sharing of intimate images; and child sexual exploitation content. It also proposes that an online platform be subject to a 24-hour takedown notice if any of its content meets these legislated definitions. Most observers say this is unworkable.

Richard Marceau, vice president of external affairs and general counsel at the Centre for Israel and Jewish Affairs, says he is disappointed that the government chose to split up the consultation from Bill C-36. He had been hoping for swifter action from the government.

"Hopefully, this gets taken on by whoever wins the election," Marceau says. "There is no doubt that online hate needs to be tackled seriously, because online hate has direct, real-life consequences. People spend a lot more time online than they used to, and the danger of hate and radicalization is even more present than before."

According to Marceau, there are a number of promising elements, including an independent regulator for online content and the development of clear regulations for social media companies.

"There was a report that came out that showed that 84% of antisemitic content that was flagged was not taken down," Marceau says. "The industry cannot be left to its own devices because it has shown that it cannot self-regulate in a way that is needed to combat this cancer."

Marceau also appreciates that the consultation proposes an easier mechanism to flag hateful content.

"We are happy that the government took our suggestion that the definition of hatred matches the Supreme Court of Canada jurisprudence in Whatcott, and it's an essential point," says Marceau. "There will be a big discussion about freedom of speech. Whatcott was very clear that the bar for what is considered hated and thus can be sanctioned is very high. It's not a question of stopping people from saying what they want, or censoring people."

One challenge is that deep-pocketed companies can easily shrug off fines. Marceau says the government should consider making directors of social media companies personally liable if they don't take down hateful content when it is flagged.

Emily Laidlaw, the Canada Research Chair in Cybersecurity Law at the University of Calgary, says the consultation shows the challenge in balancing free expression and the right to equality and privacy.

"They need to go back and re-scope and redraft portions of this to properly balance those different rights," says Laidlaw.

Laidlaw is in favour of the idea and structure of the Digital Safety Commission, provided its scope and powers are limited and the body has the resources to do its job. "Unless they have the right people with the right training, this will be a disaster," says Laidlaw.

David Fraser, a privacy lawyer and partner with McInnes Cooper in Halifax, is less impressed. "The consultation is largely a sham," he says. 

"Most of the people I've spoken to internationally, who work for companies who have to deal with these [proposals] across multiple jurisdictions, the consensus is that this is one of the worst."

Of the five categories of harm targeted, several are already criminal offences, he says. And when it comes to inciting violence, police already have a hard time determining whether or not to lay charges.

Our courts have a hard enough time interpreting these categories with precision, he notes. It's therefore a lot to ask of online platforms to moderate content and "build systems to detect and render inaccessible anything that fits into these categories." What's more, "there also has to be a mechanism by which anybody can complain, and that complaint has to be handled within 24 hours – and if they get it wrong, they can be subject to significant penalties," Fraser says.

Fraser adds that he has concerns about the mandatory reporting requirements regarding user data involved in child pornography, which would be imposed without a warrant or judicial oversight. If the user is located in Europe, turning over such information would violate the EU's General Data Protection Regulation, or GDPR. The proposal also allows for a court order compelling all internet service providers to block websites deemed to be egregious violators.

"This opens the door for other website blocking orders on other bases," says Fraser. "It will be very difficult for anybody to give effect to [this proposed legislation]."

Laidlaw says that the obligation to monitor websites is very problematic because it creates a huge privacy risk.

"You're essentially saying that a private body needs to actively monitor and surveil all of the different communications on its platform," says Laidlaw. "What's being proposed here is just a blanket obligation to monitor. I haven't really seen that from a Western government; that's so broad in scope."

Lex Gill, a Montreal lawyer affiliated with the Citizen Lab but speaking on her own behalf, says she is frustrated that the government was aware of the constitutional and civil liberties issues with what it was proposing in the document.

"There was a ground-breaking report put out by LEAF talking about intermediary liability issues as it relates to violence against women in particular," Gill says. "It was incredibly constructive. There is not even a hint that the government read it before producing this document."

Gill also notes that nobody previously consulted by the government asked for new powers for police and intelligence agencies, and that some of what is being proposed in this regard raises the same constitutional issues as previous anti-terrorism bills.

"Censorship technology is inherently surveillance technology," Gill says. "You can't have infrastructure that removes content at scale – which is what this proposal is about – without building technology that monitors, judges, categorizes and surveils individual users' activity to facilitate content removal."

Gill adds that once this technology is in place, it becomes trivially easy to add new content categories to surveil and filter.

"It becomes function creep – that technology that's built for one purpose is easily repurposed for another," says Gill.

As for the 24-hour takedown notices, escalating flagged material for proper review requires more time than that allows.

"The 24 hours makes a nice soundbite on the part of government, but it's unworkable," says Fraser who adds that websites are likely to err on the side of censorship. "If they leave something up, and the Commissioner disagrees with that, then they can be subject to penalties. There's no downside to removing content other than removing lawfully-existing content from the internet that is otherwise completely legal."

There is empirical evidence that a takedown regime like this can lead to the removal of legal content, says Laidlaw. But the appeal process built into the proposed regime can be a way of addressing the problem. According to Marceau, a means to take down hateful content quickly, coupled with an efficient appeal mechanism, is the way to go.

"Technology is moving very fast, and artificial intelligence is moving very fast, and the tech industry should also put some resources there because it is now the public square," says Marceau. "A big part of the public square is in corporate hands, and that comes with responsibilities, and one of them is that they put the right amount of resources in to ensure that it's not taken over by hate-mongers."

Nor are automated takedown systems the solution. They tend to have bias built into them, and therefore require human review and an appeal process, which involves more flexible timelines.

If one wanted a process that ensured procedural fairness and fit within the Charter's guardrails, Fraser says, it would be an expedited process similar to obtaining a peace bond: an application made in a provincial court, with every respectable platform obeying a court-ordered takedown.

There are also concerns that the proposed Digital Safety Commissioner would be an activist eager to remove content. A judge would be considered more impartial, says Fraser.

But that could also be unrealistic given the potential volume of cases that would wind up in the courts. "A court process isn't fast," Laidlaw notes. "It may not be faster with this recourse body they're proposing, but given the volume and speed of this, the idea that you take some of this out of the court system and get a specialized body of experts in the field, it makes sense that this could potentially work."

Ultimately, efficiently addressing online harms is never going to be easy. "If we had this prized solution, it would be out there already," Laidlaw says. "But I'm happy they're trying."