
Algorithmic policing in Canada

Law enforcement agencies across the country have started to invest in predictive policing methods. But the extent of that investment is far from clear.


We live in a world, one hopes, where algorithms can guide us in making well-informed, strategic decisions based on facts and science. But left unchecked, they can also lead us down the wrong path. And in the law enforcement context, the toll on human rights can be downright scary.

It's why, according to a recent report by Citizen Lab, we should impose a moratorium on the use of algorithmic policing technology in Canada.

"I cannot see a path forward that would explain why we, at this time in our justice system, should be investing in dangerous experiments that seem to have questionable utility in the countries that have already experimented with them," says Kate Robertson, a defence lawyer and co-author of the Citizen Lab report (produced in collaboration with the International Human Rights Program at the University of Toronto’s Faculty of Law.)

Around the world, algorithmic policing has drawn condemnation from critics who say it threatens civil liberties and risks perpetuating racial profiling. The Partnership on AI, a San Francisco-based organization backed by big tech companies, issued a report in 2019 warning of "serious shortcomings" in predictive policing tools used in the U.S. — notably in areas where racial minorities were already over-policed and under-protected.

"The human rights dangers surrounding the use of algorithms in criminal justice are very stark," says Robertson, who warns that Canada should be heeding the lessons learned elsewhere.

Starting in 2012, the Chicago Police Department used what was known as a "heat list" of potential aggressors and victims of gun violence to prevent crime. The list was compiled from variables such as prior criminal history, parole status, and police notes on suspected gang members. In January 2020, the CPD shut down the program because it was found to be unreliable.

Meanwhile, the New Orleans Police Department used software developed by Palantir Technologies, a data analytics company founded by Peter Thiel. Unbeknownst to the public, the software analyzed people's associations using a method known as social network analysis. According to a Verge article cited in the Citizen Lab report, individuals accused of belonging to a gang weren't told about the software's use in their criminal proceedings. That program shut down in 2018.
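The details of the New Orleans system have not been made public, but the general technique of social network analysis can be illustrated with a small, hypothetical sketch: people who appear together in past records become nodes in a graph, and centrality scores are read as a rough measure of how connected someone is. Everything below (the names, the edges, the use of betweenness centrality) is an assumption made for illustration, not a description of Palantir's actual software.

```python
# Illustrative sketch only: a toy co-occurrence graph, not any deployed system.
# People who appear together in hypothetical records become linked nodes;
# centrality scores are then treated as a crude proxy for "connectedness".
import networkx as nx

# Hypothetical pairs of people recorded together (e.g., in the same incident report)
co_occurrences = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("D", "E"), ("E", "F"),
]

G = nx.Graph()
G.add_edges_from(co_occurrences)

# Betweenness centrality: how often a person sits on the shortest path between others
scores = nx.betweenness_centrality(G)

for person, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{person}: {score:.2f}")
```

Note that nothing in such a graph distinguishes a gang associate from a relative, a neighbour, or a bystander; the score reflects only whoever happened to end up in the records.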

With similar objectives in mind, the Los Angeles Police Department relied on Palantir software to analyze a broad mix of data from law enforcement sources as well as information about health and social services, social media, utility bills, and even phone records from pizza chains. Members of the LAPD's civilian oversight panel questioned the program's effectiveness, and it was dismantled in April 2019.

Predictive policing is a misnomer, says Robertson. We usually associate the term with a specific forecast about what will happen in the future. But in reality, law enforcement authorities are merely aggregating different types of datasets, including ones coming from the police. The data then feeds into an algorithm that is trained to recognize patterns, which produces estimates of where crimes are most likely to happen or who is most likely to become a victim or an aggressor. Ultimately, it informs where police patrol, which suspects they target, and how they investigate crime. But the algorithms only "provide very generalized statistical guesses on a broader level," says Robertson. "[It] has to be understood that we're talking about an estimate and not a concrete prediction."
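To make that point concrete, here is a deliberately minimal sketch of place-based "hot spot" estimation, written for illustration only. The grid cells, incident data, and scoring are all hypothetical and are not drawn from any system named in the report; the sketch simply shows that the "estimate" is nothing more than a statistical summary of past records.

```python
# Illustrative sketch only: a naive place-based hot-spot estimate built from
# nothing but historical incident counts. Real systems are more elaborate, but
# the core idea is the same: yesterday's records shape where police look tomorrow.
from collections import Counter

# Hypothetical past incidents, each tagged with a (grid_x, grid_y) city cell
past_incidents = [(1, 2), (1, 2), (1, 2), (0, 0), (3, 4), (1, 2), (0, 0)]

counts = Counter(past_incidents)
total = sum(counts.values())

# The "prediction" is just each cell's historical share of incidents,
# i.e. a generalized statistical guess, not a forecast of a specific crime.
estimates = {cell: n / total for cell, n in counts.items()}

for cell, share in sorted(estimates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"cell {cell}: estimated share {share:.2f}")
```

Because the inputs are past police records, any bias in how those records were generated, such as which neighbourhoods were patrolled most heavily, is carried straight through to the output.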

As sophisticated as the technology appears to be, it is "inseparable from the mistakes of our past," Robertson continues. "Never before have police services attempted to use broad-based generalizations about what has come before, in order to accuse an individual of a future crime. It's burdening individuals with stereotypes and discriminatory practices in the criminal justice and law enforcement systems in Canada in a way that's not compliant, as far as we can tell, with Canada's human rights obligations."

It's unclear, however, how extensive their use is in Canada. According to the Citizen Lab report, the only police services to have deployed predictive policing software are the Vancouver Police Department and the Saskatoon Police Service in Saskatchewan. The latter focuses primarily on identifying children and youth who are at risk of going missing. Still, there is an interest in "extending that program to other areas such as drug trafficking," says Robertson. The analytics lab set up to carry it out is reportedly expected to invest nearly $2 million over two years in the project.

There are also reports of algorithmic policing technologies under development elsewhere in Canada. The City of Calgary reportedly paid $1.4 million to Palantir Technologies Inc. to partner with the Calgary Police Service for three years. And the York Regional Police Service budgeted $1.68 million in 2019 to invest in a facial recognition system. (In 2020, Detroit Police Chief James Craig reported that facial recognition technology used by his department failed to identify people correctly 96 per cent of the time).

The actual cost of investment in algorithmic technology (from development to implementation) is probably significantly higher, says Robertson. "We don't know how much has been poured into experimentation with algorithmic technologies at a total level," she says. Robertson's team researched the issue for close to two years, relying in large part on freedom-of-information requests and interviews with some law enforcement services.

At a time when police budgets are coming under increasing scrutiny across North America, there is a human dimension that we need to consider, she adds. Indeed, the deployment of predictive technologies risks diverting resources away from mental health treatment and other social services with the potential to keep people out of the criminal justice system.

"Given that algorithmic technologies are not attempting to address in any way the root causes of the harm that the criminal justice system is trying to reckon with, the public should have a right to understand that those choices do come at a cost," says Robertson. "And when we're looking at their potential benefits, it's quite clear that there's no way in which these technologies are intending to try to readdress some of those underlying problems."

Among its many recommendations, the report urges governments to impose moratoriums on predictive policing until reliability, necessity, and proportionality are made prerequisite conditions for its use by law enforcement agencies.

And at a minimum, an algorithmic policing program needs to be subject to robust oversight and transparency — to allow the courts to weigh in on the legality of using these techniques. "By definition, if you're in court, looking at the human rights impact of the use of the technology, you're coming at a problem at a late stage," says Robertson. With better transparency, at least the country can have a proper dialogue "about what kind of country we want to live in and what kind of justice reform we want to prioritize."