Reforming privacy in the age of AI

Responsible adoption of AI means we’re going to have to take a serious look at our privacy rules.

Artificial intelligence is starting to look like one of the bad guys, blamed for aiding the creation of the “surveillance” economy and wrecking democracy. That’s bad news for Canada, which counts on AI to power its future. Will policymakers step in?

After years of focusing on the potential benefits, governments around the world are finally acknowledging the possible costs of the way firms currently use machine learning, including the effects of relatively unchecked data collection, the lifeblood of AI. Indeed, the entire business model that runs the internet — AI-enabled individually targeted advertising — is under scrutiny as never before. This was inevitable given the near-constant tech scandals of the past 12 months that have rendered the industry toxic in the eyes of many. Firms are now anxiously awaiting the roll-out of new guidelines.

Following a House of Commons review that called for Canadian policymakers to “act urgently to better protect the privacy of Canadians,” the Trudeau government said it will help set up an international AI panel that will hopefully lead to a greater understanding of the duality of the technology and ways to rectify the more socially polluting side-effects.

“It’s been one piece of bad news after another,” said Blayne Haggart, associate professor of political science at Brock University. “The [Canadian] government is finally realizing that this is an issue they have to deal with.”

The jury is still out on whether this increased scrutiny and reflection will translate into much stiffer regulations, or simply a tweak to current laws. Much will depend on public demands for change. Whether meaningful reforms will come any time soon is also debatable, particularly with a federal election only months away. Still, experts say some sort of shake-up of the country’s 20-year-old data regulations — Precambrian as far as technology is concerned — is in the cards.

“There is growing recognition that [privacy laws] may be in need of updating,” said Patricia Kosseim, counsel in Osler’s Privacy and Data Management Group and formerly senior general counsel at the Office of the Privacy Commissioner of Canada. “But it’s unclear what stakeholders really want. While there seem to be more voices advocating for change today than there were several years ago, they have different reasons why and diverging views as to how.”

One view is that building an international model for cooperation and standards is the more rational approach — one that recognizes that data gathering is unlikely to go away, while acknowledging the need to shore up privacy regulations. “It doesn’t make much sense to create a slew of new regulations at home without seeing what that coalition might agree on internationally,” Robert Gorbet, Chair of the Department of Knowledge Integration at the University of Waterloo, said in an email. “We wouldn’t want to hobble Canadian industry on the one hand, nor fall short of protections on the other.”

The tech sector in general, and AI in particular, is a big deal for Canada, which punches well above its weight in terms of global influence. As the birthplace of one of the most promising and lucrative technologies — the creation of machines and software that can process sophisticated information — Canada was one of the first countries to develop a national strategy on AI, with recognized centres of excellence in Montreal, Toronto, and Vancouver. Canada wants to help frame the debates around privacy regulations.

Few argue with the fact that AI has no agency — it is a tool that merely lets businesses and governments better achieve the goals they have set out. At its best, AI holds the promise of higher productivity, cost savings and insight, thanks to the tech’s ability to find patterns in an ocean of data and automate tedious tasks. Experts say finding cures for diseases like cancer and modelling weather patterns will become easier thanks to AI. With the U.S., China and tech behemoths all committing billions to this area, competition to develop more accurate predictive algorithms will only heat up, in turn ramping up the need for ever-greater access to personal information to feed those algorithms.

While it’s true that AI has all sorts of useful applications, the bulk of projects reside firmly with billion-dollar ad-based firms like Google and Facebook. That’s because machine learning is instrumental not only in keeping customers glued to devices, but also in harvesting data on every possible human experience, and in predicting, and possibly even manipulating, the behaviour of its targets. Geolocation adds even more controversial capabilities.

“They see people as data sponges, and this is inherently problematic from a basic human rights level,” added Haggart.

The fact that algorithms often get it wrong — thanks to pesky bias issues and goofy correlations — hasn’t dampened industry’s enthusiasm either for targeting or for shaping decisions through AI.
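To make those “goofy correlations” concrete, here is a minimal, hypothetical sketch in Python (using numpy and scikit-learn; every variable name and number is invented for illustration) of how a model trained on a skewed sample can assign real weight to an attribute, such as a postal code, that has no causal link to the outcome:

```python
# Hypothetical sketch: a nuisance attribute (postal_code) that merely
# tracks the true driver (income) in a skewed sample ends up carrying
# predictive weight. All names and values here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# True signal: income drives the outcome the model is asked to predict.
income = rng.normal(50, 15, n)
label = (income + rng.normal(0, 5, n) > 55).astype(int)

# Nuisance feature: causally irrelevant, but correlated with income in
# this particular sample, and therefore correlated with the label too.
postal_code = (income > 50).astype(float) + rng.normal(0, 0.3, n)

X = np.column_stack([income, postal_code])
model = LogisticRegression().fit(X, label)

# The fitted model gives postal_code a non-trivial coefficient, so its
# predictions now shift with where people live, not just what they earn.
print(dict(zip(["income", "postal_code"], model.coef_[0])))
```

Nothing in that code “intends” to discriminate; the skew in the data does the work, which is one reason critics argue that auditing training data matters as much as auditing the algorithm itself.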

How did we get here? Lax laws and enforcement, experts say. This is partly due to policymakers’ fear of chilling innovation, as well as people’s willingness to trade the minutiae of their lives for free stuff. Torn between wanting to cash in on AI and wariness of tech’s dark side, politicians around the world are simultaneously funding programs, accepting industry donations, and roasting CEOs in front of government committees.

Some jurisdictions, notably Europe, have already wielded their sharp, red pencils, and this alone may force a change in Canada because of the need to maintain adequacy status with the European Union. Under the General Data Protection Regulation (GDPR), companies must delete personal data on request, and information can only be gathered for a specific business purpose. The legislation is proactive, as opposed to the current system in Canada, where privacy laws operate on a complaint-only basis.
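To illustrate what those two GDPR obligations, purpose limitation and erasure on request, might look like at the application level, here is a minimal, hypothetical Python sketch (the class, method, and purpose names are all invented, and real compliance involves far more than this):

```python
# Hypothetical sketch of GDPR-style handling: records are tagged with a
# declared purpose at collection time, and a data subject's records can
# be erased on request. Names and purposes are invented for illustration.
from dataclasses import dataclass, field

ALLOWED_PURPOSES = {"order_fulfilment", "fraud_prevention"}

@dataclass
class UserStore:
    records: dict = field(default_factory=dict)

    def collect(self, user_id: str, data: dict, purpose: str) -> None:
        # Purpose limitation: refuse data without a declared, approved purpose.
        if purpose not in ALLOWED_PURPOSES:
            raise ValueError(f"no lawful basis recorded for purpose {purpose!r}")
        self.records.setdefault(user_id, []).append({"data": data, "purpose": purpose})

    def erase(self, user_id: str) -> None:
        # Right to erasure: delete everything held on this user, on request.
        self.records.pop(user_id, None)

store = UserStore()
store.collect("u42", {"address": "..."}, purpose="order_fulfilment")
store.erase("u42")  # the "delete data on request" obligation
```

The design point worth noting is that the purpose check happens at write time rather than read time, mirroring the regulation’s requirement that a lawful basis exist before collection, not after.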

In Canada, this also means taking another look at the Personal Information Protection and Electronic Documents Act, or PIPEDA, which governs how firms gather personal information. It may mean giving more firepower to the Privacy Commissioner, whose office is currently constrained in its ability to enforce the rules.

“At least until recently, the government believed that the existing [rules] along with the Charter could handle any of the privacy and data ownership/use issues that might arise with artificial intelligence and data-oriented tech companies more generally. There is a sense among privacy experts and ethicists that these might not be sufficient,” Daniel Munro, at the University of Toronto’s Munk School of Global Affairs and Public Policy, said in an email.

Now that GDPR is in force and the tech roof hasn’t caved in, it’s much easier for activists elsewhere to call for similar legislation, said Timothy Libert of Carnegie Mellon University’s School of Computer Science in Pittsburgh. “It’s just a question of flipping a switch. It’s already proven that Europe has not shut down as a result of GDPR,” Libert said. “Are they going to make companies turn on these features for North Americans? It’s possible, but the question is: how strong are the lobbyists?”

It’s also unclear to what extent the newly established AI panel, modelled on the Intergovernmental Panel on Climate Change, will broach the more sensitive issues, or whether it will stick to broader themes. So far, few details have emerged beyond the very broad mandate to “support and guide the responsible adoption of AI that is human-centric and grounded in human rights, inclusion, diversity, innovation, and economic growth.” The possible themes for the panel’s activities, however, list data collection first.

As with all things tech, experts say the Canadian government is unlikely to move very quickly. But global change may come anyway. The threat of increased regulation, potential lawsuits, and consistently horrible press is having an effect, with investors already eyeing tech firms’ non-advertising bets. As more and more people turn to ad blockers to protect their personal lives, companies are realizing that spying on their customers may not be such a good business model after all.

“In the long term, I just don’t think it’s a sustainable model,” said Carnegie Mellon’s Libert.