
A new model for regulating AI

Could global regulatory markets be better at enforcing government policies?


On June 10, Amazon announced it was imposing a moratorium on police use of its facial recognition technology. The system, called Rekognition, is infamously glitchy; it once identified Oprah Winfrey as a man and mixed up the headshots of 28 members of the U.S. Congress with faces in a mugshot database.

Embarrassing — but maybe not as embarrassing as the public statement Amazon issued announcing the moratorium, which appeared to blame governments for not making sure facial recognition AI systems work better.

“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” it said.

Every new form of technology comes with risks. The problem with artificial intelligence is that it’s moving faster than regulation can keep up. Cash-strapped governments and their regulatory agencies are struggling to cope with an onslaught of new technologies, the effects of which they may not entirely understand. The consequences — for everything from politics to privacy to public safety — could be grim.

A new academic paper proposes a novel approach. In it, Gillian Hadfield of U of T law and Jack Clark of the California-based research firm OpenAI pitch a radical revamp of the regulatory field, away from prescriptive regulations imposed by government bodies and toward a public-private hybrid — “global regulatory markets.”

Here’s the idea: instead of directly regulating the corporations developing new AI tech, governments would create “regulatory markets” where private-sector regulators would compete for the right to regulate specific AI fields.

Governments would not draft regulations. Instead, they’d license private regulators and set standards for them to meet — overall safety standards for autonomous vehicles, for example. They would then establish regulatory markets and require the regulatory “targets” — the firms developing AI products — to purchase regulatory “services” from private companies.

These private regulators would compete with each other for the right to regulate the targets. Ideally, the markets would include as many competing players as possible, to encourage them to come up with new ways to meet the standards set by governments at the lowest possible cost (the incentive created by allowing the targets to hire their own regulators).
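The division of labour in that arrangement can be summarized in a few lines of code. The sketch below is purely illustrative (none of these class or field names appear in the paper), but it captures the three roles the authors describe: a government that sets outcome standards and issues licences, private regulators that hold those licences, and target firms that must hire a licensed regulator to stay in the market.

```python
# Purely illustrative data model of the three roles; the names are invented, not from the paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OutcomeStandard:
    """Set by government: what must be achieved, not how."""
    name: str
    max_incidents_per_million_km: float


@dataclass
class Regulator:
    """A private regulator licensed by government to enforce a standard."""
    name: str
    licensed_for: OutcomeStandard
    licence_active: bool = True


@dataclass
class Target:
    """A firm developing AI products; it must purchase regulatory services."""
    name: str
    regulator: Optional[Regulator] = None


def may_operate(target: Target, standard: OutcomeStandard) -> bool:
    """A target stays in the market only if it has hired a licensed
    regulator for the relevant standard."""
    return (
        target.regulator is not None
        and target.regulator.licence_active
        and target.regulator.licensed_for is standard
    )


# Hypothetical example: one standard, one licensed regulator, one AV maker.
av_safety = OutcomeStandard("autonomous-vehicle safety", max_incidents_per_million_km=0.5)
reg = Regulator("SafeRoads Compliance Inc.", licensed_for=av_safety)
maker = Target("AcmeAV", regulator=reg)
print(may_operate(maker, av_safety))   # True
```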

In doing so, private regulators wouldn’t have to rely on the prescriptive model of government regulation. They could innovate — they’d have to, in order to compete. Take the example of self-driving cars: the paper suggests a private regulator could use machine learning to analyze the data produced by the vehicles to pinpoint behaviour that elevates the risk of accidents beyond the benchmark set by government. The regulator could require the company developing the cars to address the problem in-house, or it could develop technology of its own.
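The paper doesn’t spell out a particular technique, but a toy version of that monitoring step might look something like the sketch below: the regulator aggregates fleet telemetry, computes a crude per-vehicle risk score (the features, weights, and benchmark here are invented for illustration), and flags any vehicle whose estimated risk exceeds the government’s standard so the developer can be told to fix it.

```python
# Illustrative only: a toy risk screen a private regulator might run on
# fleet telemetry. The feature names, weights, and benchmark are all invented.
from dataclasses import dataclass


@dataclass
class VehicleTelemetry:
    vehicle_id: str
    km_driven: float
    hard_brakes: int   # sudden decelerations logged on board
    near_misses: int   # disengagements or close-call events


BENCHMARK_RISK = 0.5   # hypothetical cap: estimated incidents per million km


def estimated_risk(t: VehicleTelemetry) -> float:
    """Crude linear risk score per million km. A real regulator might
    instead fit these weights with machine learning on labelled incident data."""
    per_million = 1_000_000 / max(t.km_driven, 1.0)
    return (0.001 * t.hard_brakes + 0.05 * t.near_misses) * per_million


def flag_for_remediation(fleet: list[VehicleTelemetry]) -> list[str]:
    """IDs of vehicles whose estimated risk exceeds the government benchmark."""
    return [t.vehicle_id for t in fleet if estimated_risk(t) > BENCHMARK_RISK]


fleet = [
    VehicleTelemetry("AV-001", km_driven=120_000, hard_brakes=40, near_misses=0),
    VehicleTelemetry("AV-002", km_driven=80_000, hard_brakes=15, near_misses=3),
]
print(flag_for_remediation(fleet))   # ['AV-002']
```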

Under this model, governments only decide what they want (lower accident rates, better privacy protections), not how they get it. The “how” is left up to the private regulators, which are accountable to the government through licensing. Governments regulate the regulatory markets, keeping tabs on outcomes by gathering statistics or auditing, while the regulators do the rest.
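That oversight layer can be pictured concretely too. Assuming regulators report outcome statistics for the firms they oversee (an assumption for illustration, not a detail from the paper), a minimal government audit might compare each regulator’s observed incident rate against the licensed standard and flag the ones whose licences deserve a second look:

```python
# Hypothetical outcome audit a government body might run over reported statistics.
def audit_regulators(reports: dict[str, dict[str, float]],
                     max_rate: float) -> list[str]:
    """Return the regulators whose overseen fleets exceed the incident-rate standard.

    `reports` maps regulator name -> {"incidents": ..., "million_km": ...}.
    """
    flagged = []
    for regulator, r in reports.items():
        observed_rate = r["incidents"] / r["million_km"]
        if observed_rate > max_rate:
            flagged.append(regulator)
    return flagged


reports = {
    "SafeRoads Compliance Inc.": {"incidents": 3.0, "million_km": 10.0},  # rate 0.30
    "FastPass Certifiers Ltd.":  {"incidents": 9.0, "million_km": 10.0},  # rate 0.90
}
print(audit_regulators(reports, max_rate=0.5))   # ['FastPass Certifiers Ltd.']
```

In this toy run, only the second regulator’s fleets exceed the standard, so its licence would come up for review.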

In theory, the model’s strength lies in its use of market competition to put cutting-edge technology to work in regulation — something government regulators may lack the resources to do. The paper envisions regulatory “start-ups” splitting off from the AI-developing corporations themselves, entering private regulatory markets and using AI tools built by those same firms to monitor compliance with government standards.

The paper’s authors acknowledge the model isn’t free of risk. Markets fall short when they don’t have enough players; in AI fields dominated by only a few large companies, it might be challenging to assemble a regulatory market large enough to foster competition.

Regulatory “capture” — the corruption of a regulatory body by its targets — is another real risk, given the private regulator’s interest in retaining the target’s business. “Protecting the integrity of regulation,” says the paper, “will require governments to monitor the results achieved by private regulators and to have effective threats to condition, suspend, or revoke the licenses of regulators that skimp on performance in order to win the business of targets.”

“Absolutely, (regulatory capture) is a real risk here,” said Maya Medeiros of Norton Rose Fulbright Canada LLP, an intellectual property lawyer who has done a lot of work in tech sectors. “That necessary independence is not visible here.”

Another flaw in this model is one inherent in all regulatory regimes: they’re only as good as the governments that set their rules. The pursuit of new AI technologies is an arena of great-power competition, just as nuclear weapons were over a generation ago. Regulatory markets might work well in jurisdictions where there’s a bright line between the public and private sectors — but what might happen to them in countries where there is no such clear division, such as China?

“I’m not sure that this paper fully addresses that problem. I don’t think it does,” said Kenneth Jull of Gardiner Roberts LLP, whose practice covers corporate commercial and technology law.

“Look at 5G, at the claims that it could be used in government intelligence gathering. The debate over that technology is tainted by allegations regarding Huawei’s relationship with Beijing. It’s all very political now.”