
Keeping humans in the loop

How can we ensure legal professionals continue to think for themselves instead of rubber-stamping the decisions of our AI tools?


“Software is eating the world,” Marc Andreessen wrote some time ago. That’s not necessarily a bad thing. Today, machine-learning algorithms are improving and automating human performance in sectors ranging from medicine and financial services to manufacturing and retail.

Even our judicial institutions are looking at machine learning to improve administrative decision-making and service delivery. Last year, Justice Canada launched an AI pilot project intended to help guide decision-makers in immigration, pension benefit, and tax cases. Some provinces are looking at predictive risk-assessment tools to guide bail decisions; some US courts are already using them.

But as bureaucratic and legal decision makers rely increasingly on algorithms, we could be headed straight for a moral and political legitimacy crisis, says Pim Haselager, a philosopher and professor at Radboud University. Speaking at last month’s International Conference on Artificial Intelligence and Law in Montreal, Haselager also raised troubling questions about what the rise of “algorithmic governance” means for legal professionals.

Surely AI can be designed to help humans make better decisions. There is a risk, however, that we end up delegating too much decision-making to algorithms, which in turn could erode essential human skills. Just as driverless cars could dull our driving abilities, predictive algorithms in law could degrade our competence in making legal and moral decisions.

Avoiding that fate will take effort, says Haselager. The main challenge will be to keep humans well connected to the information and decision-making loops that involve machine learning.

There are both practical and principled reasons for this. On the practical side, we need to recognize that “some decisions are still most efficiently or reliably performed by humans” rather than by machine learning, says Haselager. On the principled side, some decisions and actions really should only be performed by humans, such as sentencing offenders or deciding to kill enemy combatants.

Ultimately, that means ensuring that the human role in decision-making remains a critical part of the process, whether in environments where AI-based decisions become useful only with human command, or “under the supervision of a human operator who can override the robot’s actions.”
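To make that supervision idea concrete, here is a minimal sketch, in Python, of what a human-override loop might look like. Everything in it (the Recommendation object, the human_review function, the fields that get logged) is a hypothetical illustration, not a description of any system Haselager or Justice Canada is building.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical output of a predictive tool: advisory only, never final.
@dataclass
class Recommendation:
    case_id: str
    suggested_outcome: str
    confidence: float
    rationale: str

def human_review(rec: Recommendation, reviewer: str) -> dict:
    """Require an explicit, reasoned human decision before anything is acted on."""
    print(f"Case {rec.case_id}: tool suggests '{rec.suggested_outcome}' "
          f"(confidence {rec.confidence:.0%})")
    print(f"Tool's rationale: {rec.rationale}")

    choice = input("Accept the suggestion (a) or override it (o)? ").strip().lower()
    final = rec.suggested_outcome if choice == "a" else input("Your decision instead: ").strip()
    reasons = input("State your own reasons for this decision: ").strip()

    # The audit record captures the human's reasoning, not just a click.
    return {
        "case_id": rec.case_id,
        "tool_suggestion": rec.suggested_outcome,
        "final_outcome": final,
        "overridden": choice != "a",
        "reviewer": reviewer,
        "reviewer_reasons": reasons,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
```

The design choice worth noticing is the audit record: the reviewer’s own reasons are logged alongside the tool’s suggestion, so responsibility stays attached to a named person rather than to the algorithm.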

Otherwise, says Haselager, we run the risk of creating an accountability vacuum, and therefore a legitimacy crisis.

But the difficulty lies in figuring out how to avoid human intervention in decision-making becoming a mere “stamp of approval,” he adds. “Human supervision of AI as a mere stamp of approval not only leads to accountability confusion but can also undermine professional pride and commitment.”

So what does it mean to be in control? It’s one thing to believe we are in control because we are acting intentionally, whether in our own decisions or in contributing to some machine-learning activity. But what if the human decision-maker happens to be struggling with personal conflict at home and is psychologically ill-disposed to respond appropriately on a given day? Or what if the person is ill-equipped to appreciate the real capabilities of the system they are interacting with? And what if the person decides to offload their moral responsibility onto the machine?

These situations are what Haselager calls technology-driven entrapment. If a technology is designed in such a way that people, for general psychological reasons, can’t focus on exercising meaningful control, then accountability will be deficient “by design.”

It’s easy to imagine how difficult it can be for a human to deviate from an algorithmic recommendation. Many of us are already slaves to our GPS apps on the road, cursing ourselves when we dismiss the voice navigation only to end up stuck in traffic after a wrong turn. Now imagine a judge being asked, after a sentencing decision that has provoked a public outcry, why they didn’t follow the machine’s recommendation.

“There is a potential for scapegoating proximate human beings because conventional responsibility structures struggle to apportion responsibility to artificial entities,” says Haselager. And as more AI tools come into use, fewer human operators will be able to override their decisions. As with digital health tools in medicine, legal predictive tools will require that we develop standards for how we use them.

What’s more, to avoid situations where humans are inclined to abdicate their responsibility, Haselager proposes that we not introduce AI support tools into legal decision-making unless a proper organizational structure is in place, one that clearly defines the human’s right to refuse a recommendation. We need a clear understanding not just of how and when to use the interface, but also of when not to use it.

“Decision support systems should be accompanied by empowering systems” that take into account human strengths and weaknesses, Haselager concludes, suggesting that we design machine-learning tools with contestability features built in. In addition to systems that think “along” a line of reasoning, we would also develop systems that think “against” it.
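One way to picture that “thinking against” idea is a tool that must surface the strongest objections to its own recommendation before a human can sign off. The toy sketch below, again in Python, assumes hypothetical recommend and contest functions and invented case features; it illustrates the contestability pattern only, not any system discussed at the conference.

```python
from dataclasses import dataclass

# Invented, drastically simplified case features, for illustration only.
@dataclass
class CaseFacts:
    prior_offences: int
    failed_to_appear: bool
    community_ties: str  # e.g. "strong" or "weak"

def recommend(facts: CaseFacts) -> tuple[str, list[str]]:
    """Toy 'thinking along' step: a recommendation plus its supporting reasons."""
    reasons = []
    if facts.prior_offences > 2:
        reasons.append("multiple prior offences")
    if facts.failed_to_appear:
        reasons.append("previous failure to appear")
    if reasons:
        return "detain", reasons
    return "release", ["few prior offences", "no history of failing to appear"]

def contest(facts: CaseFacts, recommendation: str) -> list[str]:
    """Toy 'thinking against' step: the strongest reasons to doubt the recommendation."""
    objections = []
    if recommendation == "detain" and facts.community_ties == "strong":
        objections.append("strong community ties weigh against detention")
    if recommendation == "release" and facts.prior_offences > 0:
        objections.append("prior offences weigh against release")
    return objections or ["no salient counter-considerations found; human judgment still required"]

facts = CaseFacts(prior_offences=3, failed_to_appear=False, community_ties="strong")
outcome, reasons = recommend(facts)
print("Recommendation:", outcome, "| supporting reasons:", reasons)
print("Counter-arguments the reviewer must address:", contest(facts, outcome))
```

Requiring the reviewer to respond to the objections before the decision is recorded is what keeps the human doing the thinking, rather than merely approving it.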

After all, an idle mind is one that doesn’t think.