
Is Gen AI worth the hassle and risk?

Given its sticky and persistent problems, including a lack of transparency, some question whether CEOs and boards can trust it


New-generation artificial intelligence tools like ChatGPT represent huge opportunities for the business community in Canada and beyond.

But can CEOs and their boards trust them?

These programs — which can spew out humanlike text, images, music, and code in seconds — can write reports, compose songs, and even create websites.

Some analysts estimate that generative artificial intelligence (GenAI) could add as much as US$4.4 trillion in value annually to companies by cutting costs and increasing productivity.

But this new technology comes with a lot of baggage, like offering up inaccurate content, amplifying biases and possibly breaking copyright and privacy laws. GenAI applications are also inherently opaque, making them difficult to scrutinize and correct when things go sideways.

Some corporate governance experts in Canada say GenAI’s challenges are not unique, and the potential gains are worth it. By assigning duties commensurate with the risks and backing them with appropriate supervision, businesses can incorporate these programs safely. What’s more, most enterprise leaders understand that these tools come with liabilities and are realistic about their prospects.

“At the end of the day, it’s just another piece of technology,” says Andrew MacDougall, a partner at Osler and an expert in corporate governance.

“Is there accountability? Are there the right processes and procedures? The important thing is for CEOs to put in place sufficient guardrails to reduce the risk to an acceptable level. It’s really not any different from a CEO’s normal situation.”  

That said, firms must evaluate the risks on a case-by-case basis.

Striking the right balance between risk and opportunity matters a lot, and governments are scrambling to formulate frameworks for regulating fast-moving AI systems. In March, the European Parliament passed the AI Act, enacting the world’s first comprehensive set of laws on the use of AI from a major regulator.

Canada and the US are considering legislation that would offer protections without chilling innovation. While American companies still lead decisively in this space, with OpenAI, Google and Meta occupying the top spots, Canadian firms punch well above their weight in helping to produce the technology.

AI is a top priority for the Trudeau government, which recently announced federal investment in Canadian AI infrastructure, computing capacity and safeguards.

Some IT experts, however, question whether enterprise leaders, both in tech and non-tech settings, have the necessary knowledge and experience to assess the risks of GenAI, particularly given that these tools have been in general circulation for less than two years.

“The sudden jump in the breadth and performance ability has outstripped our current evaluation methodologies,” says Mohamed Abdalla, a professor at the University of Alberta specializing in AI.

“We do not know what constitutes ‘right mitigations.’ We don’t yet possess mechanisms to evaluate AI systems’ reliability, fairness, and security.”

OpenAI launched ChatGPT in November 2022, reaching one million users in just five days. Since then, competitors have fallen over themselves to create similar products. GenAI is now considered the fastest-spreading technology humankind has ever seen.

However, there is no easy fix for some of GenAI’s stickiest problems, particularly hallucinations: outputs that sound plausible, are delivered with confidence, and are completely false. New York City discovered how embarrassing this can be when its AI chatbot was caught telling businesses to break the law.

Abdalla says that despite the vast time and resources GenAI creators have poured into preventing misuse and hallucinations, these issues remain unresolved.

Firms “don’t have a firm methodology to reliably ensure stable, predictable, and ‘safe’ output in all settings,” he says. “This is an active research area with no defined best practices.”

Some skeptics say GenAI is a nascent, untrustworthy, and expensive technology that may not be worth the hassle, investment, or risk. After all, the last decade has seen several potentially “revolutionary” technologies fizzle out after huge excitement, including self-driving cars, the Metaverse, and enterprise blockchain.

Part of the problem is that these models are not designed to give users truthful information or to reach conclusions through logic and experience. GenAI uses algorithms and large language models (LLMs) to predict and generate its output. The systems do not “think” or “write” in any conventional sense, even though their responses are very humanlike.

“That’s pretty much why those LLMs are subject to hallucinations,” Yann LeCun, chief AI scientist at Meta and one of the “godfathers of AI,” told an audience at the New York Academy of Sciences recently.

 “They can’t really reason; they can’t really plan. They basically just produce one word after the other, without really thinking in advance about what they’re going to say.”
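To make that concrete, here is a minimal, hypothetical sketch in Python of the word-by-word loop LeCun describes. It stands in for a real neural network with a toy lookup table; the words and probabilities are invented for illustration, and only the structure matters: each word is sampled from a distribution conditioned on the previous word, with no plan and no check against the facts.

import random

# Toy "language model": for each word, the probability of the next word.
# These words and probabilities are invented purely for illustration.
bigram_probs = {
    "the":    {"report": 0.5, "board": 0.5},
    "report": {"was": 1.0},
    "board":  {"approved": 1.0},
    "was":    {"accurate": 0.6, "wrong": 0.4},
}

def generate(start, max_words=5):
    """Produce text one word at a time, sampling each next word from the
    conditional distribution. There is no planning and no notion of truth,
    only 'what word tends to follow this one'."""
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:
            break  # no known continuation for this word
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the report was wrong": fluent, confident, unverified

A real LLM replaces the lookup table with a neural network over tens of thousands of tokens, but the generation loop is essentially the same, which is why output can read as fluent and confident yet still be false.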

The training data for foundation models comes from all over the internet, which is not always wholesome or trustworthy. Some commentators even worry about intentional data poisoning, or that human-generated data will run out and models will have to rely on AI-generated data.

There’s also the “black box” issue: AI’s lack of transparency and interpretability. CEOs and their teams, even those with engineering backgrounds, have little hope of understanding how GenAI arrives at its outputs, which are often a mystery even to their creators.

MacDougall says risk assessments of GenAI are at the top of boards’ agendas, particularly around privacy protection.

“It is one of those hot topics at every board session,” he says.

However, he does not expect GenAI to substantially change directors’ and officers’ liability insurance, also known as D&O insurance, which covers the cost of claims against directors. Some firms, though, have flagged AI as a potential source of increased risk.

And what’s the risk of not adopting the technology? Waiting for more clarity could prove costly. Microsoft and Google posted higher-than-expected earnings in April, driven by demand for AI services, including the Copilot AI assistant and the Gemini chatbot.

It’s unclear to what extent corporate Canada has embraced GenAI. Statistics Canada estimates that the adoption rate is about 10 per cent overall, rising to almost one in four businesses in information and cultural industries.

Still, these tools are on the radar of most Canadian firms, particularly those in the tech sector, says Sam Ip, a partner in Osler’s technology group and an expert in artificial intelligence law.

“Almost every one of my clients is using it.”

It is also unknown how many individual workers are using these chatbots, with or without permission. Earlier this year, the job-ratings website Glassdoor estimated that 62 per cent of professionals were using ChatGPT or similar tools at work. Some firms, including Samsung, have decided that the risks are not worth it and have restricted their use.

For its part, the Canadian federal government has advised its employees to limit use to low-stakes tasks like brainstorming and memo writing, where reliability and truthfulness are less of a concern.

According to its AI guide, “Institutions must be cautious and evaluate the risks before they start using them.”

Further, “They should also limit the use of these tools to instances where they can manage the risks effectively.”