Getting the law up to speed on AI

ChatGPT and DALL·E are game changers. The legal profession needs to step in to help mitigate the growing risks AI poses to the public interest.

[Image: DALL·E drawing of a lawyer speaking to an AI machine taking notes]

It is often remarked that technologies advance at a much faster pace than the laws that govern them. The newest advances in generative AI, ChatGPT and DALL·E, illustrate the point, raising difficult questions that lawmakers will need to answer sooner rather than later. Both apps present challenges for Canadian law as currently constructed.

Over the past 15 years, the field of artificial intelligence and machine learning has seen enormous growth and change. Some products have become household names: Siri in our pockets, Alexa on the nation's shelves. Their use has become common among younger and older generations alike. We rely daily on the recommendation algorithms behind Netflix and YouTube, barely giving them a thought. Their ubiquity hardly seems worth debating in the House of Commons. But as these technologies get exponentially more sophisticated each year, we need to consider the dangers they pose to the public interest.

ChatGPT and DALL·E, both launched in 2022, could mark the beginning of a new era. Lawyers, be warned.

It is reasonable to expect that the two apps will revolutionize the way ordinary Canadians think about technology and artificial intelligence. ChatGPT is a highly sophisticated chatbot that responds to queries and prompts, carrying the context of a conversation forward as it goes. In a class of its own, it can produce polished essays, short stories and research responses of a literary quality no widely available software had previously matched. A prototype of the tool was released in November 2022.

DALL·E serves a similar function, but for images. Pronounced "Dolly," it is a deep-learning artificial intelligence system capable of generating original, detailed creative images of almost any kind from simple text prompts. Like ChatGPT, it is well ahead of the field: DALL·E can produce original Picasso-esque or Michelangelo-esque images in seconds. Its most recent version was released in July 2022.

The instant and sophisticated nature of these technologies makes them appealing for mass use but difficult to regulate. The ramifications for criminal law, for instance, could be significant: these tools could revolutionize cybercrime and demand an equally novel response from lawmakers and regulators. The impacts will be felt in labour law, too. AI that can write essays and perform research could cause seismic shifts in the labour market and upset the compromises that have defined the field to this point. The most obvious area of impact, though, will be intellectual property (IP).

If these technologies generate images or text substantially similar to existing copyrighted works, they could infringe the copyright in those works. The same goes for trademark infringement, where AI-generated images and text could violate the rights of owners of substantially similar marks. Seeking permission from the original author is the obvious way to mitigate these risks. But it is impractical for the ordinary consumer generating images, who would face prohibitive time and cost if they intend to stockpile or mass-produce text or images.

A more practical solution may be legislation or regulation that mitigates the risk upfront. Unfortunately, IP is only one of many areas requiring legislative attention. Privacy law will also be exposed to new risks: if AI-generated images or text depict real people, and the content is shared or distributed without their consent, it raises significant alarms.

Safeguards will be necessary. While the new technologies hold enormous promise, they also present the risk of extreme harm — harm that likely extends beyond what we can currently conceive. We’ll have to implement appropriate controls to prevent the misuse of these new technologies. 

We do not pretend to have all the answers. Our intention is simply to raise awareness of this rapidly iterating and unprecedented technology and the corresponding risks to the public interest. The practice of law will inevitably feel its impact, but it is far from certain that substantive law will adapt to respond to the new risks unless we choose to make it so. As legal professionals, we have an ethical and professional obligation to reflect on the implications of these powerful global technologies and to consider carefully how to respond.

A good starting point is to build requirements for transparency, accountability, safety, fairness and human oversight into our laws. As the technology progresses, other needs will become apparent.

It is also essential to consider the specific context and goals of an AI system when developing the legal framework to regulate it. The framework for AI in the healthcare industry, for example, may differ from the framework for AI in the financial industry. For public-facing consumer products such as text- and image-generation services, however, the law should guarantee some baseline protective measures for the public interest.

As in every area of law, there is no single answer to how we should regulate novel circumstances. But as legal professionals, we must protect the public interest, and do so in a timely fashion. We must anticipate how AI will affect the public and work to reduce those risks now. We will not fulfil our role as guardians of the public interest by sitting around and waiting.

It is time again for the legal profession to step up. This is no minor issue. As sophisticated as AI technologies are today, they are nothing compared to the AI of the near future. The technology's growth rate is exponential.

We need legal think tanks and lobby groups to consider these risks and to involve technical experts, machine-learning engineers and coders in the search for solutions. We need to operate outside our silos.

There are three questions that every lawyer concerned about AI should consider: 

How should substantive law change to protect the public interest with respect to AI technology? How will the practice of law change, and how can we adapt before those changes arrive? And what other risks could artificial intelligence tools pose to the public that we haven't thought of yet?