Hot AI Summer has certainly been a ride. It’s felt like every week we’ve seen significant AI news — new laws, international agreements, and an inexorable acceleration of the tech itself. It’s been a lot to absorb.
With the federal Parliament set to begin its fall sitting in just a few weeks, this is the perfect time to get up to speed with Bill C-27, which includes Canada’s draft AI law, the Artificial Intelligence and Data Act (AIDA). C-27 is a big bill that also includes Canada’s first significant update to private sector privacy law in a generation (well, the first since this government’s last attempt died at the dissolution of the last Parliament), and the Industry and Science Committee (INDU) will have its hands full getting to grips with this very consequential piece of legislation.
Like the EU’s AI Act, AIDA is structured around regulating a particular category of systems, in this case what the law calls ‘high-impact’ systems. Unlike the EU’s law, however, the bill doesn’t define ‘high-impact.’ That is being left to the regulatory process that will follow passage. Indeed, beyond indicating that the law seeks to protect Canadians from biased outputs and harms, there isn’t much about the substance of any rules.
In a nutshell, the law creates the “high-impact” designation and says that those high-impact AI systems will be subject to regulations. But what counts as high impact, and what regulations will govern those systems, will all be left to bureaucrats to write later.
At one level, this is quite sensible. Legislation is cumbersome, and Parliament’s throughput is slow. That is not inherently a flaw, but it does make constant legislative tweaks very onerous. Moving the detailed, prescriptive elements of AI governance out of the legislation itself is therefore a reasonable choice.
On the other hand, though, AI investment decisions are being made now. Uncertainty is bad. The prospect of a Canadian framework law that could include future rules that put us out of step with other major markets was not ideal. To elaborate on their intentions, ISED took the somewhat unusual step of publishing a (helpful) companion document to the law that puts a bit more meat on the bones of the government’s game plan on AI governance.
One key takeaway from the companion document is that the government wants its definition of ‘high-impact’ to maximize interoperability with laws in other jurisdictions, and to consider a range of factors, including scale of deployment and risk to health and safety. That’s a very good thing. Another important nugget is the government’s proposed timeline, with consultations unfolding after passage and regulations coming into force in 2025 at the earliest.
A new AI and Data Commissioner provided for in AIDA would be responsible for administration, including compliance and enforcement. The companion document indicates that the Commissioner would take an initially light-touch or educational approach to those elements of their mandate.
The companion document also gives some insight into what regulation might look like. Businesses would be required to identify and address human rights risks and other harms posed by their uses of AI, and may have to put measures in place to ensure human oversight, transparency, accountability, and robustness. The regulatory burden for a given business might also vary depending on how it interacts with high-impact systems, i.e. as a developer versus as an end user.
Before we get too deep into what we think AIDA needs, this is a good occasion to reflect on the stakes. A lot of people think AI can do anything and even everything. But so far it hasn’t demonstrated any ability to fundamentally change the economics of innovation.
The knowledge-based economy has a winner-take-most structure in which success is a self-building moat. Intellectual property rights (IPRs, e.g. patents) lock out potential competitors. Network effects in users and in data are tremendously impactful: each marginal user or data point makes all of the others more valuable, because it enables new connections or allows new correlations to be drawn.
Canada has strengths in AI, particularly in research. But we’re not as strong in the most critical element — scaling companies. Canada has had an AI strategy since 2017. We were an early mover, one of the first countries to put one in place. But unfortunately for us, this strategy — focused as it was on attracting top global talent to do their research here — was missing this important lens. Of the 250 or so IPRs generated through the strategy, 75% are owned by foreign entities, and especially American tech giants like Uber, Facebook and Google.
If we accept that success in the innovation economy tends to breed success, then the obvious public policy priority is to do what we can to grow competitive Canadian companies into global players at this extremely promising technological frontier. This is not an easy thing to achieve, but late attempts to catch up will be even harder.
So how can AIDA lay a strong foundation for globally competitive Canadian companies?
Tuning Up AIDA
The EU approach is relatively close to ours but a bit more cumbersome. The US has a disordered approach with too light a touch, one that won’t promote trust in American products. The UK is pursuing something of an outlier approach compared to its major partners, which is a drag on its efforts even if the approach itself is clever.
Canada has an opportunity to bring the best of each system together and create a trusted framework and an economical and nimble approach to regulation.
The first step is promoting trust. AIDA’s overall approach here is good but could be strengthened. These technologies are novel, and they will have big impacts. The law should go out of its way to clearly state what Canadians can expect in terms of protections and obligations, perhaps in a preamble. The second thing it should do is create a parallel institution to the government’s AI and Data Commissioner: a new Parliamentary Science and Technology Officer, housed in Parliament, to inform Canadians and parliamentarians about new technologies and their impact through a public interest lens. Not only would this give Canadians the assurance of a watchdog, but a mandate to inform parliamentarians would also ensure that future legislation is stronger.
The second critical piece is the regulations themselves. These should aim to be as clear and simple as possible, and should allow for the use of [regulatory sandboxes](https://letstalkfederalregulations.ca/sandboxes?tool=story_telling_tool#:~:text=Also known as a regulatory,regulatory changes or decisions – all) to pilot novel uses. Very importantly, the law should allow, as the EU AI Act does, for the recognition of standards within its scope. Where a transparent, comprehensive standard exists that our public service deems adequate to the spirit of our policy approach, recognizing it would spare the government from developing its own regulations and help us roll out a more complete apparatus faster; essentially, it would put us in the enviable position of copying the homework of the best experts. That standards recognition usually means easier interoperability across national borders is a (considerable) bonus.
Beyond the Law
Once the law is passed, the hard work will start. We should aim to roll out a fairly complete code as quickly as feasible; things like standards recognition, mentioned above, will help considerably with that. But so will prioritizing the development of regulations around the most sensitive uses first. What we essentially need is a ‘minimum viable product’ that industry can plan around.
A timeline under which this framework doesn’t come into force until 2025 is far too slow.
Finally, we’ll want to ensure as much international alignment as we can while maintaining trust in our systems — but more speed and clarity earlier in the process will help us set the terms of negotiations and working groups more than lagging would.
There is an impulse, to which I am sympathetic, that all of this needs to slow down, and that what these technologies need and deserve is careful deliberation. Unfortunately, neither AI technologies nor the economics of innovation will wait for anyone to catch up. And the barriers to use and adoption are low: all it takes to develop or use AI tools is an internet connection (and expensive computing capacity). A strong, flexible governance framework and the strong companies it enables are the best assets Canada can bring to the global AI gold rush.
CCI Mooseworks is the Canadian innovation and economic policy newsletter of the Council of Canadian Innovators — a national business council dedicated exclusively to high-growth scale-up technology companies.