Hot AI Summer, Pt. 1: The US and UK

July 18, 2023

By Abu Kamat

It’s hot AI summer.

After decades of foundational research and development, artificial intelligence (AI) has in recent years been rapidly transforming the way we live, work, and interact with each other. At the same time, governments burned by the social media boom of the past decade have learned that such rapid technological advancement can have a dark side. Consequently, many countries around the world, including Canada, are developing laws and regulations to govern the use of AI technologies.

Just last Thursday, EU users finally got access to Google’s Bard AI assistant, months after users in the US, UK and many other countries. The delay stemmed from the EU’s GDPR privacy law and forthcoming AI rules (more on those next month!), and the European rollout came with additional data and privacy protections. Canada is now one of the last countries without access, but for once this has nothing to do with a light- or heavy-touch approach to regulation: the federal government is locked in a knock-down fight over news funding with Google, Meta and other web giants, and Google is choosing to retaliate.

The Canadian federal government recently tabled the Artificial Intelligence and Data Act (AIDA) as part of the broader Bill C-27, the Digital Charter Implementation Act. If passed, AIDA will regulate the design, development, and use of “high-impact” AI systems in the private sector. Other countries have already taken steps to regulate AI, and as Parliament considers AIDA, government and industry decision-makers should weigh how those other jurisdictions have approached the problem.

Today, let’s start with a look at the state of play in the UK and US, where early, patchwork action across jurisdictional and sector lines is giving way to a more concerted approach. Later this summer, we’ll take a look at the EU and its more rigid regulatory approach.

United Kingdom: A Sector-by-Sector Approach

The UK’s Conservative government, despite frequent changes at the top since 2016, has consistently emphasized the knowledge economy as the cornerstone of a post-Brexit prosperity agenda. Consequently, it is moving faster than most to regulate AI.

In 2021, the UK government published its National AI Strategy, which sets out a plan for the responsible adoption of AI technologies. The high-level strategy focuses on ethics, transparency, and accountability. The UK also established an AI Council in 2019 to advise the government on AI policy and regulation, as well as the Centre for Data Ethics and Innovation, an independent advisory body that provides guidance on the responsible use of data-driven technologies. While the UK’s approach to AI regulation has been well-received by industry, some experts have criticized the lack of concrete regulations and enforceable standards.

More recently, in March 2023, the UK government published a white paper staking out what it called “A pro-innovation approach to AI regulation.” The white paper sets out a “flexible” approach to regulating AI that is intended to both build public trust and make it easier for businesses to grow. Rather than creating a new, dedicated regulatory body or a single legal instrument for AI, the UK government is encouraging regulators and departments to tailor strategies for individual sectors, with the goal of supporting innovation and adaptability. The white paper outlines five principles that regulators must consider to facilitate the safe and innovative use of AI in their industries, including transparency, fairness, and accountability.

United States: A Fragmented Approach

Efforts in the US to regulate AI have mostly taken place at the state level; to date, there is no comprehensive federal law in place. California and Illinois, for instance, have passed laws focused on data privacy and the use of AI. The California Consumer Privacy Act (CCPA) requires companies to disclose what personal information they collect and how it will be used, including whether it is used for AI or machine learning purposes.

Federal action is starting to take shape, however. In 2019, the White House directed federal agencies to prioritize AI research and development, establish AI governance structures, and promote workforce development to prepare for the impact of AI on the economy and society. Additionally, the National Institute of Standards and Technology (NIST) has developed an AI Risk Management Framework that emphasizes transparency, explainability, and bias mitigation. While the US approach to AI regulation has been criticized for its lack of centralization and enforceability, some experts argue that a flexible approach is necessary to allow for innovation and growth in the AI sector.

From a policy perspective, the most notable federal step came in October 2022, when the White House’s Office of Science and Technology Policy released its Blueprint for an AI Bill of Rights, which sets out principles for the development and deployment of AI in the US.

This document will guide future federal AI-related policy in the US and could help address some of the key challenges associated with AI development and deployment. However, the Blueprint has some clear blind spots and limitations: it is non-binding, it leaves unresolved how to balance privacy and transparency concerns, and it offers little detail on how to ensure that AI systems do not perpetuate existing biases and inequalities.

Emerging Alignment and Cooperation

In addition to their respective domestic efforts, both countries have recognized the need for international collaboration in addressing the challenges posed by AI. In June 2023, the UK and US jointly announced the Atlantic Declaration, a broad bilateral agreement that includes (as part of a long list) a commitment to closer cooperation on AI regulation, ethical standards, and the sharing of best practices.

The Atlantic Declaration’s AI provisions could be a significant step in global AI policy: if two of the world’s biggest knowledge economies manage to harmonize best practices, regulatory approaches, ethical rules and guidelines, and cross-border data governance, they would set the standard for the rest of the world. Even short of that, by exchanging knowledge and experience, the US and UK can learn from each other’s approaches and develop more effective regulatory frameworks of their own. Reducing fragmentation would help create a level playing field for AI developers and users. Collaboration on ethical considerations would help ensure that AI systems uphold principles of fairness, transparency, and individual rights. And cooperation on cross-border data governance would facilitate secure and responsible data flows for AI innovation while safeguarding privacy and security.

For Canadian policymakers, being left on the outside of a serious cooperative effort between our first- and third-largest trading partners, with both of whom we share historical, cultural and institutional ties, is a problem. AI regulation is not, and cannot be, solely a national issue, especially for smaller countries. As we continue to refine our own AI regulatory framework with AIDA, Canada needs to actively engage in these cooperative activities. By participating in international initiatives, Canada can establish leadership in global AI governance, benefit from shared expertise, and ensure its regulatory framework remains aligned with evolving international standards. We have a robust AI sector in Canada, but if we make ourselves a regulatory outlier, we risk undercutting our homegrown leaders.

Next month, we’ll take a look at the European Union, which has been at the forefront of regulating advanced technologies. We’ll evaluate the public policy lessons for Canada from the General Data Protection Regulation (GDPR) and the proposed Artificial Intelligence Act, which takes a more stringent approach to AI than the US or UK.

Mooseworks is the Council of Canadian Innovators' innovation policy series. To get posts like this delivered to your inbox twice a month, sign up for CCI's newsletter here.
