International trends in AI governance – part 1: Hard regulatory approaches

Blog post by

Arcangelo Leone de Castris

Researcher (The Alan Turing Institute)

This is the first piece in a new series of blog posts exploring international trends in the governance of AI technologies. After a brief introduction to provide context, the blog post presents early findings from our ongoing research project on AI governance, highlighting policy initiatives of jurisdictions that, at present, have adopted what can be referred to as a ‘hard’ regulatory approach. The next blog post in the series will add to the current discussion by focusing on countries that have developed ‘soft’ regulatory approaches.

AI governance: a heterogeneous international landscape

As countries strive to harness the benefits of AI-driven innovation, one of the main challenges they face is developing adequate policies to unlock potential opportunities while mitigating risks. Over-regulation could stifle innovation and slow down economic development, thus yielding a strategic advantage to international competitors; failing to set clear principles and rules for the development and deployment of AI, on the other hand, could lead to unethical and harmful applications of the technology. Being a first mover in establishing frameworks for AI governance can entail significant advantages for countries in terms of their ability to shape AI governance trends internationally.

As individual jurisdictions work to define their own approaches to governing AI, a heterogeneous landscape has started to emerge internationally. Despite this heterogeneity, it is nevertheless possible to identify shared and contrasting characteristics in how different countries take on the challenge of regulating AI technologies. On the one hand, there are countries that adopt what may be referred to as a ‘hard’ approach. These countries rely on laws and regulations to set legally enforceable requirements for developing and deploying AI technologies. On the other hand, there are countries pursuing ‘soft’ approaches that tend to steer away from compulsory, top-down regulatory frameworks.

What countries have adopted hard rules to govern AI?

In our initial research phase, we surveyed nine jurisdictions.[1] Four of these jurisdictions are currently pursuing hard approaches, in the sense defined above. Two – Canada and China – have already passed enforceable rules for AI, while legislation in the EU and Brazil is still underway.

Canada has been an early mover in AI regulation, adopting its Directive on Automated Decision-Making as early as April 2019. The Directive, which entered into force one year after its adoption, aims to ensure that, when using AI technologies to make administrative decisions, government institutions comply with administrative law principles such as transparency, accountability, legality, and procedural fairness. More recently, the Canadian government also tabled the Artificial Intelligence and Data Act (AIDA), the country’s flagship AI regulation. Inspired by the European Union’s AI Act and informed by the Organisation for Economic Co-operation and Development (OECD) AI Principles, the AIDA aims to establish a comprehensive, cross-sectoral, and risk-based regulatory framework to mitigate AI harms and promote responsible innovation. Tabled in June 2022 as part of Bill C-27, the AIDA is still awaiting Royal Assent. Once approved, at least two years will be needed to develop the regulations necessary to implement it.

While the scope of Canada’s Directive on Automated Decision-Making is limited to AI applications in the public sector, China has adopted AI rules that also apply to private organisations. Over the past two years, China has adopted rules limiting the use of recommendation algorithms and ‘deep synthesis’ technologies. Moreover, on 11 April 2023, China became the first country to propose a regulation targeting generative AI systems. Expected to enter into force before the end of 2023, this regulation includes obligations to design generative AI models that are “accurate and true,” that embody core socialist values, and that do not discriminate on the basis of race, faith, or gender. Finally, on 31 May 2023, the State Council of the People’s Republic of China published its work plan for 2023, announcing that a draft AI law will soon be submitted for consideration to the Standing Committee of the National People’s Congress.

The EU is another major player in AI governance. Its Artificial Intelligence Act (AIA) – which will likely be adopted in early 2024 and enter into force after a two-year grace period – is hailed as the first comprehensive regulation on AI. It will set rules applicable to all providers that place AI systems on the market, or that use them, within the EU’s jurisdiction. The AIA follows a risk-based approach, setting proportionally stringent requirements based on the risk profile attributed to AI systems in their specific use cases. The AIA has also been shaped by the Ethics Guidelines for Trustworthy AI developed by the High-Level Expert Group on AI. In addition, in September 2022, the European Commission proposed the AI Liability Directive. The Directive, which is still at an early negotiation stage, will complement the regulatory framework laid out in the AIA by ensuring that European citizens can access effective redress routes in cases of harm caused by AI products.

Finally, Brazil has recently started to discuss a Draft AI Law. While the law has been welcomed as the country’s most serious effort to regulate AI to date, a concrete timeline for its approval is yet to be defined. As it stands, Brazil’s Draft AI Law takes inspiration from the EU AIA in that it proposes general rules for AI technologies applicable across sectors, following a risk-based approach.


It is important to keep in mind that classifying countries’ approaches using discrete categories such as ‘soft’ and ‘hard’ somewhat oversimplifies a much more nuanced reality. No country follows a purely hard or soft approach; rather, each combines elements of the two. Fundamental differences also exist between countries that can be grouped under the same high-level approach. Nevertheless, while it is important to be mindful of the nuances lost in simplifying and comparing these diverse and complex approaches, the exercise remains useful for navigating the evolving international AI policy landscape and enabling effective strategic foresight for policymaking.

[1] Brazil, Canada, China, the European Union, India, Singapore, South Korea, the UK, and the US.
