Blog post by
Douglas Davidson Kelly
Senior Market & Sector Intelligence Analyst, BSI
In recent blog posts we’ve discussed concepts like risk, uncertainty, trustworthiness, safety, and security in the context of AI and machine learning. While these are priorities for organisations using or developing AI systems, a key question remains: how can AI systems be tested against regulatory frameworks, such as that created by the forthcoming EU AI Act?
In this post we explore the potential benefits of AI regulatory sandboxes – a new twist on the kinds of sandboxes that have been successfully deployed by more than 100 regulators worldwide, in areas like financial technology (‘fintech’), telecommunications and energy. AI regulatory sandboxes are increasingly being viewed as playing a key role in testing AI solutions for regulatory compliance. We also look at emerging research and directions different nations are taking toward their deployment.
What is a regulatory sandbox?
The term ‘sandbox’ comes from the concept of a child’s sandbox: a safe environment in which to experiment with and learn from new ideas without being subject to the strict rules and measures characteristic of the outside world. Regulatory sandboxes, which often focus on a particular sector, enable businesses and regulators to experiment in a similar way with technological innovations and policy requirements. Key features of regulatory sandboxes include clear entry and exit requirements, consumer safeguards, and some leeway from existing rules and regulations to enable testing and experimentation. Regulatory sandboxes are increasingly seen as a key regulatory approach when dealing with complex technologies like AI, partly due to their ability to support regulatory learning. The OECD and the European Union have both endorsed the use of sandboxes for AI, and Spain is leading on the EU’s first AI sandbox program.
What are the potential benefits of AI regulatory sandboxes?
AI regulatory sandboxes offer four key benefits:
- Reducing the time and cost of commercialising AI products and services. A growing body of evidence shows that fintech sandboxes have successfully reduced the time and cost of commercialising new solutions in financial services. AI faces an equally complex, if not more complex, regulatory terrain, so similar time and cost savings can be expected for AI products and services.
- Supporting improved regulation via regulatory learning. The sandbox provides a mechanism for regulatory bodies to learn about new AI-enabled technologies, products, and practices, better equipping them to make suitable adjustments to regulatory requirements and policies. Without AI sandboxes, regulators (and, hence, regulation) may struggle to keep pace with developments in AI—and that could negatively impact consumers and businesses alike.
- Improving AI standards development processes and enabling standards to be developed alongside regulation. The sandbox model could also be applied to standards making, giving standards development organisations a means to develop more impactful standards at pace through quicker learning and feedback. In this setting, sandboxes would provide the opportunity to discuss and test ideas directly, with prompt expert feedback on standards. Combining a standards-making sandbox with a regulatory sandbox could also allow standards and regulation to be developed simultaneously.
- Enabling increased market participation by small and medium-sized enterprises (SMEs). It is widely believed that compliance with complex regulation is more challenging and expensive for SMEs than for larger companies. A sandbox that trials regulations with smaller organisations—and generates input for the development of standards—may help counteract this effect, especially if advice on regulatory compliance is provided free of charge.
The EU AI Act and the Spanish sandbox pilot
The current draft text of the EU AI Act calls for sandboxes to be established with the aims of facilitating compliance with regulation, establishing legal certainty for innovators and accelerating market access, particularly for SMEs. Spain is taking the lead on development of a pilot AI regulatory sandbox to test this approach, with 10 other EU member states planning to establish their own. The Spanish pilot will report on its findings in late 2023, after the country has assumed the presidency of the Council of the European Union.
Spain has created a new regulator, the State Agency for the Supervision of Artificial Intelligence (AESIA). This agency will be responsible for the development, supervision, and monitoring of projects within the framework of Spain’s National AI Strategy, as well as projects in the context of the European Union. This is a different approach to that being pursued in the UK, where the intent is to assign regulatory responsibilities for AI to existing regulators.
UK Government support for AI regulatory sandboxes
The UK Government announced new forms of support for artificial intelligence as part of the Spring Budget 2023 and, at the end of March, published a white paper on AI regulation in the UK. The paper outlines a multi-agency approach, tasking existing regulators with developing approaches to AI regulation suited to their respective remits. Guiding this approach are five principles intended to facilitate best practice in AI development: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
The plan includes the creation of a new £2 million regulatory sandbox to enable businesses to test and understand the relationship between their AI products and regulatory expectations. It is hoped this approach will support innovation and allow new ideas to come to market more rapidly.
The announcement has received broad support from regulators and industry representatives. The proposed sandbox approach for AI builds on the UK’s history as a global leader in pioneering such approaches. It also aligns in many ways with the AI Standards Hub’s objectives and activities. We will continue to monitor and help shape this approach through the AI Standards Hub.
The Hub will continue to work with our partners to understand and analyse the deployment and evolution of sandboxes for AI. We will also continue to learn from the approaches taken by established sandboxes and the benefits they have delivered, and explore how those lessons apply to AI.
We also plan to reach out to organisations involved in deploying regulatory sandboxes in other parts of the world, including the USA, Norway, and Switzerland, to better understand their approaches to AI sandboxes.
We will report back with findings and more emerging developments as these sandbox pilots begin to take shape. We are also interested in hearing your thoughts on sandboxes and invite you to share your own experiences on AI or other software-related sandboxes in the comments below.