Welcome to the fourteenth edition of our newsletter. Read on to find out what has been happening with the AI Standards Hub over the past few weeks.

Latest blog post: International Trends in AI Governance (Part 1): Hard Regulatory Approaches
In our latest blog post, Arcangelo Leone de Castris discusses findings from our ongoing research on AI governance. The blog is the first in a series exploring emerging international trends in the governance of AI technologies and highlights the main policy initiatives of four jurisdictions that, at present, are pursuing a ‘hard’ regulatory approach to governing AI.
Read the blog post
CDEI portfolio of AI assurance techniques
Earlier this month, the CDEI published its new Portfolio of AI Assurance Techniques. The Portfolio, developed in collaboration with techUK, is a useful resource to help anybody involved in designing, developing, deploying or procuring AI systems better understand the benefits of AI assurance for organisations. It includes a searchable database of case studies where AI assurance techniques – such as certification against standards – are used to support the development of trustworthy AI. AI assurance techniques and standards will be essential to support the implementation of the five regulatory principles set out by the UK government in its recent AI regulation white paper.
About the portfolio
EU AI Act approaches trilogue negotiations
On 14 June, the European Parliament adopted its negotiating position for the Artificial Intelligence Act, with a continued emphasis on the role of harmonised standards for the implementation of the regulation. The position includes a ban on AI for biometric surveillance, emotion recognition, and predictive policing, as well as obligations on general purpose AI. This position will now feed into the so-called trilogue negotiations between the European Parliament, the European Commission, and the Council of the European Union, before a final version of the law is approved, potentially at the end of 2023 or in 2024.
Read more about the Act
CDEI Fairness Innovation Challenge: Call for Use Cases
The CDEI is planning to run a Fairness Innovation Challenge to support the development of novel solutions to address bias and discrimination across the AI lifecycle. The challenge aims to provide greater clarity about the assurance tools and techniques, including standards, that can be used to address bias and improve fairness in AI systems. The Equality and Human Rights Commission and the Information Commissioner's Office are supporting the challenge. The CDEI has launched a call for use cases to help identify real-world scenarios which could form the basis of specific challenge projects.
Learn how to get involved
Sign up for a user account
Creating a user account on the AI Standards Hub website will enable you to take advantage of a wide range of interactive features. Benefits include the ability to save standards and other database items to your personal dashboard, notifications when followed standards open for comment or move from draft to publication, being able to contribute to our community discussion forums, access to our e-learning modules, and much more!
Set up a user account

Was this newsletter forwarded to you by someone else? Subscribe here to receive future editions directly in your inbox!

Unsubscribe   |   Manage your subscription   |   View online