Welcome to the latest edition of our newsletter. Read on to find out what has been happening with the AI Standards Hub over the past few weeks.

White paper: A Life Cycle for Trustworthy and Safe Artificial Intelligence Systems

As part of the AI Standards Hub’s research programme, the National Physical Laboratory collaborated with other partners on a paper that explores how risk applies across the AI life cycle and influences the decisions made during each phase of AI system development. It also highlights the crucial role that measurement can play in quantifying trustworthiness and provides the basis for the technical measurement standards that will drive AI assurance, validation, verification, and testing.

Read the report
Webinar recording available: Towards secure AI for all - Inclusive approaches to AI security standardisation

Our last webinar examined inclusive approaches to AI security governance, with a focus on the role of civil society. Panellists discussed the importance of promoting AI security from the perspective of different stakeholder groups, considered specific challenges for multistakeholder participation in the AI security context, and critically assessed the role of standards in fostering an inclusive approach to AI security governance.

Watch the recording
UN High-Level Advisory Body on AI publishes final report

The United Nations Secretary-General’s High-Level Advisory Body on AI recently published its final report titled “Governing AI for Humanity”. The report sets out AI-related risks, considers existing structures, and makes recommendations aimed at addressing gaps and enhancing global cooperation in AI governance, including in the area of standardisation.

Read the report
New blogpost: From algorithms to allergies

According to data from the Met Office, almost one in four adults in England suffers from hay fever. This blogpost explores how machine learning can improve the algorithms behind predictive technologies designed to estimate pollen count. It also looks at a new project aimed at standardising pollen and fungal spore monitoring.

Read the blog
Training module of the month: Introduction to AI assurance

This course introduces AI assurance: why it is important, who and what it involves, and what it might look like in practice. It is aimed at anyone interested in learning about AI assurance, how it relates to standards, and its role in ensuring that AI systems are trustworthy.

Explore Introduction to AI assurance
Sign up for a user account

Creating a user account on the AI Standards Hub website will enable you to take advantage of a wide range of interactive features. Benefits include the ability to save standards and other database items to your personal dashboard, to receive notifications when followed standards open for comment or move from draft to publication, and to contribute to our community discussion forums, as well as access to our e-learning modules and much more!

Set up a user account

Was this newsletter forwarded to you by someone else? Subscribe here to receive future editions directly in your inbox.

Unsubscribe   |   Manage your subscription   |   View online