Welcome to our regular newsletter. Read on to find out what has been happening and what is coming up with the AI Standards Hub.

Upcoming event: ETSI EN 304 223 – Draft standard consultation session

Join us on 6 November for a virtual session focused on ETSI EN 304 223, a draft standard that sets out baseline cybersecurity requirements for artificial intelligence systems. The interactive event is aimed at stakeholders across government, industry, civil society, and academia who are interested in providing feedback on the draft standard. We will be joined by Issy Hall, the rapporteur for this ETSI work item, who will give an overview of the standard’s relevance to UK AI security policy. A discussion of key sections of the document will follow, aimed at identifying possible gaps, implementation challenges, and links to other standards initiatives.

Report publication: A trustworthy and safe AI lifecycle case study

This recently published report provides an in-depth look at NPL’s work – featured in our webinar earlier this year – on applying the trustworthy and safe AI lifecycle framework to the detection of atrial fibrillation using a wearable AI-based medical device equipped with a photoplethysmography signal sensor. The report describes specific risks and mitigations, outlines the most relevant aspects of AI trustworthiness and the associated software engineering challenges, and presents metrics that can be used to quantify some of these aspects. Beyond its value as a case study, the report describes a methodology that can be applied to numerous other AI systems.

Research series: AI governance around the world

The AI Governance Around the World project presents research by the Alan Turing Institute, mapping the evolving international AI governance landscape through a series of country profiles. Each profile provides an overview of a jurisdiction’s approach to AI regulation and standardisation, highlighting the high-level aims and principles, definitions of relevant technologies, key policy initiatives, and the main features of the respective standardisation systems. The first set of profiles, now available, covers the European Union, India, Singapore, and Canada. As further profiles are added, the project will shed light on where international approaches are converging and where greater coordination may be needed.

Blog series: Lessons from businesses adopting ISO/IEC 42001

ISO/IEC 42001, the world’s first international AI Management System standard, has been a recurring topic in our events and publications. In a new blog post series, Julian Adams (BSI) shares insights from qualitative research with early adopters of the standard. The research engaged with compliance officers, information security managers, engineers, and technical leads to ascertain organisations’ experiences working with the standard. The first two blog posts in the series are available now.

Recording available: The role of conformity assessment and quality infrastructure in supporting AI safety and regulatory compliance

The recording of our most recent webinar, held in partnership with the AI Quality Infrastructure Consortium (AIQI), is available to watch on our website. Featuring presentations, a panel discussion, and an audience Q&A, the event offered in-depth insights into how conformity assessment and quality infrastructure can support AI compliance, foster trust, and promote safe innovation.

Featured e-learning module: Explainability in AI

Developed by NPL, this module explores the socio-technical principle of explainability, i.e. the provision of clear and coherent explanations for specific model predictions or decisions. Topics covered include the importance and benefits of explainability in AI systems, strategies for evaluating and measuring explainability, metrics for trustworthy and safe AI systems, and the risks and trade-offs associated with different metrics.

Sign up for a user account

Creating a user account on the AI Standards Hub website will enable you to take advantage of a wide range of interactive features. Benefits include saving standards and other database items to your personal dashboard, receiving notifications when followed standards open for comment or move from draft to publication, contributing to our community discussion forums, accessing our e-learning modules, and much more!

Was this newsletter forwarded to you by someone else? Subscribe here to receive future editions directly in your inbox.