Welcome to our regular newsletter. Read on to find out what has been happening and what is coming up with the AI Standards Hub.
Upcoming event: The role of conformity assessment and quality infrastructure in supporting AI safety and regulatory compliance
Join us on 20 June for a webinar dedicated to the rapidly evolving field of conformity assessment for AI, organised in partnership with the AI Quality Infrastructure Consortium.
Featuring leading experts in AI assurance, the event will discuss the critical role of quality infrastructure and conformity assessment in relation to the EU AI Act as well as other emerging regulatory and safety requirements. Speakers will offer insights into how conformity assessment helps build trust, ensure compliance, and promote the safe development and use of AI. The session will also examine relevant standardisation efforts and the relationship between standardisation and other quality infrastructure components in enabling conformity assessments.
AI Standards Hub Global Summit: Recordings available
We are pleased to share that video recordings of our Global Summit are now available to watch on our website.
Held in March, the Summit examined recent developments, key challenges, and emerging needs to foster global inclusiveness and collaboration in AI standardisation. Combining keynote speeches, expert panels, and lightning talks, the programme featured leading international voices from standards development bodies, national governments, intergovernmental organisations, the AI safety research community, and civil society. Sessions provided in-depth insights into the current state of AI standardisation, the relationship between standards and regulation, strategies for advancing inclusion, and the role of standards in governing foundation models.
Published as two playlists, the recordings let you watch individual sessions or each day's programme in full.
Recording: A trustworthy AI lifecycle for medical devices
You can now watch the recording of our webinar on applying NPL’s trustworthy and safe AI lifecycle framework to the detection of atrial fibrillation (AF) using wearable devices that capture photoplethysmography (PPG) signals. Featuring a presentation and panel discussion, the session considered the importance of risk assessment and trustworthy AI metrics in informing the software development process.
Over 100 new items added to the Standards Database
We have recently added more than 100 new items to the Hub’s AI Standards Database. Roughly two thirds of the additions are pre-draft standards, providing insight into early-stage work in key AI standards committees.
Among the newly added items are 10 additional CEN-CENELEC JTC 21 projects and nearly 30 proposed IEEE standards covering topics related to foundation models, GPTs, and other LLMs. We have also added published ITU standards on topics like security in connected vehicles and in-development ISO standards on AI safety, quality, and organisational governance.
|
Updated CEN-CENELEC JTC 21 status dashboard
CEN-CENELEC JTC 21, the standardisation committee responsible for developing standards to support the implementation of the EU AI Act, has released an updated version of the status dashboard that we shared in last September’s newsletter. The dashboard offers a useful overview of the committee’s work items at a single glance, along with valuable context such as how work items relate to existing international standards and the ten points set out in the European Commission's standardisation request.
New e-learning module: Explainability in AI
Our training platform has grown to include a new e-learning module. Developed by NPL, this module explores the socio-technical principle of explainability, i.e. the provision of clear and coherent explanations for specific model predictions or decisions. Topics covered include metrics for trustworthy and safe AI systems, the importance and benefits of explainability in AI systems, strategies for evaluating and measuring explainability, and the risks and trade-offs associated with different metrics.
Sign up for a user account
Creating a user account on the AI Standards Hub website will enable you to take advantage of a wide range of interactive features. Benefits include the ability to save standards and other database items to your personal dashboard, notifications when followed standards open for comment or move from draft to publication, being able to contribute to our community discussion forums, access to our e-learning modules, and much more!
Was this newsletter forwarded to you by someone else? Subscribe here to receive future editions directly in your inbox.