AI Standards Hub

Global Summit 2026

Building confidence in AI: Standards, measurement and assurance in practice



Date:

16-17 March 2026



Location:

Glasgow and online

Returning for its second year, the AI Standards Hub Global Summit will dive into the practical dimensions of AI standards, assurance, and measurement to explore their growing role in global AI governance.

About the summit

The AI Standards Hub Global Summit 2026 will bring together UK and global leaders shaping the future of safe, secure, and trustworthy AI. Organised in partnership with the Organisation for Economic Co-operation and Development, the Office of the United Nations High Commissioner for Human Rights, The Data Lab, and Responsible AI UK, the two-day event will explore how robust standards and credible assurance can build confidence in AI systems, enable innovation, and accelerate the adoption of trustworthy AI.

The Global Summit 2026 will be held on 16 and 17 March as a hybrid event in Glasgow and online. In-person attendance will be by invitation and sessions will be live streamed for global accessibility.

The summit committee wishes to thank Glasgow City Council and the Glasgow Convention Bureau for their involvement in and support of this summit.

What to expect

The programme will feature a combination of keynote speeches, expert panels, and interactive sessions designed to advance alignment, knowledge-sharing, and collaboration among key stakeholder groups and decision-makers in pursuit of robust, equitable, and coordinated approaches to AI standards, measurement, and assurance.

Bringing together expert perspectives from standards development bodies, measurement institutes, national governments, intergovernmental organisations, the AI research community, and civil society, the event will provide a unique platform to advance efforts around AI standards-making, measurement, and assurance as key foundations to build confidence in AI systems.

Global Summit 2026

Register your interest in the event

Please complete the form below to receive updates as more information becomes available, including instructions on how to join the livestream, and to register your interest in attending the event in person.

The data you submit to the AI Standards Hub Global Summit may be shared with our partners for the purposes of administering the event.

For more information about how we handle your personal data, please see our privacy notice.

Speakers

Ana Alania
AI Programme Manager, National Physical Laboratory (NPL)
Ana Alania is an AI Programme Manager at the National Physical Laboratory (NPL), where she is responsible for the strategic development and delivery of NPL's programmes in AI.

Luis Aranda
Senior Economist - Artificial Intelligence, OECD
Luis Aranda is a Senior Economist in Artificial Intelligence at the OECD, which he joined in 2017. In this role, Luis has contributed to the scoping of the OECD AI Principles and the creation of the OECD.AI Policy Observatory and its network of experts.

Anneke Auer-Olvera
Director, Programs, AI and Data Governance, Standards Council of Canada
Anneke Auer-Olvera is Director of Programs at the Standards Council of Canada (SCC), where she leads national initiatives in artificial intelligence, data governance, infrastructure, and climate resilience.

Chris Barnes
Head of Science of AI, National Physical Laboratory (NPL)
Chris Barnes is Head of Science of AI at the National Physical Laboratory (NPL), where he leads strategic research into AI uncertainty quantification, trustworthy AI, and metrics for data quality.

Avtar Benning
Director, AI Assurance, Deloitte
Avtar Benning has 15 years of experience across consulting and financial services in a variety of quantitative roles specialising in data, analytics, and AI.

Simon Burton
Chair of Systems Safety, University of York
Professor Simon Burton, PhD, holds the Chair of Systems Safety at the University of York, UK, and is also Business Director of the Centre for Assuring Autonomy.

Radu Calinescu
Professor of Computer Science, University of York
Radu Calinescu is Professor of Computer Science at the University of York, UK. His interdisciplinary research has been supported by grants totalling over £25M from funders including UKRI, Lloyd's Register Foundation, ARIA, Dstl, the UK Atomic Energy Authority, and Microsoft Research.

Arcangelo Leone De Castris
Research Manager, The Alan Turing Institute
Arcangelo Leone De Castris is a Research Manager in the Public Policy Programme and is currently leading the Turing Institute's AI governance offerings for BridgeAI as well as supporting the development of the AI Standards Hub.

Michaela Coetsee
AI Ethics and Assurance Lead, Advai
Michaela Coetsee specialises in AI ethics and AI assurance at Advai, a leading third-party AI testing and assurance company. She has a background in psychology and data science, bringing a critical sociotechnical perspective to the conversation.

Thomas Doms
Global Product Lead AI Services - Managing Director, TRUSTIFAI, TÜV AUSTRIA Holding AG
After studying economics, Thomas Doms worked for more than 20 years in various management positions at international companies such as Raab Karcher, TÜV Rheinland, and T-Systems, in the areas of business development, innovation management, IT strategy, and information security.

Paul Duncan
Principal Scientist, National Physical Laboratory (NPL)
Dr Paul Duncan is a Principal Scientist at the UK's National Physical Laboratory (NPL), where he leads the Informatics group within the Data Science & AI department.

Sean Duncan
Clinical Research Fellow, Digital Health Validation Lab (DHVL)
Dr Sean Duncan is a medical doctor and senior Clinical Innovation Fellow at the Digital Health Validation Lab, University of Glasgow.

Matt Gantley
Chief Executive Officer, United Kingdom Accreditation Service
Matt Gantley is the CEO of UKAS, a leading global accreditation body and major contributor to the UK and global quality infrastructure. With over 25 years of experience in conformity assessment, Matt has had a distinguished career in the TIC sector.

James Gealy
Standardization Lead, SaferAI
James Gealy is Standardization Lead at SaferAI. He is co-project leader of EN AI Risk Management at CEN-CLC JTC 21, and editor of ISO/IEC TS 42119-8, addressing benchmarking and red-teaming of advanced LLMs.

Barbara Glover
Programme Officer, African Union (AUDA-NEPAD)
Barbara Glover is a dedicated advocate for science, technology, and innovation in Africa, with a background in innovation and emerging technologies and extensive experience in research and policy.

Shivani Gupta
Senior Policy Advisor, Confederation of British Industry
Shivani Gupta is a Senior Policy Advisor in the Technology and Innovation team at the CBI, where she leads the CBI's policy work on artificial intelligence and the digital economy.

Dr Sebastian Hallensleben
Chair of CEN-CENELEC JTC 21 "Artificial Intelligence", CEN-CENELEC
Dr Sebastian Hallensleben is the Chair of CEN-CENELEC JTC 21, where European AI standards to underpin EU regulation are being developed, and co-chairs the AI risk and accountability work at the OECD.

Peggy Hicks
Director of the Thematic and Special Procedures Division, OHCHR
Peggy Hicks has served as Director of the Thematic and Special Procedures Division at the UN's Human Rights Office since January 2016. From 2005 to 2015, she was global advocacy director at Human Rights Watch.

Stacie Hoffmann
Head of Department, Data Science & AI, National Physical Laboratory (NPL)
Stacie Hoffmann is the Head of Strategic Growth and Department for Data Science and AI at the National Physical Laboratory.

Harry Hothi
Healthcare Sector Lead, BSI
Harry Hothi is the Healthcare Sector Lead at BSI. He drives the growth and strategic direction of the healthcare portfolio, shaping how BSI supports innovation, safety, and compliance across the medical technology sector, with a particular focus on digital health, AI, and medical devices.

Eva Ignatuschtschenko
Director of Technology Insight, Competition and Markets Authority
Eva Ignatuschtschenko is the CMA's Director of Technology Insight, helping the CMA understand the technologies transforming digital markets, from AI to complex tech ecosystems.

Yoko Kaneko
Assured Autonomy Strategy and Business Development Manager, National Physical Laboratory (NPL)
Yoko Kaneko is a Strategy and Business Development Manager for the Assured Autonomy programme, part of the resilience and security national challenge at NPL.

Ultan Mulligan
Chief Services Officer, ETSI
Ultan Mulligan is Chief Services Officer at ETSI, responsible for ETSI's software and open source groups, testing and interoperability, preparation and publication of standards, and new tools and working methods.

Michael Orgill
CAV Project Engineer, HORIBA MIRA Ltd.
Michael Orgill is an expert in the testing, verification, and validation of Connected and Automated Vehicles (CAVs). He has also worked with OEMs and manufacturers across the civilian automotive sector to enhance the safety of Level 0-2 Advanced Driver Assistance Systems (ADAS).

Florian Ostmann
Distinguished Policy Fellow, London School of Economics and Political Science
Dr Florian Ostmann is a Distinguished Policy Fellow in the Data Science Institute at the London School of Economics and Political Science. Before joining LSE, Florian was Director of AI Governance and Regulatory Innovation at The Alan Turing Institute, leading the work on the AI Standards Hub.

Sarvapali (Gopal) Ramchurn
RAi UK CEO, Professor of Artificial Intelligence, University of Southampton
Sarvapali (Gopal) Ramchurn FIET is Professor of Artificial Intelligence and CEO of the £31M Responsible AI UK programme. He was also Director of the UKRI Trustworthy Autonomous Systems (TAS) Hub, sitting at the centre of the £33M Trustworthy Autonomous Systems Programme.

Catherine Régis
Professor, Université de Montréal / Mila
Catherine Régis is a full professor at the Faculty of Law of Université de Montréal and holds a Canada CIFAR Chair in AI and Human Rights as well as a Chair on Science Diplomacy and Global Governance of AI (FRQ).

Patricia Shaw
CEO, Global AI Governance and Standards Advisor, Lawyer, Beyond Reach Consulting Limited
Patricia Shaw is CEO of Beyond Reach Consulting Limited and an AI strategic advisor, working globally at the intersection of human rights, law, policy, governance, technology, and ethics.

Adam Leon Smith
Chair, AIQI Consortium
Adam Leon Smith is an AI regulatory and technical expert specialising in EU AI Act compliance and risk management. He is Chair of the AIQI Consortium, a global initiative to promote the use of the quality infrastructure for responsible AI, and Deputy Chair of the UK's national AI standards committee.

Simone Stumpf
Professor of Responsible and Interactive AI, University of Glasgow
Simone Stumpf is Professor of Responsible and Interactive AI in the School of Computing Science at the University of Glasgow. She has a long-standing research focus on user interactions with AI systems.

Evdoxia Taka
Research Associate, University of Glasgow
Dr Evdoxia Taka is a Research Associate in the School of Computing Science, University of Glasgow. She received her PhD in Computing Science from the same university, with the topic "Interactive Animated Visualizations of Probabilistic Models".

Dr Jacqui Taylor
CEO, Co-Founder, FlyingBinary
Dr Jacqui Taylor is CEO of two AI companies and was shortlisted for the European PICASSO award based on an AI safety collaboration with the ICO. She has been recognised as the #15 Most Influential Woman in UK Technology and one of the 21 Most Inspiring Women in Cyber.

Gilles Thonet
Deputy Secretary-General, IEC
Gilles Thonet is the Deputy Secretary-General of the International Electrotechnical Commission (IEC), as well as the Secretary of the IEC Standardization Management Board (SMB).

Clarisse de Vries
Lecturer in Data Science, University of Glasgow
Dr Clarisse de Vries is a Lecturer in Data Science at the University of Glasgow, specialising in the development and evaluation of Artificial Intelligence (AI) for breast cancer screening.

Rania Wazir
Co-founder & CTO, Leiwand.ai
Rania Wazir, PhD, is co-founder and CTO of leiwand.ai, a startup dedicated to supporting companies and organizations with the development and deployment of trustworthy AI. Rania is a data scientist and mathematician.

Agenda

Day One Schedule

9:00 am - 9:30 am

Civic Welcome by the Depute Lord Provost, Bailie Christy Mearns, representing the Lord Provost's office

9:30 am - 9:45 am

Session details forthcoming.

9:45 am - 10:35 am

Anchored in the Seoul Statement on Artificial Intelligence, this panel explores how Standards Development Organisations (SDOs) are turning shared global principles into coordinated, practical action for responsible AI. Adopted at the International AI Standards Summit in Seoul in December 2025, the Statement reaffirmed a collective commitment to developing human-centred, safe, inclusive, and effective international AI standards. This session shines a spotlight on the people across global, regional, and national SDOs who are delivering on those commitments, particularly around inclusion and practical deployment.

10:35 am - 10:55 am

Networking Break

10:55 am - 11:05 am

Session details forthcoming.

11:05 am - 11:20 am

With the development of harmonised standards for the EU AI Act entering the finishing stretch, the focus of AI standardisation is broadening from trustworthiness and risk mitigation towards quality, including performance. This has the potential to boost market transparency, and thus competition as a driver of innovation, but it also presents the challenge of standardising meaningful metrics across a plethora of domains and use cases. This talk outlines the transition to AI quality as the next frontier, the current state of play, and the gradual reflection of this shift in the standardisation landscape.

Speaker: Dr Sebastian Hallensleben, Chair of CEN-CENELEC JTC 21 "Artificial Intelligence", CEN-CENELEC

11:20 am - 12:10 pm

AI evaluation is highly context specific. Risks and performance requirements vary across sectors, applications, and deployment environments. This makes national approaches to testing and assurance, and their international alignment, particularly important.

This panel will examine how different countries are developing frameworks for testing and assuring AI systems. It will look at national strategies, regulatory requirements, and the technical methods used to support safe and trustworthy AI.

Panellists will outline their countries' approaches, highlighting early lessons, emerging good practice, and the challenges of aligning standards across borders. The session will also consider opportunities for international cooperation to improve interoperability and support responsible innovation.

Each panellist will give a five-minute overview of their national approach to testing, evaluation, verification, and validation (TEVV), followed by a moderated discussion.

12:10 pm - 1:10 pm

Lunch

1:10 pm - 2:55 pm

This interactive session examines how AI assurance is being implemented in practice today, where gaps persist between expectations and reality, and what is needed to build a credible, effective, and globally aligned AI assurance ecosystem.

It will bring together perspectives from across the AI assurance ecosystem, including third-party assurance providers, AI developers, and deploying organisations, to explore:

  • What credible AI assurance looks like in practice today
  • Where existing standards and frameworks are helping and where gaps remain
  • What capabilities and mechanisms are most urgently needed to strengthen trust and comparability in AI assurance, and how these can be enabled by technical testing

This session is designed as an evidence-gathering and sense-making exercise, combining expert insights with structured audience input to surface common challenges, gaps, and priorities across sectors.

1:10 pm - 2:55 pm

Traditional AI systems typically generate outputs that humans then interpret and act on, so technical evaluation (accuracy, bias, robustness) captures much of what matters. But as AI systems become more autonomous, embedded, and influential, evaluation must expand beyond technical performance. It needs to account for the people who deploy these systems, the communities they affect, the institutional and regulatory contexts they operate within, and the assumptions embedded in data, model design, and implementation choices.

Taking this socio-technical approach calls for collaboration across disciplines and meaningful engagement with end-users and other key stakeholders, so that emerging metrics, standards, and regulation reflect real-world conditions and consequences.

Delivered by Responsible AI UK, this workshop will bring together experts in AI and applied AI from across the UK to examine evaluation challenges across multiple sectors, including law enforcement, healthcare, and cultural heritage. Leads from major projects will present scenarios, research questions, and early findings from their work. Interactive roundtables will then involve participants in identifying shared concerns and practical priorities, helping to shape recommendations for policymakers, researchers, and industry.

1:10 pm - 2:00 pm

AI and AI-powered technologies are now embedded across everyday life and work, from monitoring consumer sleep patterns to shaping employee decision-making and organisational strategy. Yet much of today's human–AI collaboration happens without sufficient awareness, skills, or mechanisms to question, contest, or meaningfully engage with AI outputs. This interactive session explores how AI standards and standards literacy can play a critical role in building human capacity to collaborate with AI systems responsibly and effectively. Grounded in the vision of Industry 5.0, the session brings together diverse perspectives to examine how skills development, education, and standards can empower humans not just to use AI, but to understand, govern, and coexist with it in ways that are fair, sustainable, human-centric, and resilient.

2:05 pm - 2:55 pm

The rapid deployment of artificial intelligence across public and private sectors is reshaping how decisions are made about people's lives, opportunities, and access to services. While AI systems offer significant potential for innovation and efficiency, they also raise serious concerns about the amplification of existing inequalities, the entrenchment of systemic bias, and the erosion of fundamental human rights.

This interactive session explores the intersection of human rights and artificial intelligence, examining how AI technologies can affect individual rights such as equality, non-discrimination, dignity, privacy, and access to remedy. The session will critically assess whether existing legal, regulatory, and standards frameworks are sufficient to protect these rights in the context of increasingly advanced AI systems.

Central to the discussion is the Seoul Declaration, agreed at the AI Seoul Summit in May 2024, which positions AI safety, innovation, and inclusivity as interdependent goals and calls for enhanced international cooperation to promote human-centric AI and human rights. Drawing on diverse perspectives from ethics, industry, philosophy, civil society, and standards development, the session will explore how the principles of the Seoul Declaration can be operationalised through standards, governance mechanisms, and international collaboration.

2:55 pm - 3:15 pm

Networking break

3:15 pm - 3:25 pm

Session details forthcoming.

3:25 pm - 4:15 pm

Session details forthcoming.

4:15 pm - 5:05 pm

This session will explore the invisible global framework of standards, conformity assessment, metrology, accreditation, and market surveillance which underpins the quality and safety of products and services across the economy. It will examine how these elements are crucial to the development of a safe and secure AI assurance framework, and look specifically at how the collaborative work of the AIQI Consortium is using the global quality ecosystem to develop a secure, safe, and ethical AI governance framework.

5:05 pm - 5:15 pm

5:15 pm - 7:15 pm

An evening celebration with entertainment, drinks, and street food.

Day Two Schedule

9:00 am - 9:10 am

9:10 am - 9:25 am

Session details forthcoming.

9:25 am - 10:15 am

This panel brings together leading voices in AI governance and standardisation to unpack the evolving landscape of AI risk management. Panellists will explore emerging standards and frameworks for identifying, assessing, and mitigating AI-related risks, sharing practical insights from real-world implementation in various deployment contexts. The discussion will also examine common challenges faced by organisations and highlight strategies for building and sustaining effective governance processes.

10:15 am - 10:30 am

Networking Break

10:30 am - 11:30 am

Autonomous systems are transforming how we move, but each transport domain has developed distinct safety cultures, regulatory frameworks, and approaches to verification. This panel brings together experts from across the landscape to explore how standards can enable safe deployment while fostering innovation. We will examine how to assure systems that learn and adapt, what different sectors can learn from one another's approaches, and how to develop standards that are rigorous enough to ensure public safety yet flexible enough to accommodate rapidly evolving technology.

10:30 am - 11:30 am

Artificial intelligence is increasingly embedded across healthcare systems, from diagnostics and clinical decision support to operational optimisation and patient engagement. While AI offers the potential to improve outcomes, efficiency, and access to care, its adoption in healthcare presents unique challenges related to safety, effectiveness, accountability, trust, and regulation. This sectoral focus panel will examine the practical application of AI standards and assurance mechanisms in healthcare, addressing how standards can support safe deployment, regulatory compliance, and real-world clinical impact. The session will bring together perspectives from clinical practice, health innovation leadership, validation and assurance, and AI development to explore how standards can bridge the gap between innovation and trustworthy use in healthcare settings.

11:30 am - 12:00 pm

Session details forthcoming.

12:00 pm - 1:00 pm

Lunch

1:00 pm - 4:30 pm

For more information on the programme, please visit the UK Digital Standards Summit event page.

More information coming soon.

Summit partners

OECD
United Nations Human Rights Office of the High Commissioner