AI Standards Hub
Global Summit 2026
Building confidence in AI: Standards, measurement and assurance in practice
Date:
16-17 March 2026
Location:
Glasgow and online
Returning for its second year, the AI Standards Hub Global Summit will dive into the practical dimensions of AI standards, assurance, and measurement to explore their growing role in global AI governance.
About the summit
The AI Standards Hub Global Summit 2026 will bring together UK and global leaders shaping the future of safe, secure, and trustworthy AI. Organised in partnership with the Organisation for Economic Co-operation and Development, the Office of the United Nations High Commissioner for Human Rights, Partnership on AI, The Data Lab, and Responsible AI UK, the two-day event will explore how robust standards and credible assurance can build confidence in AI systems, enable innovation, and accelerate the adoption of trustworthy AI.
The Global Summit 2026 will be held on 16 and 17 March as a hybrid event in Glasgow and online. In-person attendance will be by invitation and sessions will be live streamed for global accessibility.
The summit committee wishes to thank Glasgow City Council and the Glasgow Convention Bureau for their involvement in and support of this summit.


What to expect
The programme will feature a combination of keynote speeches, expert panels, and interactive sessions designed to advance alignment, knowledge-sharing, and collaboration among key stakeholder groups and decision-makers in pursuit of robust, equitable, and coordinated approaches to AI standards, measurement, and assurance.
Bringing together expert perspectives from standards development bodies, measurement institutes, national governments, intergovernmental organisations, the AI research community, and civil society, the event will provide a unique platform to advance efforts around AI standards-making, measurement, and assurance as key foundations to build confidence in AI systems.

Register to attend the event online
Please note that in-person registration has now closed. Sessions held in the Plenary Room will be available via live stream. Please follow the link below to register for online participation:
https://www.eventsforce.net/turingevents/ctb/BPTCGW
Speakers

Peggy Hicks

Luis Aranda

Gilles Thonet

Ultan Mulligan

Dr Sebastian Hallensleben

Anneke Auer-Olvera

Barbara Glover

Sarvapali (Gopal) Ramchurn

Prof Catherine RƩgis

Matt Gantley

Eva Ignatuschtschenko

Adam Leon Smith

Patricia Shaw

Sahar Danesh

Harry Hothi

Radu Calinescu

Simon Burton

Simone Stumpf

Florian Ostmann

Avtar Benning

Thomas Doms

Michaela Coetsee

James Gealy

Rania Wazir

Dr Jacqui Taylor

Ana Alania

Chris Barnes

Paul Duncan

Stacie Hoffmann

Yoko Kaneko

Arcangelo Leone De Castris

Moulham Alsuleman

Louise Axon-Jones

Joslyn Barnhart

David Bell

Sundeep Bhandari

Laura Bishop

Chris Boyland

David Cuckow

Suzi Daley

Sue Daley OBE

Gillian Docherty

Duncan Duffy

Sean Duncan

Jesse Dunietz

Tim Engelhardt

Joe Fulwood

Richard Goodwin

Shivani Gupta

Issy Hall

Wan Sie Lee

Darren Lewis

Maria Liakata

Prof. Stephen McArthur

Margarete McGrath

Paul Miller

Chris Nathan

Michel Oliveira de Souza

Michael Orgill

Enrico Panai

Cindy Parokkil

Jacob Pratt

Sara Rendtorff-Smith

Vasileios Rovilos

Adam Sobey

Evdoxia Taka

Clarisse de Vries
Agenda
Day One Schedule
9:00 am - 9:30 am
Welcome and opening remarks
Summary
Civic Welcome from the Lord Provost's Office followed by welcome to the Summit from the AI Standards Hub partners.

Plenary Room
Speakers

Sundeep Bhandari

Suzi Daley

Chris Nathan

David Cuckow
9:30 am - 9:45 am
Opening keynote from the Office of the United Nations High Commissioner for Human Rights
Summary
Plenary Room
Speaker

Peggy Hicks
9:45 am - 10:35 am
Operationalising SDO alignment and action for responsible AI standards
Summary
Anchored in the Seoul Statement on Artificial Intelligence, this panel explores how Standards Development Organisations (SDOs) are turning shared global principles into coordinated, practical action for responsible AI. Adopted at the International AI Standards Summit in Seoul in December 2025, the Statement reaffirmed a collective commitment to developing human-centred, safe, inclusive, and effective international AI standards. This session shines a spotlight on the people across global, regional, and national SDOs who are delivering on those commitments, particularly around inclusion and practical deployment.
Plenary Room
Speakers

Gilles Thonet

Ultan Mulligan

Cindy Parokkil

David Cuckow

Gillian Docherty
10:35 am - 10:55 am
Networking Break
10:55 am - 11:05 am
Opening address and comments from the Scottish Government
Summary
Introduction to the Scottish ecosystem, followed by a keynote presentation from Chris Boyland, the Scottish Government's Head of AI and Digital Growth, discussing AI capabilities and opportunities for growth in Scotland.
Plenary Room
Speaker

Chris Boyland
11:05 am - 11:20 am
Keynote: From AI trustworthiness to AI quality - standards, institutions and innovation
Summary
With the development of harmonised standards for the EU AI Act entering the finishing stretch, the focus of AI standardisation is broadening from trustworthiness and risk mitigation towards quality, including performance. This has the potential to boost market transparency and thus competition as a driver of innovation but also presents the challenge of standardising meaningful metrics across a plethora of domains and use cases. This talk outlines the transition to AI quality as the next frontier, the current state of play, and the gradual reflection in the standardisation landscape.
Plenary Room
Speaker

Dr Sebastian Hallensleben
11:20 am - 12:10 pm
National approaches to AI Testing, Evaluation, Verification, and Validation (TEVV)
Summary
AI evaluation is highly context specific. Risks and performance requirements vary across sectors, applications and deployment environments. This makes national approaches to testing and assurance, and their international alignment, particularly important.
This panel will examine how different countries are developing frameworks for testing and assuring AI systems. It will look at national strategies, regulatory requirements and the technical methods used to support safe and trustworthy AI.
Panellists will outline their countries' approaches, highlighting early lessons, emerging good practice and the challenges of aligning standards across borders. The session will also consider opportunities for international cooperation to improve interoperability and support responsible innovation.
Each panellist will give a five-minute overview of their national TEVV approach, followed by a moderated discussion.
Plenary Room
Speakers

Paul Duncan

Stacie Hoffmann

Anneke Auer-Olvera

Wan Sie Lee

Jesse Dunietz
12:10 pm - 1:10 pm
Lunch
1:10 pm - 2:55 pm
Track 1: AI assurance in practice: What works, what's missing, what's next
Summary
This interactive session examines how AI assurance is being implemented in practice today, where gaps persist between expectations and reality, and what is needed to build a credible, effective, and globally aligned AI assurance ecosystem.
It will bring together perspectives from across the AI assurance ecosystem, including third-party assurance providers, AI developers, and deploying organisations, to explore:
- What credible AI assurance looks like in practice today
- Where existing standards and frameworks are helping and where gaps remain
- What capabilities and mechanisms are most urgently needed to strengthen trust and comparability in AI assurance, and how these can be enabled by technical testing.
This session is designed as an evidence-gathering and sense-making exercise, combining expert insights with structured audience input to surface common challenges, gaps, and priorities across sectors.
Plenary Room
Speakers

Chris Barnes

Ana Alania

Dr Sebastian Hallensleben

Jacob Pratt

Avtar Benning

Michaela Coetsee

Richard Goodwin
1:10 pm - 2:55 pm
Track 2: Workshop: Socio-technical evaluation of agentic AI
Summary
Traditional AI systems typically generate outputs that humans then interpret and act on, so technical evaluation (accuracy, bias, robustness) captures much of what matters. But as AI systems become more autonomous, embedded, and influential, evaluation must expand beyond technical performance. It needs to account for the people who deploy these systems, the communities they affect, the institutional and regulatory contexts they operate within, and the assumptions embedded in data, model design, and implementation choices.
Taking this socio-technical approach calls for collaboration across disciplines and meaningful engagement with end-users and other key stakeholders, so that emerging metrics, standards, and regulation reflect real-world conditions and consequences.
Delivered by Responsible AI UK, this workshop will bring together experts in AI and applied AI from across the UK to examine evaluation challenges across multiple sectors, including law enforcement, healthcare, and cultural heritage. Leads from major projects will present scenarios, research questions, and early findings from their work. Interactive roundtables will then involve participants in identifying shared concerns and practical priorities, helping to shape recommendations for policymakers, researchers, and industry.
Carron Room
Speakers

Sarvapali (Gopal) Ramchurn

Simone Stumpf

Evdoxia Taka

Radu Calinescu

Maria Liakata
1:10 pm - 2:00 pm
Track 3a: Workshop: Building skills and standards literacy for industry 5.0
Summary
AI and AI-powered technologies are now embedded across everyday life and work, from monitoring consumer sleep patterns to shaping employee decision making and organisational strategy. Yet much of today's human-AI collaboration happens without sufficient awareness, skills, or mechanisms to question, contest, or meaningfully engage with AI outputs. This interactive session explores how AI standards and standards literacy can play a critical role in building human capacity to collaborate with AI systems responsibly and effectively. Grounded in the vision of Industry 5.0, the session brings together diverse perspectives to examine how skills development, education, and standards can empower humans not just to use AI, but to understand, govern, and coexist with it in ways that are fair, sustainable, human-centric, and resilient.
Dochart Room
Speakers

Laura Bishop

Shivani Gupta

Prof. Stephen McArthur

Margarete McGrath
2:05 pm - 2:55 pm
Track 3b: Workshop: Human rights and AI
Summary
The rapid deployment of artificial intelligence across public and private sectors is reshaping how decisions are made about people's lives, opportunities, and access to services. While AI systems offer significant potential for innovation and efficiency, they also raise serious concerns about the amplification of existing inequalities, the entrenchment of systemic bias, and the erosion of fundamental human rights.
This interactive session explores the intersection of human rights and artificial intelligence, examining how AI technologies can affect individual rights such as equality, non-discrimination, dignity, privacy, and access to remedy. The session will critically assess whether existing legal, regulatory, and standards frameworks are sufficient to protect these rights in the context of increasingly advanced AI systems.
Central to the discussion is the Seoul Declaration, agreed at the AI Seoul Summit in May 2024, which positions AI safety, innovation, and inclusivity as interdependent goals and calls for enhanced international cooperation to promote human-centric AI and human rights. Drawing on diverse perspectives from ethics, industry, philosophy, civil society, and standards development, the session will explore how the principles of the Seoul Declaration can be operationalised through standards, governance mechanisms, and international collaboration.
Dochart Room
Speakers

Sahar Danesh

Patricia Shaw

Rania Wazir

Enrico Panai

Tim Engelhardt
2:55 pm - 3:15 pm
Networking break
3:15 pm - 3:25 pm
Keynote address from the OECD Directorate for Science, Technology and Innovation
Summary
Plenary Room
Speaker

Sara Rendtorff-Smith
3:25 pm - 4:15 pm
AI regulation and its limits: A holistic look at the AI governance toolbox
Summary
A look at the evolving landscape of AI regulation around the world not only reveals significant differences in the approaches taken in different jurisdictions but also brings to light important needs and objectives that AI regulation cannot, or may not be best placed to, address. This panel will take a holistic look at the AI governance toolbox and explore the relationship between AI regulation, standards, and other AI governance tools. Featuring perspectives from different regions of the world, the discussion will examine relevant developments in AI regulation, the 'limits' of AI regulation, and the role of other governance tools in relation to these limits.
Plenary Room
Speakers

Florian Ostmann

Luis Aranda

Barbara Glover

Eva Ignatuschtschenko

Jacob Pratt
4:15 pm - 5:05 pm
Ensuring trust in AI: The role of the global quality ecosystem
Summary
This session will explore the invisible global framework of standards, conformity assessment, metrology, accreditation and market surveillance which underpins the quality and safety of products and services across the economy. It will examine how these elements are crucial to the development of a safe and secure AI assurance framework and look specifically at how the collaborative work of the AIQI Consortium is using the global quality ecosystem to develop a secure, safe and ethical AI governance framework.
Plenary Room
Speakers

Adam Leon Smith

Matt Gantley

Thomas Doms

Dr Jacqui Taylor

Rania Wazir
5:05 pm - 5:15 pm
Closing remarks
5:15 pm - 7:15 pm
Civic Reception: Celebrating Scotland's heritage, hosted by the Rt Hon the Lord Provost of Glasgow
Summary
An evening celebration with entertainment, drinks and street food


Day Two Schedule
9:00 am - 9:10 am
Welcome and introduction to the day
9:10 am - 9:25 am
Opening keynote
Summary
Plenary Room
Speaker

Prof Catherine RƩgis
9:25 am - 10:15 am
Building trust in AI: Best practices for identifying and managing risks
Summary
This panel brings together leading voices in AI governance and standardisation to unpack the evolving landscape of AI risk management. Panellists will explore emerging standards and frameworks for identifying, assessing, and mitigating AI-related risks, sharing practical insights from real-world implementation in various deployment contexts. The discussion will also examine common challenges faced by organisations and highlight strategies for building and sustaining effective governance processes.
Plenary Room
Speakers

Arcangelo Leone De Castris

James Gealy

Sue Daley OBE

Joslyn Barnhart

Vasileios Rovilos
10:15 am - 10:30 am
Networking Break
10:30 am - 11:30 am
Track 1: AI standards and assurance in healthcare
Summary
Artificial intelligence is increasingly embedded across healthcare systems, from diagnostics and clinical decision support to operational optimisation and patient engagement. While AI offers the potential to improve outcomes, efficiency, and access to care, its adoption in healthcare presents unique challenges related to safety, effectiveness, accountability, trust, and regulation. This sectoral focus panel will examine the practical application of AI standards and assurance mechanisms in healthcare, addressing how standards can support safe deployment, regulatory compliance, and real-world clinical impact. The session will bring together perspectives from clinical practice, health innovation leadership, validation and assurance, and AI development to explore how standards can bridge the gap between innovation and trustworthy use in healthcare settings.
Plenary Room
Speakers

Harry Hothi

Sean Duncan

Clarisse de Vries

Moulham Alsuleman
10:30 am - 11:30 am
Track 2: Autonomous transport and standards
Summary
Autonomous systems are transforming how we move, but each transport domain has developed distinct safety cultures, regulatory frameworks, and approaches to verification. This panel brings together experts from across the landscape to explore how standards can enable safe deployment while fostering innovation. We will examine how we assure systems that learn and adapt; what different sectors can learn from others' approaches; and how we develop standards that are rigorous enough to ensure public safety yet flexible enough to accommodate rapidly evolving technology.
Carron Room
Speakers

Simon Burton

Chris Nathan

Yoko Kaneko

Duncan Duffy

Michael Orgill
10:30 am - 11:30 am
Track 3: Critical National Infrastructure (CNI) and AI Security Research
Summary
Securing AI in UK Critical National Infrastructure: AI is creating major opportunities to improve the efficiency and resilience of UK Critical National Infrastructure (CNI), from predictive maintenance to decision support and planning. Yet AI-related threats are rapidly emerging, both from deploying AI into CNI environments and from attackers using AI to increase their scale and sophistication. Governance, protective controls, and standards are struggling to keep pace. This panel and interactive workshop will examine real-world CNI deployment challenges, deep-diving into the upcoming ETSI/DSIT AI security standard to identify the most material implementation obstacles and the best practices that support adherence. Run by the Laboratory for AI Security Research (LASR), the session brings together voices from industry providers and adopters, government, and academia to produce an actionable overview of key risks and the most important gaps to address in securing AI for CNI.
Dochart Room
Speakers

Darren Lewis

Paul Miller

Louise Axon-Jones

Joe Fulwood

Issy Hall
11:30 am - 12:00 pm
CLOSING SESSION
Summary
Moderated by Prof Catherine Régis, the closing panel will reflect on the Summit's proceedings, discussing key takeaways and necessary actions based on our experiences over the past two days. Panellists will include CEOs, directors and senior staff from the AI Standards Hub partner organisations: the National Physical Laboratory, the Alan Turing Institute, BSI and the United Kingdom Accreditation Service.
Plenary Room
Speakers

Prof Catherine RƩgis

Stacie Hoffmann

Adam Sobey

David Bell

Matt Gantley
12:00 pm - 1:00 pm
Lunch
1:15 pm - 6:00 pm
UK Digital Standards Summit
Summary
For more information on the programme, please visit the UK Digital Standards Summit event page.
Plenary Room
As part of the AI Standards Hub summit this year, we are pleased to run an exhibition for our delegates to find out more about our delivery partners and local research institutions. Please join us in the main hall on both days, where the following organisations will be on hand to discuss their capabilities, AI programmes and opportunities for collaboration:
The National Physical Laboratory (NPL) is the UK's National Metrology Institute (NMI), developing and maintaining the national primary measurement standards, as well as collaborating with other NMIs to maintain the international system of measurement. As a public sector research establishment, we deliver extraordinary impact by providing the measurement capability that underpins the UK's prosperity and quality of life. We develop the metrology required to ensure the timely and successful deployment of new technologies and work with organisations as they develop and test new products and processes.
BSI builds trust in digital transformation by convening stakeholders to agree priorities for action, then creating and sharing consensus-based best practice. BSI is an independent, trusted partner to government and industry, and helps protect consumers from digital harms. BSI's work addresses people and processes, as well as technology, to increase understanding, confidence and value for all. As the National Standards Body, BSI also represents the UK on the world stage, playing a leading role in the development and take-up of international standards.
The Alan Turing Institute is the UK's national institute for data science and artificial intelligence. The Institute is named in honour of Alan Turing, whose pioneering work in theoretical and applied mathematics, engineering and computing is considered to have laid the foundations for modern-day data science and artificial intelligence. The Institute's purpose is to make great leaps in data science and AI research to change the world for the better. Its goals are to advance world-class research and apply it to national and global challenges, build skills for the future by contributing to training people across sectors and career stages, and drive an informed public conversation by providing balanced and evidence-based views on data science and AI.
The United Kingdom Accreditation Service (UKAS) is the National Accreditation Body for the UK, as appointed by the UK Government. Its role is to assess that organisations providing conformity assessment services (certification, testing, inspection, calibration and verification) are meeting a required standard of performance. UKAS accreditation demonstrates an organisation's competence, impartiality and performance capability against nationally and internationally recognised standards. For further information please visit www.ukas.com.
The goals of the AdSoLve project are twofold. The first is to create an extensive evaluation framework (including benchmarks with suitable novel criteria, metrics and tasks, as well as transparent reference-free evaluation) for assessing the limitations of LLMs in real-world settings, particularly in medical and legal applications. To ensure the proposed evaluation framework covers the right criteria, the AdSoLve team is working closely with legal and medical professionals and other stakeholders who shape the development and adoption of LLM products and services. This involves both a programme of co-creation workshops and interviews with key informants to identify key requirements for these products and services, and the development of novel AI methodology for assessing the quality of generated content beyond traditional benchmarks and metrics. The second goal is to devise novel mitigating solutions that address identified LLM limitations and can be incorporated in products and services, based on new machine learning methodology, informed by expertise in law, ethics and healthcare, and developed through co-creation with domain experts. The methodology includes the development of modules for temporal reasoning and situational awareness in long-form text, dialogue and multi-modal data, as well as alignment with human preferences, bias reduction and privacy preservation.
PHAWM: Artificial intelligence (AI) applications have become ubiquitous in their impact on individuals and society, highlighting a crucial need for their responsible development. Recent research has called for participatory AI auditing, empowering individuals without AI expertise to audit AI applications throughout the entire AI development pipeline. The RAi UK-funded Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, a collaboration of the Universities of Glasgow, Edinburgh, Sheffield, Stirling, Strathclyde and York and King's College London, focuses on investigating how to support these kinds of auditors through participatory AI auditing tools and processes. We are pleased to demonstrate our work at the AI Standards Hub Global Summit.
The Laboratory for AI Security Research (LASR) is a collaboration between the public and private sectors in the UK to bring together the best minds in AI security. LASR is dedicated to mitigating security risks to and from artificial intelligence (AI) to strengthen national security and support economic growth. Launched in November 2024 at the NATO Cyber Defence Conference, the initiative brings together world-leading experts from UK organisations including Plexal, the University of Oxford, The Alan Turing Institute, Queen's University Belfast and the UK Government, alongside a broad network of academic, industry, and international partners. LASR conducts cutting-edge research at the intersection of AI and cyber security, develops novel capabilities and skills, accelerates research commercialisation, and fosters international collaboration for the secure development and deployment of AI.
The Department for Science, Innovation and Technology (DSIT) leads the Government's strategic engagement on digital technical standards. It works with the multistakeholder community to shape the development of global digital technical standards in the areas that matter most for upholding our democratic values, ensuring our cyber security, and advancing UK strategic interests through science and technology.
The National Manufacturing Institute Scotland (NMIS) is a trusted R&D partner helping manufacturers across the UK de-risk innovation, strengthen supply chains, and accelerate the adoption of advanced technologies. Operated by the University of Strathclyde and part of the High Value Manufacturing (HVM) Catapult, NMIS supports businesses through expertise in digital and data-driven manufacturing, materials and process engineering, advanced manufacturing, and circularity, including exploring technologies such as artificial intelligence and advanced data to improve productivity and efficiency.
The Data Lab is Scotland's Innovation Centre for Data and AI. At The Data Lab, we believe in a thriving, connected society powered by data and AI, where cross-sector conversations, creativity, and collaboration ignite and sustain the sparks of ground-breaking ideas, products and societal advances. We're uniquely positioned to support and incubate responsible innovation in data and AI across the public, private and academic sectors. We work with students, professionals, universities and colleges, businesses and our wider community to build a stronger and more collaborative economy and future for Scotland and the world.
Partnership on AI (PAI) is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society. By convening diverse, international stakeholders, we seek to pool collective wisdom to make change. We are not a trade group or advocacy organization. We develop tools, recommendations, and other resources by inviting voices from across the AI community and beyond to share insights that can be synthesized into actionable guidance. We then work to drive adoption in practice, inform public policy, and advance public understanding. Through dialogue, research, and education, PAI is addressing the most important and difficult questions concerning the future of AI.
Summit partners



