AI Standards Hub
Global Summit 2026
Building confidence in AI: Standards, measurement and assurance in practice
Date:
16-17 March 2026
Location:
Glasgow and online
Returning for its second year, the AI Standards Hub Global Summit will dive into the practical dimensions of AI standards, assurance, and measurement to explore their growing role in global AI governance.
About the summit
The AI Standards Hub Global Summit 2026 will bring together UK and global leaders shaping the future of safe, secure, and trustworthy AI. Organised in partnership with the Organisation for Economic Co-operation and Development, the Office of the United Nations High Commissioner for Human Rights, The Data Lab, and Responsible AI UK, the two-day event will explore how robust standards and credible assurance can build confidence in AI systems, enable innovation, and accelerate the adoption of trustworthy AI.
The Global Summit 2026 will be held on 16 and 17 March as a hybrid event in Glasgow and online. In-person attendance will be by invitation and sessions will be live streamed for global accessibility.
What to expect
The programme will feature a combination of keynote speeches, expert panels, and interactive sessions designed to advance alignment, knowledge-sharing, and collaboration among key stakeholder groups and decision-makers in pursuit of robust, equitable, and coordinated approaches to AI standards, measurement, and assurance.
Bringing together expert perspectives from standards development bodies, measurement institutes, national governments, intergovernmental organisations, the AI research community, and civil society, the event will provide a unique platform to advance efforts around AI standards-making, measurement, and assurance as key foundations to build confidence in AI systems.

Speakers
More information coming soon.
Agenda
Day One Schedule
9:00 am - 9:30 am
Welcome and opening remarks
9:30 am - 9:45 am
Opening keynote from the Office of the United Nations High Commissioner for Human Rights
Summary
Session details forthcoming.
9:45 am - 10:35 am
Operationalising SDO alignment and action for responsible AI standards
Summary
Anchored in the Seoul Statement on Artificial Intelligence, this panel explores how Standards Development Organisations (SDOs) are turning shared global principles into coordinated, practical action for responsible AI. Adopted at the International AI Standards Summit in Seoul in December 2025, the Statement reaffirmed a collective commitment to developing human-centred, safe, inclusive, and effective international AI standards. This session shines a spotlight on the people across global, regional, and national SDOs who are delivering on those commitments, particularly around inclusion and practical deployment.
10:35 am - 10:55 am
Networking Break
10:55 am - 11:05 am
Scottish Government keynote
Summary
Session details forthcoming.
11:05 am - 11:20 am
Keynote: From AI trustworthiness to AI quality: standards, institutions and innovation
Summary
With the development of harmonised standards for the EU AI Act entering the final stretch, the focus of AI standardisation is broadening from trustworthiness and risk mitigation towards quality, including performance. This has the potential to boost market transparency and thus competition as a driver of innovation, but it also presents the challenge of standardising meaningful metrics across a plethora of domains and use cases. This talk outlines the transition to AI quality as the next frontier, the current state of play, and the gradual reflection in the standardisation landscape.
Speaker

Dr Sebastian Hallensleben
11:20 am - 12:10 pm
National approaches to AI Testing, Evaluation, Verification, and Validation (TEVV)
Summary
AI evaluation is highly context specific. Risks and performance requirements vary across sectors, applications and deployment environments. This makes national approaches to testing and assurance, and their international alignment, particularly important.
This panel will examine how different countries are developing frameworks for testing and assuring AI systems. It will look at national strategies, regulatory requirements and the technical methods used to support safe and trustworthy AI.
Panellists will outline their countries' approaches, highlighting early lessons, emerging good practice and the challenges of aligning standards across borders. The session will also consider opportunities for international cooperation to improve interoperability and support responsible innovation.
Each panellist will give a five-minute overview of their national TEVV approach, followed by a moderated discussion.
12:10 pm - 1:10 pm
Lunch
1:10 pm - 2:55 pm
Track 1: AI assurance in practice: What works, what's missing, what's next
Summary
This interactive session examines how AI assurance is being implemented in practice today, where gaps persist between expectations and reality, and what is needed to build a credible, effective, and globally aligned AI assurance ecosystem.
It will bring together perspectives from across the AI assurance ecosystem, including third-party assurance providers, AI developers, and deploying organisations, to explore:
- What credible AI assurance looks like in practice today
- Where existing standards and frameworks are helping and where gaps remain
- What capabilities and mechanisms are most urgently needed to strengthen trust and comparability in AI assurance, and how these can be enabled by technical testing.
This session is designed as an evidence-gathering and sense-making exercise, combining expert insights with structured audience input to surface common challenges, gaps, and priorities across sectors.
1:10 pm - 2:55 pm
Track 2: Workshop: Socio-technical evaluation of agentic AI
Summary
Traditional AI systems typically generate outputs that humans then interpret and act on, so technical evaluation (accuracy, bias, robustness) captures much of what matters. But as AI systems become more autonomous, embedded, and influential, evaluation must expand beyond technical performance. It needs to account for the people who deploy these systems, the communities they affect, the institutional and regulatory contexts they operate within, and the assumptions embedded in data, model design, and implementation choices.
Taking this socio-technical approach calls for collaboration across disciplines and meaningful engagement with end-users and other key stakeholders, so that emerging metrics, standards, and regulation reflect real-world conditions and consequences.
Delivered by Responsible AI UK, this workshop will bring together experts in AI and applied AI from across the UK to examine evaluation challenges across multiple sectors, including law enforcement, healthcare, and cultural heritage. Leads from major projects will present scenarios, research questions, and early findings from their work. Interactive roundtables will then involve participants in identifying shared concerns and practical priorities, helping to shape recommendations for policymakers, researchers, and industry.
1:10 pm - 2:00 pm
Track 3: Workshop: Building skills and standards literacy for industry 5.0
Summary
AI and AI-powered technologies are now embedded across everyday life and work, from monitoring consumer sleep patterns to shaping employee decision making and organisational strategy. Yet much of today's human-AI collaboration happens without sufficient awareness, skills, or mechanisms to question, contest, or meaningfully engage with AI outputs. This interactive session explores how AI standards and standards literacy can play a critical role in building human capacity to collaborate with AI systems responsibly and effectively. Grounded in the vision of Industry 5.0, the session brings together diverse perspectives to examine how skills development, education, and standards can empower humans not just to use AI, but to understand, govern, and coexist with it in ways that are fair, sustainable, human-centric, and resilient.
2:05 pm - 2:55 pm
Track 4: Workshop: Human rights and AI
Summary
The rapid deployment of artificial intelligence across public and private sectors is reshaping how decisions are made about people's lives, opportunities, and access to services. While AI systems offer significant potential for innovation and efficiency, they also raise serious concerns about the amplification of existing inequalities, the entrenchment of systemic bias, and the erosion of fundamental human rights.
This interactive session explores the intersection of human rights and artificial intelligence, examining how AI technologies can affect individual rights such as equality, non-discrimination, dignity, privacy, and access to remedy. The session will critically assess whether existing legal, regulatory, and standards frameworks are sufficient to protect these rights in the context of increasingly advanced AI systems.
Central to the discussion is the Seoul Declaration, agreed at the AI Seoul Summit in May 2024, which positions AI safety, innovation, and inclusivity as interdependent goals and calls for enhanced international cooperation to promote human-centric AI and human rights. Drawing on diverse perspectives from ethics, industry, philosophy, civil society, and standards development, the session will explore how the principles of the Seoul Declaration can be operationalised through standards, governance mechanisms, and international collaboration.
2:55 pm - 3:15 pm
Networking break
3:15 pm - 3:25 pm
Keynote address from the OECD Directorate for Science, Technology and Innovation
Summary
Session details forthcoming.
3:25 pm - 4:15 pm
Beyond regulation: The governance mechanisms needed for trustworthy AI
Summary
Session details forthcoming.
4:15 pm - 5:05 pm
Ensuring trust in AI: The role of the global quality ecosystem
Summary
This session will explore the invisible global framework of standards, conformity assessment, metrology, accreditation and market surveillance which underpins the quality and safety of products and services across the economy. It will examine how these elements are crucial to the development of a safe and secure AI assurance framework and look specifically at how the collaborative work of the AIQI Consortium is using the global quality ecosystem to develop a secure, safe and ethical AI governance framework.
5:05 pm - 5:15 pm
Closing remarks
5:15 pm - 7:15 pm
Civic Reception hosted by The Rt Hon The Lord Provost of Glasgow and evening celebration of Scotland's heritage.
Day Two Schedule
9:00 am - 9:10 am
Welcome and introduction to the day
9:10 am - 9:25 am
Opening keynote
Summary
Session details forthcoming.
9:25 am - 10:15 am
Building trust in AI: Best practices for identifying and managing risks
Summary
This panel brings together leading voices in AI governance and standardisation to unpack the evolving landscape of AI risk management. Panellists will explore emerging standards and frameworks for identifying, assessing, and mitigating AI-related risks, sharing practical insights from real-world implementation in various deployment contexts. The discussion will also examine common challenges faced by organisations and highlight strategies for building and sustaining effective governance processes.
10:15 am - 10:30 am
Networking Break
10:30 am - 11:30 am
Track 1: Autonomous transport and standards
Summary
Autonomous systems are transforming how we move, but each transport domain has developed distinct safety cultures, regulatory frameworks, and approaches to verification. This panel brings together experts from across the landscape to explore how standards can enable safe deployment while fostering innovation. We will examine how we assure systems that learn and adapt; what different sectors can learn from othersā approaches; and how we develop standards that are rigorous enough to ensure public safety yet flexible enough to accommodate rapidly evolving technology.
10:30 am - 11:30 am
Track 2: AI standards and assurance in healthcare
Summary
Artificial intelligence is increasingly embedded across healthcare systems, from diagnostics and clinical decision support to operational optimisation and patient engagement. While AI offers the potential to improve outcomes, efficiency, and access to care, its adoption in healthcare presents unique challenges related to safety, effectiveness, accountability, trust, and regulation. This sectoral focus panel will examine the practical application of AI standards and assurance mechanisms in healthcare, addressing how standards can support safe deployment, regulatory compliance, and real-world clinical impact. The session will bring together perspectives from clinical practice, health innovation leadership, validation and assurance, and AI development to explore how standards can bridge the gap between innovation and trustworthy use in healthcare settings.
11:30 am - 12:00 pm
Closing session
Summary
Session details forthcoming.
12:00 pm - 1:00 pm
Lunch
1:00 pm - 4:30 pm
UK Digital Standards Summit
Summary
For more information on the programme, please visit the UK Digital Standards Summit event page.
More information coming soon.
Summit partners