AI Standards Hub

Global Summit 2026

Building confidence in AI: Standards, measurement and assurance in practice



Date:

16-17 March 2026



Location:

Glasgow and online

Returning for its second year, the AI Standards Hub Global Summit will dive into the practical dimensions of AI standards, assurance, and measurement and their growing role in global AI governance.

About the summit

The AI Standards Hub Global Summit 2026 will bring together UK and global leaders shaping the future of safe, secure, and trustworthy AI. Organised in partnership with the Organisation for Economic Co-operation and Development (OECD), the Office of the United Nations High Commissioner for Human Rights, The Data Lab, and Responsible AI UK, the two-day event will explore how robust standards and credible assurance can build confidence in AI systems, enable innovation, and accelerate the adoption of trustworthy AI.

The Global Summit 2026 will be held on 16 and 17 March as a hybrid event in Glasgow and online. In-person attendance will be by invitation and sessions will be live streamed for global accessibility.

What to expect

The programme will feature a combination of keynote speeches, expert panels, and interactive sessions designed to advance alignment, knowledge-sharing, and collaboration among key stakeholder groups and decision-makers in pursuit of robust, equitable, and coordinated approaches to AI standards, measurement, and assurance.

Bringing together expert perspectives from standards development bodies, measurement institutes, national governments, intergovernmental organisations, the AI research community, and civil society, the event will provide a unique platform to advance efforts around AI standards-making, measurement, and assurance as key foundations to build confidence in AI systems.

Global Summit 2026

Register your interest in the event

Please complete the form below to receive updates about the event as more information becomes available, including instructions on how to join the livestream, and to register your interest in attending the event in person.

The data you submit to the AI Standards Hub Global Summit may be shared with our partners for the purposes of administering the event.

For more information about how we handle your personal data, please see our privacy notice.

Speakers

More information coming soon.

Agenda

Day One Schedule

9:00 am - 9:30 am

9:30 am - 9:45 am

Summary

Session details forthcoming.

9:45 am - 10:35 am

Summary

Anchored in the Seoul Statement on Artificial Intelligence, this panel explores how Standards Development Organisations (SDOs) are turning shared global principles into coordinated, practical action for responsible AI. Adopted at the International AI Standards Summit in Seoul in December 2025, the Statement reaffirmed a collective commitment to developing human‑centred, safe, inclusive, and effective international AI standards. This session shines a spotlight on the people across global, regional, and national SDOs who are delivering on those commitments, particularly around inclusion and practical deployment.

10:35 am - 10:55 am

Networking Break

10:55 am - 11:05 am

Summary

Session details forthcoming.

11:05 am - 11:20 am

Summary

With the development of harmonised standards for the EU AI Act entering the final stretch, the focus of AI standardisation is broadening from trustworthiness and risk mitigation towards quality, including performance. This has the potential to boost market transparency and thus competition as a driver of innovation, but it also presents the challenge of standardising meaningful metrics across a plethora of domains and use cases. This talk outlines the transition to AI quality as the next frontier, the current state of play, and how this shift is gradually being reflected in the standardisation landscape.

Speaker
Dr Sebastian Hallensleben
Chair of CEN-CENELEC JTC 21 “Artificial Intelligence”, CEN-CENELEC

11:20 am - 12:10 pm

Summary

AI evaluation is highly context specific. Risks and performance requirements vary across sectors, applications and deployment environments. This makes national approaches to testing and assurance, and their international alignment, particularly important.

This panel will examine how different countries are developing frameworks for testing and assuring AI systems. It will look at national strategies, regulatory requirements and the technical methods used to support safe and trustworthy AI.

Panellists will outline their countries’ approaches, highlighting early lessons, emerging good practice and the challenges of aligning standards across borders. The session will also consider opportunities for international cooperation to improve interoperability and support responsible innovation.

Each panellist will give a five-minute overview of their national approach to testing, evaluation, validation, and verification (TEVV), followed by a moderated discussion.

12:10 pm - 1:10 pm

Lunch

1:10 pm - 2:55 pm

Summary

This interactive session examines how AI assurance is being implemented in practice today, where gaps persist between expectations and reality, and what is needed to build a credible, effective, and globally aligned AI assurance ecosystem.

It will bring together perspectives from across the AI assurance ecosystem, including third-party assurance providers, AI developers, and deploying organisations, to explore:

  • What credible AI assurance looks like in practice today
  • Where existing standards and frameworks are helping and where gaps remain
  • What capabilities and mechanisms are most urgently needed to strengthen trust and comparability in AI assurance, and how these can be enabled by technical testing.

This session is designed as an evidence-gathering and sense-making exercise, combining expert insights with structured audience input to surface common challenges, gaps, and priorities across sectors.

1:10 pm - 2:55 pm

Summary

Traditional AI systems typically generate outputs that humans then interpret and act on, so technical evaluation—accuracy, bias, robustness—captures much of what matters. But as AI systems become more autonomous, embedded, and influential, evaluation must expand beyond technical performance. It needs to account for the people who deploy these systems, the communities they affect, the institutional and regulatory contexts they operate within, and the assumptions embedded in data, model design, and implementation choices.

Taking this socio-technical approach calls for collaboration across disciplines and meaningful engagement with end-users and other key stakeholders, so that emerging metrics, standards, and regulation reflect real-world conditions and consequences.

Delivered by Responsible AI UK, this workshop will bring together experts in AI and applied AI from across the UK to examine evaluation challenges across multiple sectors, including law enforcement, healthcare, and cultural heritage. Leads from major projects will present scenarios, research questions, and early findings from their work. Interactive roundtables will then involve participants in identifying shared concerns and practical priorities, helping to shape recommendations for policymakers, researchers, and industry.

1:10 pm - 2:00 pm

Summary

AI and AI-powered technologies are now embedded across everyday life and work, from monitoring consumer sleep patterns to shaping employee decision making and organisational strategy. Yet much of today’s human–AI collaboration happens without sufficient awareness, skills, or mechanisms to question, contest, or meaningfully engage with AI outputs. This interactive session explores how AI standards and standards literacy can play a critical role in building human capacity to collaborate with AI systems responsibly and effectively. Grounded in the vision of Industry 5.0, the session brings together diverse perspectives to examine how skills development, education, and standards can empower humans not just to use AI, but to understand, govern, and coexist with it in ways that are fair, sustainable, human-centric, and resilient.

2:05 pm - 2:55 pm

Summary

The rapid deployment of artificial intelligence across public and private sectors is reshaping how decisions are made about people’s lives, opportunities, and access to services. While AI systems offer significant potential for innovation and efficiency, they also raise serious concerns about the amplification of existing inequalities, the entrenchment of systemic bias, and the erosion of fundamental human rights.

This interactive session explores the intersection of human rights and artificial intelligence, examining how AI technologies can affect individual rights such as equality, non-discrimination, dignity, privacy, and access to remedy. The session will critically assess whether existing legal, regulatory, and standards frameworks are sufficient to protect these rights in the context of increasingly advanced AI systems.

Central to the discussion is the Seoul Declaration, agreed at the AI Seoul Summit in May 2024, which positions AI safety, innovation, and inclusivity as interdependent goals and calls for enhanced international cooperation to promote human-centric AI and human rights. Drawing on diverse perspectives from ethics, industry, philosophy, civil society, and standards development, the session will explore how the principles of the Seoul Declaration can be operationalised through standards, governance mechanisms, and international collaboration.

2:55 pm - 3:15 pm

Networking Break

3:15 pm - 3:25 pm

Summary

Session details forthcoming.

3:25 pm - 4:15 pm

Summary

Session details forthcoming.

4:15 pm - 5:05 pm

Summary

This session will explore the invisible global framework of standards, conformity assessment, metrology, accreditation and market surveillance which underpins the quality and safety of products and services across the economy. It will examine how these elements are crucial to the development of a safe and secure AI assurance framework and look specifically at how the collaborative work of the AIQI Consortium is using the global quality ecosystem to develop a secure, safe and ethical AI governance framework.

5:05 pm - 5:15 pm

5:15 pm - 7:15 pm

Day Two Schedule

9:00 am - 9:10 am

9:10 am - 9:25 am

Summary

Session details forthcoming.

9:25 am - 10:15 am

Summary

This panel brings together leading voices in AI governance and standardisation to unpack the evolving landscape of AI risk management. Panellists will explore emerging standards and frameworks for identifying, assessing, and mitigating AI‑related risks, sharing practical insights from real‑world implementation in various deployment contexts. The discussion will also examine common challenges faced by organisations and highlight strategies for building and sustaining effective governance processes.

10:15 am - 10:30 am

Networking Break

10:30 am - 11:30 am

Summary

Autonomous systems are transforming how we move, but each transport domain has developed distinct safety cultures, regulatory frameworks, and approaches to verification. This panel brings together experts from across the landscape to explore how standards can enable safe deployment while fostering innovation. We will examine how we assure systems that learn and adapt; what different sectors can learn from others’ approaches; and how we develop standards that are rigorous enough to ensure public safety yet flexible enough to accommodate rapidly evolving technology.

10:30 am - 11:30 am

Summary

Artificial intelligence is increasingly embedded across healthcare systems, from diagnostics and clinical decision support to operational optimisation and patient engagement. While AI offers the potential to improve outcomes, efficiency, and access to care, its adoption in healthcare presents unique challenges related to safety, effectiveness, accountability, trust, and regulation. This sectoral focus panel will examine the practical application of AI standards and assurance mechanisms in healthcare, addressing how standards can support safe deployment, regulatory compliance, and real-world clinical impact. The session will bring together perspectives from clinical practice, health innovation leadership, validation and assurance, and AI development to explore how standards can bridge the gap between innovation and trustworthy use in healthcare settings.

11:30 am - 12:00 pm

Summary

Session details forthcoming.

12:00 pm - 1:00 pm

Lunch

1:00 pm - 4:30 pm

Summary

For more information on the programme, please visit the UK Digital Standards Summit event page.

More information coming soon.

Summit partners

OECD
Office of the United Nations High Commissioner for Human Rights