AI Standards Hub

Global Summit 2026

Building confidence in AI: Standards, measurement and assurance in practice



Date:

16-17 March 2026



Location:

Glasgow and online


Returning for its second year, the AI Standards Hub Global Summit will dive into the practical dimensions of AI standards, assurance, and measurement to explore their growing role in global AI governance.

About the summit

The AI Standards Hub Global Summit 2026 will bring together UK and global leaders shaping the future of safe, secure, and trustworthy AI. Organised in partnership with the Organisation for Economic Co-operation and Development, the Office of the United Nations High Commissioner for Human Rights, Partnership on AI, The Data Lab, and Responsible AI UK, the two-day event will explore how robust standards and credible assurance can build confidence in AI systems, enable innovation, and accelerate the adoption of trustworthy AI.

The Global Summit 2026 will be held on 16 and 17 March as a hybrid event in Glasgow and online. In-person attendance will be by invitation and sessions will be live streamed for global accessibility.

The summit committee wishes to acknowledge Glasgow City Council and the Glasgow Convention Bureau for their involvement in and support of this summit.

What to expect

The programme will feature a combination of keynote speeches, expert panels, and interactive sessions designed to advance alignment, knowledge-sharing, and collaboration among key stakeholder groups and decision-makers in pursuit of robust, equitable, and coordinated approaches to AI standards, measurement, and assurance.

Bringing together expert perspectives from standards development bodies, measurement institutes, national governments, intergovernmental organisations, the AI research community, and civil society, the event will provide a unique platform to advance efforts around AI standards-making, measurement, and assurance as key foundations to build confidence in AI systems.

Please be aware that, owing to a recent incident near Glasgow Central station, National Rail services are currently impacted and experiencing delays.

The low-level station reopened today, meaning you can now board and alight from trains to the Exhibition Centre stop from Glasgow Central.

The main station will remain closed until Sunday inclusive while work is undertaken to make the area around the collapsed building safe.

As a result, delegates travelling to Glasgow by train for the Summit are likely to be affected.

Avanti West Coast services from London Euston are currently terminating at Motherwell. From there, passengers can take connecting trains into Glasgow city-centre stations, including Argyle Street or Anderston.


 

Register to attend the event online

Please note that in-person registration has now closed. Sessions held in the Plenary Room will be available via live stream. Please follow the link below to register for online participation:
https://www.eventsforce.net/turingevents/ctb/BPTCGW

Speakers

Peggy Hicks

Peggy Hicks

Director of the Thematic and Special Procedures Division, Office of the United Nations High Commissioner for Human Rights (OHCHR)
Peggy Hicks has served as director of the Thematic and Special Procedures Division at the UN's Human Rights Office since January 2016. From 2005 to 2015, she was global advocacy director at Human Rights Watch.
View bio
Luis Aranda

Luis Aranda

Senior Economist - Artificial Intelligence, OECD
Luis Aranda is a Senior Economist in Artificial Intelligence at the OECD, which he joined in 2017. In this role, Luis has contributed to the scoping of the OECD AI Principles and the creation of the OECD.AI Policy Observatory and its network of experts.
View bio
Gilles Thonet

Gilles Thonet

Deputy Secretary-General, International Electrotechnical Commission (IEC)
Gilles Thonet is the Deputy Secretary-General of the International Electrotechnical Commission (IEC) as well as the Secretary of the IEC Standardization Management Board (SMB).
View bio
 Ultan  Mulligan

Ultan Mulligan

Chief Services Officer, European Telecommunications Standards Institute (ETSI)
Mr. Ultan Mulligan is Chief Services Officer at ETSI, responsible for ETSI’s software and Open Source groups, testing and interoperability, preparation and publication of standards, and new tools and working methods.
View bio
Sebastian Hallensleben

Dr Sebastian Hallensleben

Chair of CEN-CENELEC JTC 21 "Artificial Intelligence", CEN-CENELEC
Dr Sebastian Hallensleben is the Chair of CEN-CENELEC JTC 21, where European AI standards to underpin EU regulation are being developed, and co-chairs the AI risk and accountability work at the OECD.
View bio
Anneke  Auer-Olvera

Anneke Auer-Olvera

Director, Programs, AI and Data Governance, Standards Council of Canada (SCC)
Anneke Auer-Olvera is Director of Programs at the Standards Council of Canada (SCC), where she leads national initiatives in artificial intelligence, data governance, infrastructure, and climate resilience.
View bio
Barbara  Glover

Barbara Glover

Programme Officer, African Union (AUDA-NEPAD)
Barbara Glover is a dedicated advocate for science, technology, and innovation in Africa, with a background in innovation and emerging technologies and extensive experience in research and policy.
View bio
Sarvapali (Gopal) Ramchurn

Sarvapali (Gopal) Ramchurn

Responsible AI UK (RAI UK) CEO, Professor of Artificial Intelligence, University of Southampton
Sarvapali (Gopal) Ramchurn FIET is Professor of Artificial Intelligence and CEO of the £31M Responsible AI UK programme. He was also Director of the UKRI Trustworthy Autonomous Systems (TAS) Hub, sitting at the centre of the £33M Trustworthy Autonomous Systems Programme.
View bio
Catherine  RƩgis

Prof Catherine RƩgis

Co-director, Canadian Artificial Intelligence Safety Institute, Research Programme
Catherine RƩgis is a Full professor at the Faculty of Law of UniversitƩ de MontrƩal and holds a Canada CIFAR Chair in AI and Human Rights as well as a Chair on Science Diplomacy and Global Governance of AI (FRQ).
View bio
Matt  Gantley

Matt Gantley

Chief Executive Officer, United Kingdom Accreditation Service (UKAS)
Matt Gantley is the CEO of UKAS, a leading global Accreditation Body and major contributor to the UK and global quality infrastructure. With over 25 years of experience in conformity assessment, Matt has a distinguished career in the TIC Sector.
View bio
Eva Ignatuschtschenko

Eva Ignatuschtschenko

Director of Technology Insight, Competition and Markets Authority
Eva Ignatuschtschenko is the CMA’s Director of Technology Insight, helping the CMA understand the technologies transforming digital markets, from AI to complex tech ecosystems.
View bio
Adam Leon  Smith

Adam Leon Smith

Chair, AIQI Consortium
Adam Leon Smith is an AI regulatory and technical expert specialising in EU AI Act compliance and risk management. He is Chair of AIQI Consortium, a global initiative to promote the use of the quality infrastructure for responsible AI, and Deputy Chair of the UK’s national AI standards committee.
View bio
Patricia  Shaw

Patricia Shaw

CEO, Global AI Governance and Standards Advisor, Lawyer, Beyond Reach Consulting Limited
Patricia Shaw is CEO of Beyond Reach Consulting Limited, AI strategic advisor, working globally at the intersection of human rights, law, policy, governance, technology and ethics.
View bio
Sahar  Danesh

Sahar Danesh

Senior Government Engagement Manager and Digital Policy Lead, British Standards Institution (BSI)
Sahar Danesh is Senior Government Engagement Manager and Digital Policy Lead at BSI. Sahar is a founding partner of the AI Standards Hub and supports government engagement across international standards development organisations (SDOs), including ETSI and ITU.
View bio
Harry  Hothi

Harry Hothi

Healthcare Sector Lead, British Standards Institution (BSI)
Harry Hothi is the Healthcare Sector Lead at BSI. He drives the growth and strategic direction of the healthcare portfolio, shaping how BSI supports innovation, safety, and compliance across the medical technology sector, with a particular focus on digital health, AI and medical devices.
View bio
Radu  Calinescu

Radu Calinescu

Professor of Computer Science, University of York
Radu Calinescu is Professor of Computer Science at the University of York, UK. His interdisciplinary research has been supported by grants totalling over £25M from funders including UKRI, Lloyd’s Register Foundation, ARIA, Dstl, the UK Atomic Energy Authority, and Microsoft Research.
View bio
Simon Burton

Simon Burton

Chair of Systems Safety, University of York
Professor Simon Burton, PhD, holds the Chair of Systems Safety at the University of York, UK and is also Business Director of the Centre for Assuring Autonomy.
View bio
Simone  Stumpf

Simone Stumpf

Professor of Responsible and Interactive AI, University of Glasgow
Simone Stumpf is Professor of Responsible and Interactive AI at the School of Computing Science at University of Glasgow. She has a long-standing research focus on user interactions with AI systems.
View bio
Florian Ostmann

Florian Ostmann

Distinguished Policy Fellow, London School of Economics and Political Science
Dr Florian Ostmann is a Distinguished Policy Fellow in the Data Science Institute at the London School of Economics and Political Science. Before joining LSE, Florian was Director of AI Governance and Regulatory Innovation at The Alan Turing Institute, leading the work on the AI Standards Hub.
View bio
Avtar  Benning

Avtar Benning

Director, AI Assurance, Deloitte
Avtar Benning has 15 years of experience across consulting and financial services in various quantitative roles specialising in Data, Analytics and AI.
View bio
Thomas Doms

Thomas Doms

Global Product Lead AI Services - Managing Director, TRUSTFAI
Thomas Doms, after studying economics, worked for more than 20 years in various management positions in international companies such as Raab Karcher, TÜV Rheinland and T-Systems in the areas of business development, innovation management, IT strategy and information security.
View bio
Michaela  Coetsee

Michaela Coetsee

AI Ethics and Assurance Lead, Advai
Michaela Coetsee specialises in AI ethics and AI assurance at Advai, a leading third party AI testing and assurance company. She has a background in psychology and data science bringing a critical sociotechnical perspective to the conversation.
View bio
James Gealy

James Gealy

Standardization Lead, Safer AI
James Gealy is Standardization Lead at SaferAI. He is co-project leader of EN AI Risk Management at CEN-CLC JTC 21, and editor of ISO/IEC TS 42119-8 addressing benchmarking and red-teaming of advanced LLMs.
View bio
Rania Wazir

Rania Wazir

Co-founder and CTO, Leiwand.ai
Rania Wazir, Ph.D., is co-founder and CTO of leiwand.ai – a startup dedicated to supporting companies and organizations with the development and deployment of trustworthy AI. Rania is a data scientist and mathematician.
View bio
Jacqui  Taylor

Dr Jacqui Taylor

CEO, Co-Founder, FlyingBinary
Dr Jacqui Taylor is CEO of two AI companies and was shortlisted for the European PICASSO award for an AI safety collaboration with the ICO. She has been recognised as the #15 Most Influential Woman in UK Technology and one of the 21 Most Inspiring Women in Cyber.
View bio
Ana Alania

Ana Alania

AI Programme Manager, National Physical Laboratory (NPL)
Ana Alania is an AI Programme Manager at the National Physical Laboratory (NPL), where she is responsible for the strategic development and delivery of NPL's programmes in AI.
View bio
Chris Barnes

Chris Barnes

Head of Science of AI, National Physical Laboratory (NPL)
Chris Barnes is Head of Science of AI at the National Physical Laboratory (NPL), where he leads strategic research into AI uncertainty quantification, trustworthy AI, and metrics for data quality.
View bio
Paul Duncan

Paul Duncan

Principal Scientist, National Physical Laboratory (NPL)
Dr Paul Duncan is a Principal Scientist at the UK's National Physical Laboratory (NPL), where he leads the Informatics group within the Data Science & AI department.
View bio
Stacie Hoffmann

Stacie Hoffmann

Head of the Centre for AI Measurement, National Physical Laboratory (NPL)
Stacie is Head of the Centre for AI Measurement and Head of Strategic Growth for the Department for Data Science and AI at the National Physical Laboratory. She leads the strategic development of NPL’s capabilities to deliver confidence and trust in data and AI.
View bio
Yoko  Kaneko

Yoko Kaneko

Assured Autonomy Strategy and Business Development Manager, National Physical Laboratory (NPL)
Yoko Kaneko is a Strategy and Business Development Manager for the Assured Autonomy programme as a part of the resilience and security national challenge at NPL.
View bio
Arcangelo Leone De Castris

Arcangelo Leone De Castris

Research Manager, The Alan Turing Institute
Arcangelo Leone De Castris is a Research Manager in the Public Policy Programme and is currently leading the Turing Institute’s AI governance offerings for BridgeAI as well as supporting the development of the AI Standards Hub.
View bio
Moulham  Alsuleman

Moulham Alsuleman

Higher Scientist, National Physical Laboratory (NPL)
Dr Moulham Alsuleman is a Higher Scientist at the National Physical Laboratory, specialising in trustworthy AI-enabled systems for regulated domains. His work integrates expertise in pharmaceutical science, semantic technologies, and AI governance.
View bio
Louise  Axon-Jones

Louise Axon-Jones

Research Fellow, Global Cyber Security Capacity Centre, University of Oxford
Louise Axon-Jones is a Research Fellow in Cybersecurity at the University of Oxford. She is also involved with the Global Cyber Security Capacity Centre (GCSCC).
View bio
Joslyn Barnhart

Joslyn Barnhart

Frontier Standards and Governance Lead, Google DeepMind
View bio
David Bell

David Bell

Standards Policy Director, British Standards Institution (BSI)
View bio
Sundeep Bhandari

Sundeep Bhandari

Chief Digital Innovation Officer, National Physical Laboratory (NPL)
Sundeep (Sunny) works at the National Physical Laboratory (NPL), which is the UK's National Measurement Institute responsible for leading the country's measurement strategy and implementation.
View bio
Laura  Bishop

Laura Bishop

AI and Cyber Security Sector Lead, British Standards Institution (BSI)
Dr Laura Bishop is a Human Factors Cyberpsychologist. With a PhD in human vulnerabilities to cyber-attacks and research in both human-robot and human-computer interaction, Laura is highly experienced on the psychological and societal impacts of technology.
View bio
Chris Boyland

Chris Boyland

Head of AI and Digital Growth, Scottish Government
Chris Boyland is the Scottish Government’s Head of AI & Digital Growth, with responsibility for delivering the Government’s commitment to establish AI Scotland, a new national transformation programme which will harness the power of AI to drive net-positive economic growth.
View bio
David Cuckow

David Cuckow

Director of Digital, Knowledge Solutions, Sectors & Standards Development, British Standards Institution (BSI)
David Cuckow is Director, Digital Sector and Standards at BSI, leading work across the full digital landscape, including AI, cyber security, privacy, cloud, and digital and data infrastructure. He is a technologist with over 30 years of international experience.
View bio
Suzi Daley

Suzi Daley

External Affairs Manager, United Kingdom Accreditation Service (UKAS)
View bio
Sue Daley OBE

Sue Daley OBE

Director Tech and Innovation, techUK
Sue leads techUK's Technology and Innovation work. In 2025, Sue was honoured with an Order of the British Empire (OBE) for services to the Technology Industry in the New Year Honours List.
View bio
Gillian  Docherty

Gillian Docherty

Chief Commercial Officer, University of Strathclyde
Gillian Docherty is Chief Commercial Officer at the University of Strathclyde. Gillian has responsibility for Innovation & Industry Engagement, Research & Knowledge Exchange Services, Campus Services, Information Services and Marketing & Development.
View bio
Duncan  Duffy

Duncan Duffy

Head of Technology, Lloyds Register
Duncan Duffy is a Chartered Electronics and Electrical Engineer with 35 years’ experience of the safe and successful integration of maritime systems.
View bio
Sean  Duncan

Sean Duncan

Clinical Research Fellow, Digital Health Validation Lab (DHVL)
Dr Sean Duncan is a medical doctor and senior Clinical Innovation Fellow at the Digital Health Validation Lab, University of Glasgow.
View bio
Jesse  Dunietz

Jesse Dunietz

Computer Scientist, National Institute of Standards and Technology (NIST, USA)
Jesse Dunietz is a computer scientist at NIST, where he leads the Information Technology Laboratory’s international engagements on AI, technical assistance on AI policy, and AI standards work.
View bio
Tim Engelhardt

Tim Engelhardt

Human Rights Officer, Office of the United Nations High Commissioner for Human Rights (OHCHR)
View bio
Joe Fulwood

Joe Fulwood

Global Engagement and Communications Officer, Global Cyber Security Capacity Centre, University of Oxford
Joe Fulwood is a Global Engagement and Communications Officer at the GCSCC, where he oversees stakeholder relationships, contributes to research projects, and manages events and communications.
View bio
Richard  Goodwin

Richard Goodwin

Global Head Predictive Data & AI, Executive Director, AstraZeneca
Richard Goodwin leads a world-class team of bioinformaticians, data engineers, and AI scientists across Barcelona, Cambridge, and Bangalore, accelerating the use of AI and data to advance the AstraZeneca portfolio.
View bio
Shivani Gupta

Shivani Gupta

Senior Policy Advisor, Confederation of British Industry (CBI)
Shivani Gupta is a Senior Policy Advisor in the Technology and Innovation team at the CBI, where she leads CBI’s policy work on artificial intelligence and the digital economy.
View bio
Issy Hall

Issy Hall

Policy lead - Emerging technology cyber standards, Department for Science, Innovation and Technology
View bio
Wan Sie Lee

Wan Sie Lee

Cluster Director for AI Governance and Safety, Infocomm Media Development Authority
Lee Wan Sie is Cluster Director for AI Governance and Safety at Singapore’s Infocomm Media Development Authority. She is also the Head of Policy for Singapore’s AI Safety Institute (AISI).
View bio
Darren Lewis

Darren Lewis

Senior Innovation Lead - Technical and Commercial, Plexal
Darren Lewis is a Senior Innovation Lead at Plexal, the innovation and growth company. He brings over 25 years of experience in telecoms and innovation, underpinned by a background in Computer Science, Artificial Intelligence, and Psychology, and an MSc in Telecommunications Engineering.
View bio
Maria Liakata

Maria Liakata

Professor of Natural Language Processing, Queen Mary University of London
Maria Liakata holds a UKRI/EPSRC Turing AI fellowship (2019-2025) on creating time-sensitive sensors from user-generated language and heterogeneous content.
View bio
Stephen McArthur

Prof. Stephen McArthur

Principal and Vice-Chancellor, University of Strathclyde
View bio
Margarete McGrath

Margarete McGrath

Advisory & Strategy Partner Lead, Dell
View bio
Paul Miller

Paul Miller

Director Momentum One Zero, Centre for Secure Information Technologies, Queen’s University Belfast
View bio
Chris Nathan

Chris Nathan

Policy Fellow, The Alan Turing Institute
Dr Christopher Nathan is a Policy Fellow at the Alan Turing Institute.
View bio
Michel  Oliveira de Souza

Michel Oliveira de Souza

Human Rights Officer, Office of the United Nations High Commissioner for Human Rights (OHCHR)
Michel Roberto Oliveira de Souza is a Human Rights Officer at the UN Human Rights Office (OHCHR).
View bio
Michael  Orgill

Michael Orgill

CAV Project Engineer, HORIBA MIRA Ltd.
Michael Orgill is an expert in the testing, verification, and validation of Connected and Automated Vehicles (CAV). Michael has also worked with OEMs and manufacturers across the civilian automotive sector to enhance the safety of Level 0-2 Advanced Driver Assistance Systems (ADAS).
View bio
Enrico  Panai

Enrico Panai

AI Ethics, President, Association of AI Ethicists
Enrico Panai is an AI ethicist and the president of the Association of AI Ethicists. He teaches at the Catholic University of the Sacred Heart in Italy. He holds a philosophy degree and a PhD in AI Ethics and Cybergeography from the University of Sassari, where he taught Digital Humanities.
View bio
Cindy Parokkil

Cindy Parokkil

AI Policy Lead, International Organization for Standardization (ISO)
View bio
Jacob Pratt

Jacob Pratt

Policy Impact and Europe Lead, Partnership on AI
Jacob is the Public Affairs and Impact Lead in the Policy team at Partnership on AI. He collaborates with other researchers and industry colleagues on defining AI risks, assurance, impacts on labor and the economy, and other topics.
View bio
Sara Rendtorff-Smith

Sara Rendtorff-Smith

Head of the Division for AI and Emerging Digital Technologies, OECD
Sara Rendtorff-Smith heads the Division on AI and Emerging Digital Technologies at the OECD, leading the OECD’s AI governance work and driving strategic policy development on responsible AI and emerging digital technologies such as quantum and immersive digital technologies.
View bio
Vasileios Rovilos

Vasileios Rovilos

EU Policy Director, Credo AI
View bio
Adam Sobey

Adam Sobey

Mission Director, The Alan Turing Institute
View bio
Evdoxia  Taka

Evdoxia Taka

Research Associate, University of Glasgow
Dr Evdoxia Taka is a Research Associate in the School of Computing Science, University of Glasgow. She received her PhD in Computing Science from the same University with the topic "Interactive Animated Visualizations of Probabilistic Models".
View bio
Clarisse de  Vries

Clarisse de Vries

Lecturer in Data Science, University of Glasgow
Dr Clarisse de Vries is a Lecturer in Data Science at the University of Glasgow, specialising in the development and evaluation of Artificial Intelligence (AI) for breast cancer screening.
View bio

 

Agenda

Day One Schedule

9:00 am - 9:30 am

Summary

Civic Welcome from the Lord Provost’s Office followed by welcome to the Summit from the AI Standards Hub partners.

Plenary Room
Speakers
Sundeep Bhandari
Sundeep Bhandari
Chief Digital Innovation Officer, National Physical Laboratory (NPL)
View bio
Suzi Daley
Suzi Daley
External Affairs Manager, United Kingdom Accreditation Service (UKAS)
View bio
Chris Nathan
Chris Nathan
Policy Fellow, The Alan Turing Institute
View bio
David Cuckow
David Cuckow
Director of Digital, Knowledge Solutions, Sectors & Standards Development, British Standards Institution (BSI)
View bio
See more details

9:30 am - 9:45 am

Summary
Plenary Room
Speaker
Peggy Hicks
Peggy Hicks
Director of the Thematic and Special Procedures Division, Office of the United Nations High Commissioner for Human Rights (OHCHR)
View bio
See more details

9:45 am - 10:35 am

Summary

Anchored in the Seoul Statement on Artificial Intelligence, this panel explores how Standards Development Organisations (SDOs) are turning shared global principles into coordinated, practical action for responsible AI. Adopted at the International AI Standards Summit in Seoul in December 2025, the Statement reaffirmed a collective commitment to developing human‑centred, safe, inclusive, and effective international AI standards. This session shines a spotlight on the people across global, regional, and national SDOs who are delivering on those commitments, particularly around inclusion and practical deployment.

Plenary Room
Speakers
Gilles Thonet
Gilles Thonet
Deputy Secretary-General, International Electrotechnical Commission (IEC)
View bio
 Ultan  Mulligan
Ultan Mulligan
Chief Services Officer, European Telecommunications Standards Institute (ETSI)
View bio
Cindy Parokkil
Cindy Parokkil
AI Policy Lead, International Organization for Standardization (ISO)
View bio
David Cuckow
David Cuckow
Director of Digital, Knowledge Solutions, Sectors & Standards Development, British Standards Institution (BSI)
View bio
Gillian  Docherty
Gillian Docherty
Chief Commercial Officer, University of Strathclyde
View bio
See more details

10:35 am - 10:55 am

Networking Break

10:55 am - 11:05 am

Summary

Introduction to the Scottish ecosystem, followed by a keynote presentation from Chris Boyland, the Scottish Government’s Head of AI and Digital Growth, discussing AI capabilities and opportunities for growth in Scotland.

Plenary Room
Speaker
Chris Boyland
Chris Boyland
Head of AI and Digital Growth, Scottish Government
View bio
See more details

11:05 am - 11:20 am

Summary

With the development of harmonised standards for the EU AI Act entering the finishing stretch, the focus of AI standardisation is broadening from trustworthiness and risk mitigation towards quality, including performance. This has the potential to boost market transparency and thus competition as a driver of innovation but also presents the challenge of standardising meaningful metrics across a plethora of domains and use cases. This talk outlines the transition to AI quality as the next frontier, the current state of play, and the gradual reflection in the standardisation landscape.

Plenary Room
Speaker
Sebastian Hallensleben
Dr Sebastian Hallensleben
Chair of CEN-CENELEC JTC 21 "Artificial Intelligence", CEN-CENELEC
View bio
See more details

11:20 am - 12:10 pm

Summary

AI evaluation is highly context specific. Risks and performance requirements vary across sectors, applications and deployment environments. This makes national approaches to testing and assurance, and their international alignment, particularly important.

This panel will examine how different countries are developing frameworks for testing and assuring AI systems. It will look at national strategies, regulatory requirements and the technical methods used to support safe and trustworthy AI.

Panellists will outline their countries’ approaches, highlighting early lessons, emerging good practice and the challenges of aligning standards across borders. The session will also consider opportunities for international cooperation to improve interoperability and support responsible innovation.

Each panellist will give a five-minute overview of their national TEVV approach, followed by a moderated discussion.

Plenary Room
Speakers
Paul Duncan
Paul Duncan
Principal Scientist, National Physical Laboratory (NPL)
View bio
Stacie Hoffmann
Stacie Hoffmann
Head of the Centre for AI Measurement, National Physical Laboratory (NPL)
View bio
Anneke  Auer-Olvera
Anneke Auer-Olvera
Director, Programs, AI and Data Governance, Standards Council of Canada (SCC)
View bio
Wan Sie Lee
Wan Sie Lee
Cluster Director for AI Governance and Safety, Infocomm Media Development Authority
View bio
Jesse  Dunietz
Jesse Dunietz
Computer Scientist, National Institute of Standards and Technology (NIST, USA)
View bio
See more details

12:10 pm - 1:10 pm

Lunch

1:10 pm - 2:55 pm

Summary

This interactive session examines how AI assurance is being implemented in practice today, where gaps persist between expectations and reality, and what is needed to build a credible, effective, and globally aligned AI assurance ecosystem.

It will bring together perspectives from across the AI assurance ecosystem, including third-party assurance providers, AI developers, and deploying organisations, to explore:

  • What credible AI assurance looks like in practice today
  • Where existing standards and frameworks are helping and where gaps remain
  • What capabilities and mechanisms are most urgently needed to strengthen trust and comparability in AI assurance, and how these can be enabled by technical testing.

This session is designed as an evidence-gathering and sense-making exercise, combining expert insights with structured audience input to surface common challenges, gaps, and priorities across sectors.

Plenary Room
Speakers
Chris Barnes
Chris Barnes
Head of Science of AI, National Physical Laboratory (NPL)
View bio
Ana Alania
Ana Alania
AI Programme Manager, National Physical Laboratory (NPL)
View bio
Sebastian Hallensleben
Dr Sebastian Hallensleben
Chair of CEN-CENELEC JTC 21 "Artificial Intelligence", CEN-CENELEC
View bio
Jacob Pratt
Jacob Pratt
Policy Impact and Europe Lead, Partnership on AI
View bio
Avtar  Benning
Avtar Benning
Director, AI Assurance, Deloitte
View bio
Michaela  Coetsee
Michaela Coetsee
AI Ethics and Assurance Lead, Advai
View bio
Richard  Goodwin
Richard Goodwin
Global Head Predictive Data & AI, Executive Director, AstraZeneca
View bio
See more details

1:10 pm - 2:55 pm

Summary

Traditional AI systems typically generate outputs that humans then interpret and act on, so technical evaluation—accuracy, bias, robustness—captures much of what matters. But as AI systems become more autonomous, embedded, and influential, evaluation must expand beyond technical performance. It needs to account for the people who deploy these systems, the communities they affect, the institutional and regulatory contexts they operate within, and the assumptions embedded in data, model design, and implementation choices.

Taking this socio-technical approach calls for collaboration across disciplines and meaningful engagement with end-users and other key stakeholders, so that emerging metrics, standards, and regulation reflect real-world conditions and consequences.

Delivered by Responsible Ai UK, this workshop will bring together experts in AI and applied AI from across the UK to examine evaluation challenges across multiple sectors, including law enforcement, healthcare, and cultural heritage. Leads from major projects will present scenarios, research questions, and early findings from their work. Interactive roundtables will then involve participants in identifying shared concerns and practical priorities, helping to shape recommendations for policymakers, researchers, and industry.

Carron Room
Speakers
Sarvapali (Gopal) Ramchurn
Sarvapali (Gopal) Ramchurn
Responsible AI UK (RAI UK) CEO, Professor of Artificial Intelligence, University of Southampton
View bio
Simone  Stumpf
Simone Stumpf
Professor of Responsible and Interactive AI, University of Glasgow
View bio
Evdoxia  Taka
Evdoxia Taka
Research Associate, University of Glasgow
View bio
Radu  Calinescu
Radu Calinescu
Professor of Computer Science, University of York
View bio
Maria Liakata
Maria Liakata
Professor of Natural Language Processing, Queen Mary University of London
View bio
See more details

1:10 pm - 2:00 pm

Summary

AI and AI-powered technologies are now embedded across everyday life and work, from monitoring consumer sleep patterns to shaping employee decision making and organisational strategy. Yet much of today’s human–AI collaboration happens without sufficient awareness, skills, or mechanisms to question, contest, or meaningfully engage with AI outputs. This interactive session explores how AI standards and standards literacy can play a critical role in building human capacity to collaborate with AI systems responsibly and effectively. Grounded in the vision of Industry 5.0, the session brings together diverse perspectives to examine how skills development, education, and standards can empower humans not just to use AI, but to understand, govern, and coexist with it in ways that are fair, sustainable, human-centric, and resilient.

Dochart Room
Speakers
Laura Bishop
AI and Cyber Security Sector Lead, British Standards Institution (BSI)
Shivani Gupta
Senior Policy Advisor, Confederation of British Industry (CBI)
Prof. Stephen McArthur
Principal and Vice-Chancellor, University of Strathclyde
Margarete McGrath
Advisory & Strategy Partner Lead, Dell

2:05 pm - 2:55 pm

Summary

The rapid deployment of artificial intelligence across public and private sectors is reshaping how decisions are made about people’s lives, opportunities, and access to services. While AI systems offer significant potential for innovation and efficiency, they also raise serious concerns about the amplification of existing inequalities, the entrenchment of systemic bias, and the erosion of fundamental human rights.

This interactive session explores the intersection of human rights and artificial intelligence, examining how AI technologies can affect individual rights such as equality, non-discrimination, dignity, privacy, and access to remedy. The session will critically assess whether existing legal, regulatory, and standards frameworks are sufficient to protect these rights in the context of increasingly advanced AI systems.

Central to the discussion is the Seoul Declaration, agreed at the AI Seoul Summit in May 2024, which positions AI safety, innovation, and inclusivity as interdependent goals and calls for enhanced international cooperation to promote human-centric AI and human rights. Drawing on diverse perspectives from ethics, industry, philosophy, civil society, and standards development, the session will explore how the principles of the Seoul Declaration can be operationalised through standards, governance mechanisms, and international collaboration.

Dochart Room
Speakers
Sahar Danesh
Senior Government Engagement Manager and Digital Policy Lead, British Standards Institution (BSI)
Patricia Shaw
CEO, Global AI Governance and Standards Advisor, Lawyer, Beyond Reach Consulting Limited
Rania Wazir
Co-founder and CTO, Leiwand.ai
Enrico Panai
AI Ethics, President, Association of AI Ethicists
Tim Engelhardt
Human Rights Officer, Office of the United Nations High Commissioner for Human Rights (OHCHR)

2:55 pm - 3:15 pm

Networking break

3:15 pm - 3:25 pm

Summary
Plenary Room
Speaker
Sara Rendtorff-Smith
Head of the Division for AI and Emerging Digital Technologies, OECD

3:25 pm - 4:15 pm

Summary

A look at the evolving landscape of AI regulation around the world not only reveals significant differences in the approaches taken in different jurisdictions but also brings to light important needs and objectives that AI regulation cannot or may not be best placed to address. This panel will take a holistic look at the AI governance toolbox and explore the relationship between AI regulation, standards, and other AI governance tools. Featuring perspectives from different regions of the world, the discussion will examine relevant developments in AI regulation, the ā€˜limits’ of AI regulation, and the role of other governance tools in relation to these limits.

Plenary Room
Speakers
Florian Ostmann
Distinguished Policy Fellow, London School of Economics and Political Science
Luis Aranda
Senior Economist - Artificial Intelligence, OECD
Barbara Glover
Programme Officer, African Union (AUDA-NEPAD)
Eva Ignatuschtschenko
Director of Technology Insight, Competition and Markets Authority
Jacob Pratt
Policy Impact and Europe Lead, Partnership on AI

4:15 pm - 5:05 pm

Summary

This session will explore the invisible global framework of standards, conformity assessment, metrology, accreditation and market surveillance which underpins the quality and safety of products and services across the economy. It will examine how these elements are crucial to the development of a safe and secure AI assurance framework and look specifically at how the collaborative work of the AIQI Consortium is using the global quality ecosystem to develop a secure, safe and ethical AI governance framework.

Plenary Room
Speakers
Adam Leon Smith
Chair, AIQI Consortium
Matt Gantley
Chief Executive Officer, United Kingdom Accreditation Service (UKAS)
Thomas Doms
Global Product Lead AI Services - Managing Director, TRUSTFAI
Dr Jacqui Taylor
CEO, Co-Founder, FlyingBinary
Rania Wazir
Co-founder and CTO, Leiwand.ai

5:05 pm - 5:15 pm


5:15 pm - 7:15 pm

Summary

An evening celebration with entertainment, drinks and street food


Day Two Schedule

9:00 am - 9:10 am


9:10 am - 9:25 am

Summary
Plenary Room
Speaker
Prof Catherine RƩgis
Co-director, Canadian Artificial Intelligence Safety Institute, Research Programme

9:25 am - 10:15 am

Summary

This panel brings together leading voices in AI governance and standardisation to unpack the evolving landscape of AI risk management. Panellists will explore emerging standards and frameworks for identifying, assessing, and mitigating AI‑related risks, sharing practical insights from real‑world implementation in various deployment contexts. The discussion will also examine common challenges faced by organisations and highlight strategies for building and sustaining effective governance processes.

Plenary Room
Speakers
Arcangelo Leone De Castris
Research Manager, The Alan Turing Institute
James Gealy
Standardization Lead, Safer AI
Sue Daley OBE
Director Tech and Innovation, techUK
Joslyn Barnhart
Frontier Standards and Governance Lead, Google DeepMind
Vasileios Rovilos
EU Policy Director, Credo AI

10:15 am - 10:30 am

Networking Break

10:30 am - 11:30 am

Summary

Artificial intelligence is increasingly embedded across healthcare systems, from diagnostics and clinical decision support to operational optimisation and patient engagement. While AI offers the potential to improve outcomes, efficiency, and access to care, its adoption in healthcare presents unique challenges related to safety, effectiveness, accountability, trust, and regulation. This sectoral focus panel will examine the practical application of AI standards and assurance mechanisms in healthcare, addressing how standards can support safe deployment, regulatory compliance, and real-world clinical impact. The session will bring together perspectives from clinical practice, health innovation leadership, validation and assurance, and AI development to explore how standards can bridge the gap between innovation and trustworthy use in healthcare settings.

Plenary Room
Speakers
Harry Hothi
Healthcare Sector Lead, British Standards Institution (BSI)
Sean Duncan
Clinical Research Fellow, Digital Health Validation Lab (DHVL)
Clarisse de Vries
Lecturer in Data Science, University of Glasgow
Moulham Alsuleman
Higher Scientist, National Physical Laboratory (NPL)

10:30 am - 11:30 am

Summary

Autonomous systems are transforming how we move, but each transport domain has developed distinct safety cultures, regulatory frameworks, and approaches to verification. This panel brings together experts from across the landscape to explore how standards can enable safe deployment while fostering innovation. We will examine how we assure systems that learn and adapt; what different sectors can learn from others’ approaches; and how we develop standards that are rigorous enough to ensure public safety yet flexible enough to accommodate rapidly evolving technology.

Carron Room
Speakers
Simon Burton
Chair of Systems Safety, University of York
Chris Nathan
Policy Fellow, The Alan Turing Institute
Yoko Kaneko
Assured Autonomy Strategy and Business Development Manager, National Physical Laboratory (NPL)
Duncan Duffy
Head of Technology, Lloyds Register
Michael Orgill
CAV Project Engineer, HORIBA MIRA Ltd.

10:30 am - 11:30 am

Summary

Securing AI in UK Critical National Infrastructure: AI is creating major opportunities to improve the efficiency and resilience of UK Critical National Infrastructure (CNI), from predictive maintenance to decision support and planning. Yet AI-related threats are rapidly emerging, both from deploying AI into CNI environments and from attackers using AI to increase their scale and sophistication. Governance, protective controls, and standards are struggling to keep pace. This panel and interactive workshop will examine real-world CNI deployment challenges, deep-diving into key implementation challenges for the upcoming ETSI/DSIT AI security standard to identify the most material obstacles and best practices for ensuring adherence. The session, run by the Laboratory for AI Security Research (LASR), brings together voices from industry providers and adopters, government, and academia to produce an actionable overview of key risks and the most important gaps to address in securing AI for CNI.

Dochart Room
Speakers
Darren Lewis
Senior Innovation Lead - Technical and Commercial, Plexal
Paul Miller
Director Momentum One Zero, Centre for Secure Information Technologies, Queen’s University Belfast
Louise Axon-Jones
Research Fellow, Global Cyber Security Capacity Centre, University of Oxford
Joe Fulwood
Global Engagement and Communications Officer, Global Cyber Security Capacity Centre, University of Oxford
Issy Hall
Policy lead - Emerging technology cyber standards, Department for Science, Innovation and Technology

11:30 am - 12:00 pm

Summary

Moderated by Prof Catherine RĆ©gis, the closing panel will reflect on the Summit’s proceedings, discussing key takeaways and necessary actions based on our experiences over the past two days. Panellists will include CEOs, directors, and senior staff from the AI Standards Hub partner organisations: the National Physical Laboratory, the Alan Turing Institute, BSI, and the United Kingdom Accreditation Service.

Plenary Room
Speakers
Prof Catherine RƩgis
Co-director, Canadian Artificial Intelligence Safety Institute, Research Programme
Stacie Hoffmann
Head of the Centre for AI Measurement, National Physical Laboratory (NPL)
Adam Sobey
Mission Director, The Alan Turing Institute
David Bell
Standards Policy Director, British Standards Institution (BSI)
Matt Gantley
Chief Executive Officer, United Kingdom Accreditation Service (UKAS)

12:00 pm - 1:00 pm

Lunch

1:15 pm - 6:00 pm

Summary

For more information on the programme, please visit the UK Digital Standards Summit event page.

Plenary Room

As part of the AI Standards Hub summit this year, we are pleased to run an exhibition for our delegates to find out more about our delivery partners and local research institutions. Please join us in the main hall on both days, where the following companies will be on hand to discuss their capabilities, AI programmes and opportunities for collaboration:


The National Physical Laboratory (NPL) is the UK’s National Metrology Institute (NMI), developing and maintaining the national primary measurement standards, as well as collaborating with other NMIs to maintain the international system of measurement. As a public sector research establishment, we deliver extraordinary impact by providing the measurement capability that underpins the UK’s prosperity and quality of life. We develop the metrology required to ensure the timely and successful deployment of new technologies and work with organisations as they develop and test new products and processes.

The BSI builds trust in digital transformation by convening stakeholders to agree priorities for action, then creating and sharing consensus-based best practice. BSI is an independent, trusted partner to government and industry, and helps protect consumers from digital harms. BSI’s work addresses people and processes, as well as technology, to increase understanding, confidence and value for all. As the National Standards Body, BSI also represents the UK on the world stage, playing a leading role in the development and take-up of international standards.

The Alan Turing Institute is the UK’s national institute for data science and artificial intelligence. The Institute is named in honour of Alan Turing, whose pioneering work in theoretical and applied mathematics, engineering and computing is considered to have laid the foundations for modern-day data science and artificial intelligence. The Institute’s purpose is to make great leaps in data science and AI research to change the world for the better. Its goals are to advance world-class research and apply it to national and global challenges, build skills for the future by contributing to training people across sectors and career stages, and drive an informed public conversation by providing balanced and evidence-based views on data science and AI.

The United Kingdom Accreditation Service (UKAS) is the National Accreditation Body for the UK, as appointed by the UK Government. Its role is to assess that organisations providing conformity assessment services (certification, testing, inspection, calibration and verification) are meeting a required standard of performance. UKAS accreditation demonstrates an organisation’s competence, impartiality and performance capability against nationally and internationally recognised standards. For further information please visit www.ukas.com.

 

The goals of the AdSoLve project are two-fold. The first is to create an extensive evaluation framework (including benchmarks with suitable novel criteria, metrics, and tasks, as well as transparent reference-free evaluation) for assessing the limitations of LLMs in real-world settings, particularly in medical and legal applications. To ensure the proposed evaluation framework covers the right criteria, the AdSoLve team is working closely with legal and medical professionals and other stakeholders who shape the development and adoption of LLM products and services. This involves both a programme of co-creation workshops and interviews with key informants to identify key requirements for these products and services, and the development of novel AI methodology for assessing the quality of generated content beyond traditional benchmarks and metrics. The second goal is to devise novel mitigating solutions that address the identified LLM limitations and can be incorporated into products and services, based on new machine learning methodology and informed, via co-creation with domain experts, by expertise in law, ethics, and healthcare. The methodology includes the development of modules for temporal reasoning and situational awareness in long-form text, dialogue, and multi-modal data, as well as alignment with human preferences, bias reduction, and privacy preservation.

Artificial intelligence (AI) applications have become ubiquitous in their impact on individuals and society, highlighting a crucial need for their responsible development. Recent research has called for participatory AI auditing, empowering individuals without AI expertise to audit AI applications throughout the entire AI development pipeline. The RAI UK-funded Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, a collaboration of the Universities of Glasgow, Edinburgh, Sheffield, Stirling, Strathclyde, York and King’s College London, investigates how to support such auditors through participatory AI auditing tools and processes. We are pleased to demonstrate our work at the AI Standards Hub summit.

The Laboratory for AI Security Research (LASR) is a collaboration between the public and private sectors in the UK to bring together the best minds in AI security. LASR is dedicated to mitigating security risks to and from artificial intelligence (AI) to strengthen national security and support economic growth. Launched in November 2024 at the NATO Cyber Defence Conference, the initiative brings together world-leading experts from UK organisations including Plexal, the University of Oxford, The Alan Turing Institute, Queen’s University Belfast and the UK Government, alongside a broad network of academic, industry, and international partners. LASR conducts cutting-edge research at the intersection of AI and cyber security, develops novel capabilities and skills, accelerates research commercialisation, and fosters international collaboration for the secure development and deployment of AI.

The Department for Science, Innovation and Technology (DSIT) leads the Government’s strategic engagement on digital technical standards. It works with the multistakeholder community to shape the development of global digital technical standards in the areas that matter most for upholding our democratic values, ensuring our cyber security, and advancing UK strategic interests through science and technology.

The National Manufacturing Institute Scotland (NMIS) is a trusted R&D partner helping manufacturers across the UK de-risk innovation, strengthen supply chains, and accelerate the adoption of advanced technologies.

Operated by the University of Strathclyde and part of the High Value Manufacturing (HVM) Catapult, NMIS supports businesses through expertise in digital & data driven manufacturing, materials & process engineering, advanced manufacturing, and circularity, including exploring technologies such as artificial intelligence and advanced data to improve productivity and efficiency.


The Data Lab is Scotland’s Innovation Centre for Data and AI
At The Data Lab, we believe in a thriving, connected society powered by data and AI – where cross-sector conversations, creativity, and collaboration ignite and sustain the sparks of ground-breaking ideas, products and societal advances. We’re uniquely positioned to support and incubate responsible innovation in data and AI across the public, private and academic sectors. We work with students, professionals, universities and colleges, businesses and our wider community to build a stronger and more collaborative economy and future for Scotland and the world.

Partnership on AI (PAI) is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society. By convening diverse, international stakeholders, we seek to pool collective wisdom to make change. We are not a trade group or advocacy organization. We develop tools, recommendations, and other resources by inviting voices from across the AI community and beyond to share insights that can be synthesized into actionable guidance. We then work to drive adoption in practice, inform public policy, and advance public understanding. Through dialogue, research, and education, PAI is addressing the most important and difficult questions concerning the future of AI.

Summit partners

OECD
Office of the United Nations High Commissioner for Human Rights