Welcome to our regular newsletter. Read on to find out what has been happening and what is coming up with the AI Standards Hub.
AI Standards Hub Global Summit: Official partners
We are proud to be collaborating with five confirmed partner organisations in convening the AI Standards Hub Global Summit, taking place on 16 and 17 March in Glasgow and online. The OECD, the Office of the United Nations High Commissioner for Human Rights, Partnership on AI, The Data Lab and Responsible AI UK are contributing to shaping key elements of the Summit and will be represented through senior speakers in the programme.
The event page includes the latest agenda, a list of our speakers and panelists, and a form to register for online attendance.
AI Standards Hub Global Summit: Preliminary agenda and speakers
The AI Standards Hub’s Global Summit is fast approaching and will feature an exciting lineup of experts from around the world, including keynote speeches by Peggy Hicks (Office of the United Nations High Commissioner for Human Rights), Sara Rendtorff-Smith (OECD), Prof Catherine Regis (Canadian AISI, Research Program) and Sebastian Hallensleben (CEN-CENELEC JTC 21).
Day 1 will focus on the current landscape of AI standardisation, measurement, and assurance, highlighting practical applications, emerging best practice, and the role of standards and measurement in enabling trustworthy and interoperable AI systems, as well as the needs of AI assurance. Day 2 will build on these discussions through expert panels and interactive sessions, examining opportunities for deeper collaboration across governments, standards organisations, industry, and the research community. The programme will also explore the latest best practices for identifying and mitigating AI risks, offering insights into emerging risk management frameworks, implementation challenges, and the role of standards in supporting trustworthy AI deployment.
AI governance around the world: UK report released
As countries worldwide strive to balance the trade-offs of AI regulation, the UK is betting on a distinct, sector-led approach. To better understand what that actually looks like on the ground, our latest country profile in the series 'AI governance around the world' offers a closer look at the UK's approach to AI regulation.
Key insights from the report include an analysis of the high-level aims and principles defining the UK’s AI strategy, a detailed look at key policy initiatives and legal instruments, and an overview of the national standardisation system and its integration with broader governance goals.
Creative Grey Zones: Copyright in the Age of Hybridity report released
The rise of artificial intelligence (AI), especially since the generative AI boom, has catalysed a global re-examination of copyright law. As AI models increasingly ingest content and generate outputs based on the ingested content, the efficacy of legal frameworks that have traditionally governed authorship is being tested.
This report has been written against the backdrop of the UK’s copyright and AI consultation, a process that exemplifies the re-evaluation of the UK’s copyright regime in response to AI. Framing the discussion within this live policy process ensures that the findings are timely and relevant to ongoing decision-making.
AI Governance for Business: Scoping AI Use Cases and Managing Risks
Innovate UK BridgeAI has just released the final version of their AI Use Case Framework.
The Framework is an actionable tool that organisations can leverage to think strategically about how AI technologies can be operationalised in a business context, what risks and challenges should be addressed as part of this process, and what strategies can be implemented to do so.
New Centre for AI Measurement announced by the National Physical Laboratory
As high-capability AI systems become more integrated into the economy, the new Centre for AI Measurement will address the critical need for scientifically robust approaches for AI assurance, the process of identifying and mitigating risks before they impact businesses or the public. As the UK’s National Metrology Institute, NPL already plays a vital role in developing tools and approaches to increase confidence in AI. By underpinning AI assurance with metrology (the science of measurement), the new centre aims to give the UK a distinct competitive advantage, positioning the nation as the global home for AI testing, evaluation and security.
The Centre for AI Measurement will convene key national and international stakeholders across government (including DSIT and the AI Security Institute), regulators, and industry. The centre will offer the collaborative environment required to undertake research while also providing expertise and support to empower industry, startups, and researchers looking to develop and pilot technical assurance tools, with the aim of bringing trustworthy AI assurance tools to market faster.
Sign up for a user account
Creating a user account on the AI Standards Hub website will enable you to take advantage of a wide range of interactive features. Benefits include the ability to save standards and other database items to your personal dashboard, notifications when followed standards open for comment or move from draft to publication, being able to contribute to our community discussion forums, access to our e-learning modules, and much more!
Was this newsletter forwarded to you by someone else? Subscribe here to receive future editions directly in your inbox.