Guidelines for secure AI system development
Overview
This document recommends guidelines for providers of any systems that use AI, whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.
These guidelines should be considered in conjunction with established cyber security, risk management, and incident response best practice. In particular, we urge providers to follow the 'secure by design' principles developed by the US Cybersecurity and Infrastructure Security Agency (CISA), the UK National Cyber Security Centre (NCSC), and all our international partners. The principles prioritise:
- taking ownership of security outcomes for customers
- embracing radical transparency and accountability
- building organisational structure and leadership so that 'secure by design' is a top business priority.
Following 'secure by design' principles requires significant resources throughout a system's life cycle. It means developers must invest in prioritising the features, mechanisms, and tooling that protect customers at each layer of the system design, and across all stages of the development life cycle. Doing this will prevent costly redesigns later, as well as safeguarding customers and their data in the near term.
This content is available under the Open Government Licence V3
Key Information
Jurisdiction: UK - UK-wide
Date published: 27 Nov 2023
License: Crown Copyright 2023