Securing Artificial Intelligence (SAI) – Traceability of AI Models
Last updated: 31 Oct 2024
Scope
The NWI will study the role of traceability in the challenge of securing AI and explore issues related to sharing and re-using models across tasks and industries. The scope includes threats, and their associated remediations where applicable, to the ownership rights of AI creators as well as to the verification of a model's origin, integrity, or purpose. Mitigations can be non-AI-specific (Digital Rights Management applied to AI) or AI-specific techniques (e.g. watermarking), covering both the prevention and detection phases, and can be either model-agnostic or model-enhancement techniques. Threats and mitigations specific to the collaborative learning setting, which involves multiple data and model owners, could also be explored. The NWI will align terminology with existing ETSI ISG SAI documents and studies, and reference and complement previously studied attacks and remediations (ETSI GR SAI 004, ETSI GR SAI 005). It will also gather industrial and academic feedback on traceability, ownership rights protection, and model verification (including the integrity of model metadata) in the context of AI.

© Copyright 2024, ETSI
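As a minimal illustration of the kind of non-AI-specific mitigation in scope, model integrity verification can be sketched as a published-checksum check: the model creator publishes a cryptographic digest of the artifact, and a consumer re-computes it before use. The file name and helper functions below are hypothetical, not part of any ETSI specification.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Compute the SHA-256 digest of a model artifact, streaming in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_digest: str) -> bool:
    """Return True if the artifact matches the digest published by the model owner."""
    return sha256_digest(path) == expected_digest

# Hypothetical flow: the creator publishes the digest alongside the model;
# a consumer re-checks it before loading the weights.
model_file = Path("model.bin")          # stand-in artifact name
model_file.write_bytes(b"weights...")   # stand-in for real model weights
published = sha256_digest(model_file)   # digest published by the creator
assert verify_model(model_file, published)
```

Note that such a checksum only detects tampering with the artifact as distributed; AI-specific techniques such as watermarking, also in scope, aim instead to establish ownership of the model's learned behaviour even after re-training or format conversion.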