ETSI TR 104 032 V1.1.1

Securing Artificial Intelligence (SAI) – Traceability of AI Models

Last updated: 31 Oct 2024

Development Stage

Published: 20 Feb 2024

Scope

The NWI will study the role of traceability in the challenge of Securing AI and explore issues related to sharing and re-using models across tasks and industries. The scope includes threats, and their associated remediations where applicable, to the ownership rights of AI creators as well as to verification of a model's origin, integrity or purpose. Mitigations may be non-AI-specific (e.g. Digital Rights Management applied to AI) or AI-specific techniques (e.g. watermarking), covering both the prevention and detection phases. They may be either model-agnostic or model-enhancement techniques. Threats and mitigations specific to the collaborative learning setting, which involves multiple data and model owners, could also be explored. The NWI will align terminology with existing ETSI ISG SAI documents and studies, and reference/complement previously studied attacks and remediations (ETSI GR SAI 004, ETSI GR SAI 005). It will also gather industrial and academic feedback on traceability, ownership rights protection and model verification (including integrity of model metadata) in the context of AI. © Copyright 2024, ETSI

Categorisation

Domain: Horizontal

Key Information

Organisation: ETSI
Free to access: Yes
