From principles to practice: an interdisciplinary framework to operationalise AI ethics
Artificial intelligence (AI) increasingly pervades all areas of life. To seize the opportunities this technology offers society, while limiting its risks and ensuring citizen protection, various stakeholders have presented guidelines for AI ethics. Nearly all of them consider a similar set of values to be crucial and a minimum requirement for “ethically sound” AI applications – including privacy, reliability and transparency. However, how organisations that develop and deploy AI systems should implement these precepts remains unclear. This lack of specific and verifiable principles endangers the effectiveness and enforceability of ethics guidelines. To bridge this gap, this paper proposes a framework specifically designed to translate ethical principles into actionable practice when designing, implementing and evaluating AI systems. We have prepared this report as experts in fields ranging from computer science, philosophy and technology impact assessment via physics and engineering to the social sciences, working together as the AI Ethics Impact Group (AIEI Group). Our paper offers concrete guidance to decision-makers in organisations developing and using AI on how to incorporate values into algorithmic decision-making, and how to measure the fulfilment of those values using criteria, observables and indicators combined with a context-dependent risk assessment. It thereby presents practical ways of monitoring ethically relevant system characteristics, providing a basis for policymakers, regulators, oversight bodies, watchdog organisations and standards development organisations. The framework thus works towards better control, oversight and comparability of different AI systems, and also forms a basis for informed choices by citizens and consumers.
This content is available under a Creative Commons Attribution-ShareAlike 4.0 International licence.
Date published: 1 Apr 2020