Information technology – Artificial intelligence (AI) – Bias in AI systems and AI aided decision making
Last updated: 18 Jul 2024
Development Stage
Published: 5 Nov 2021
Scope
What is ISO/IEC TR 24027 about?
ISO/IEC TR 24027 addresses bias in AI systems, particularly in AI-aided decision-making. It describes measurement techniques and methods for assessing bias, with the aim of addressing and treating bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but not limited to data collection, training, continual learning, design, testing, evaluation, and use. Bias in AI systems is an active area of research, and ISO/IEC TR 24027 articulates current best practices for detecting and treating bias in AI systems and AI-aided decision-making, regardless of its source. ISO/IEC TR 24027 covers topics such as:
- An overview of bias and fairness
- Potential sources of unwanted bias and terms to specify the nature of potential bias
- Assessing bias and fairness through metrics
- Addressing unwanted bias through treatment strategies
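To make the "assessing bias and fairness through metrics" topic concrete, below is a minimal sketch of two widely used group-fairness metrics, demographic parity difference and disparate impact ratio. The function names, data, and code are this example's own illustration, not definitions taken from ISO/IEC TR 24027.

```python
def selection_rate(predictions, groups, group):
    """Fraction of positive (favourable) predictions for members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in selection rates between two groups (0 means parity)."""
    return abs(selection_rate(predictions, groups, group_a)
               - selection_rate(predictions, groups, group_b))

def disparate_impact_ratio(predictions, groups, protected, reference):
    """Ratio of selection rates; values far below 1.0 flag potential bias."""
    return (selection_rate(predictions, groups, protected)
            / selection_rate(predictions, groups, reference))

# Binary predictions (1 = favourable outcome) for applicants in two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_difference(preds, groups, "a", "b"))  # 0.5
print(disparate_impact_ratio(preds, groups, "b", "a"))         # ~0.333
```

Metrics like these quantify *outcome* disparities only; as the TR notes, whether a measured disparity constitutes unwanted bias depends on the system's objectives and context.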
Who is ISO/IEC TR 24027 for?
ISO/IEC TR 24027 on Artificial Intelligence (AI) bias is useful for:
- AI practitioners
- Machine learning engineers
- AI researchers
- Data scientists
Why should you use ISO/IEC TR 24027?
Bias in artificial intelligence (AI) systems can manifest in different ways. AI systems that learn patterns from data can reflect existing societal bias against groups. While some bias is necessary to meet the AI system's objectives (i.e. desired bias), bias that is not intended by those objectives represents unwanted bias in the AI system.
Applying ISO/IEC TR 24027 in the development and deployment of AI systems presents opportunities to identify and treat unwanted bias, enabling stakeholders to benefit from AI systems according to their objectives.
The stakes are high for business too, as misguided results from AI could lead to damaged reputations and costly failures.
ISO/IEC TR 24027 guides businesses to seek more representative training data during data understanding, use more inclusive labels during data preparation, experiment with causal inference and adversarial AI in the modeling phase, and account for intersectionality in the evaluation phase.
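One concrete pre-processing treatment strategy of the kind the TR surveys is reweighing (Kamiran & Calders): training samples are weighted so that group membership and label become statistically independent. This sketch is an illustrative example of such a strategy, not a method prescribed by ISO/IEC TR 24027; the data and function name are the example's own.

```python
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per sample: w(g, y) = P(g) * P(y) / P(g, y).

    Under-represented (group, label) pairs receive weights above 1.0,
    over-represented pairs below 1.0, balancing the joint distribution.
    """
    n = len(labels)
    p_g = Counter(groups)                # counts per group
    p_y = Counter(labels)                # counts per label
    p_gy = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [(p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
            for g, y in zip(groups, labels)]

# Group "a" has mostly favourable labels, group "b" mostly unfavourable.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
print(weights)  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

Training a model with these sample weights (most ML libraries accept a per-sample weight argument) reduces the label imbalance across groups without altering the data itself.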
©ISO/IEC 2022. All rights reserved.