Deliverable 1: principles for the evaluation of artificial intelligence or machine learning-enabled medical devices to assure safety, effectiveness and ethicality
Overview
As part of the G7's health track artificial intelligence (AI) governance workstream 2021, member states committed to the creation of 2 deliverables on the subject of governance:
1. The first paper seeks to define good practice for clinically evaluating artificial intelligence or machine learning (AI/ML)-enabled medical devices.
2. The second focuses on how to assess the suitability of AI/ML-enabled medical devices developed in one G7 country for deployment in another G7 country.
These papers are complementary and should therefore be read in combination to gain a more complete picture of the G7's stance on the governance of AI in health. This paper is the result of a concerted effort by G7 nations to contribute to the creation of harmonised principles for the evaluation of AI/ML-enabled medical devices, and the promotion of their effectiveness, performance, safety and ethicality. It builds on existing international work led by the:
- Global Digital Health Partnership (GDHP)
- Institute of Electrical and Electronics Engineers (IEEE)
- International Medical Device Regulators Forum (IMDRF)
- International Organization for Standardization (ISO)
- International Telecommunication Union (ITU)
- Organisation for Economic Co-operation and Development (OECD)
- World Health Organization (WHO)
A total of 3 working group sessions were held to reach consensus on the content of this paper.
Key Information
Jurisdiction: International, UK
Date published: 30 Dec 2021