Table 2 Summary of EU AI Act requirements for high-risk AI systems
Requirement | Summary |
---|---|
1. Risk management system (Art 9) | • Continuous, iterative process throughout the product lifecycle • Identification and analysis of known and reasonably foreseeable risks (under intended use and foreseeable misuse) • Adoption of appropriate, targeted risk management measures • Testing to determine the most suitable measures, including consideration of the potential impact on minors and other vulnerable groups |
2. Data and data governance (Art 10) | • Quality criteria for the training, validation, and testing data sets used by AI systems • Data governance practices covering design choices, collection processes, preparation operations, assumptions, bias detection and mitigation, and the handling of data gaps • Data must be relevant, sufficiently representative, as free of errors and complete as possible, and tailored to the AI system’s intended purpose and target groups |
3. Technical documentation (Art 11) | • General description: intended purpose, provider, version, hardware/software interaction, software/firmware versions, forms in which the system is placed on the market or put into service, intended hardware, product component visuals, user-interface description, deployer instructions • Development and design process: development methods, design specifications, system architecture, data requirements and handling, human oversight assessment, pre-determined changes to the system and its performance, validation and testing procedures, cybersecurity measures • Monitoring and control details: performance capabilities and limitations, unintended outcomes and risk sources, human oversight measures, input data specifications |
4. Record-keeping (Art 12) | • Automatic logging of events throughout the AI system’s lifecycle (a minimal logging sketch is given after this table) • Logging must enable identification of situations that may result in the AI system presenting a risk or in a substantial modification, facilitate post-market monitoring, and support monitoring of the system’s operation |
5. Transparency and provision of information to deployers (Art 13) | • Provision of information on the system’s characteristics, capabilities, and performance limitations so that deployers can understand how the AI system works, evaluate its functionality, and comprehend its strengths and limitations • Provision of instructions for use |
6. Human oversight (Art 14) | • Design and development of human–machine interface tools to enable effective human supervision to prevent or minimise risks • Empower users to understand the capabilities and limitations of the system, monitor its operation, be aware of and manage automation biases, interpret output accurately, make informed decisions about the use of the system, including disregarding or reversing its output, and safely intervene or stop the system when necessary |
7. Accuracy, robustness, and cybersecurity (Art 15) | • Benchmarking of accuracy and consistent performance throughout the lifecycle • Resilience to errors, faults, inconsistencies, and unauthorised changes through technical and organisational measures, redundancy, back-up solutions, and safeguards against feedback loops and cybersecurity threats • Defence against data and model poisoning, adversarial examples, confidentiality attacks, and model flaws |
8. Quality management system (Art 17) | • Proportionate to the size of the provider’s organisation and its sector; documented through written policies, procedures, and instructions • Regulatory compliance strategies; design and development techniques; quality control and assurance processes; validation, verification, and testing procedures; application of technical specifications and standards; measures to ensure compliance with requirements not fully covered by harmonised standards |
9. Corrective actions and duty of information (Art 20) | • If an AI system placed on the market or put into service is not in conformity with the AI Act, the provider must take the necessary corrective actions (bringing it into conformity, withdrawing, disabling, or recalling it) and inform the distributors, deployers, authorised representative, and importers concerned |
10. Authorised representatives (Art 22) | • Providers established in third countries must appoint an authorised representative in the Union to provide all necessary documentation and information and to verify conformity with the AI Act |
11. Fundamental rights impact assessment (Art 27) | • Certain deployers must assess the system’s impact on fundamental rights, covering how and why the system will be used, usage frequency and duration, affected individuals or groups, specific risks of harm and measures for risk mitigation, human oversight implementation, response plans for risk materialisation |
12. Conformity assessment (Art 43) | • Compliance assessment of the quality management system, technical documentation, and post-market monitoring by the provider or a notified body |
13. EU declaration of conformity (Art 47) and CE marking of conformity (Art 48) | • To indicate the conformity of the product with all applicable requirements set out in the relevant EU harmonisation legislation |
14. EU database registration (Art 49) | • Providers of Annex III high-risk systems (i.e., those not covered by existing Union harmonisation legislation) and providers that consider an Annex III system not to be high risk must register in a forthcoming Commission-managed EU database, increasing transparency and facilitating the work of the Commission and the Member States |
15. Post-market monitoring (Art 72) | • Proportionate to the risks; collection and analysis of data on the AI system’s performance throughout its lifetime (an illustrative performance-monitoring sketch follows the table) • Implementation of a monitoring plan as part of the technical documentation, based on a Commission template |
16. Reporting of serious incidents (Art 73) | • Reporting of all serious incidents (e.g., death or serious harm to health, serious damage to property or the environment) to the market surveillance authorities, investigation of the incident, assessment of the risks, and implementation of corrective actions |
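
The record-keeping requirement (Art 12, row 4) asks for automatic, lifecycle-long event logging that supports traceability and post-market monitoring. The sketch below is a minimal illustration only, not a mechanism prescribed by the Act or by this article: it assumes a hypothetical append-only JSON-lines log and hash-chains entries so that tampering with past records becomes detectable. File name, event fields, and the chaining scheme are all illustrative assumptions.

```python
# Minimal sketch of automatic event logging for traceability (Art 12).
# Log path, event fields, and hash chaining are illustrative assumptions,
# not requirements spelled out in the AI Act or the article.
import hashlib
import json
import time
from pathlib import Path

LOG_PATH = Path("ai_event_log.jsonl")  # assumed append-only log file


def _last_hash() -> str:
    """Return the hash of the most recent entry, or a fixed seed if the log is empty."""
    if not LOG_PATH.exists():
        return "0" * 64
    lines = LOG_PATH.read_text().splitlines()
    return json.loads(lines[-1])["entry_hash"] if lines else "0" * 64


def log_event(event_type: str, case_id: str, model_version: str, payload: dict) -> dict:
    """Append a timestamped, hash-chained event so records are traceable and tamper-evident."""
    entry = {
        "timestamp": time.time(),
        "event_type": event_type,    # e.g. "prediction", "override", "error"
        "case_id": case_id,          # reference to the input data, not the data itself
        "model_version": model_version,
        "payload": payload,          # e.g. model output, confidence, deployer action
        "previous_hash": _last_hash(),
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


if __name__ == "__main__":
    log_event("prediction", case_id="patient-0042", model_version="1.3.0",
              payload={"risk_score": 0.87, "threshold": 0.8, "flagged": True})
```

Logging references to cases rather than raw patient data keeps the record useful for audit while limiting the personal data held in the log itself.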
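Post-market monitoring (Art 72, row 15) likewise implies ongoing collection and analysis of performance data against pre-specified acceptance criteria. The sketch below is again only illustrative: it compares the sensitivity and specificity observed in deployment with assumed thresholds and flags a breach; the threshold values, field names, and the alerting hook are assumptions, not figures taken from the Act or the article.

```python
# Minimal sketch of post-market performance monitoring (Art 72).
# Thresholds, field names, and the alerting hook are illustrative assumptions.
from dataclasses import dataclass
from typing import Iterable

# Pre-specified acceptance criteria (assumed values for illustration only).
MIN_SENSITIVITY = 0.90
MIN_SPECIFICITY = 0.85


@dataclass
class Outcome:
    predicted_positive: bool
    actually_positive: bool


def monitor(outcomes: Iterable[Outcome]) -> dict:
    """Compute deployed sensitivity/specificity and flag any breach of the acceptance criteria."""
    tp = fp = tn = fn = 0
    for o in outcomes:
        if o.predicted_positive and o.actually_positive:
            tp += 1
        elif o.predicted_positive and not o.actually_positive:
            fp += 1
        elif not o.predicted_positive and o.actually_positive:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    report = {
        "n": tp + fp + tn + fn,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "breach": sensitivity < MIN_SENSITIVITY or specificity < MIN_SPECIFICITY,
    }
    if report["breach"]:
        # In practice a breach would feed the provider's corrective-action and
        # serious-incident processes (Arts 20 and 73); here we only print.
        print("Performance below acceptance criteria:", report)
    return report
```

A breach flagged by such monitoring is the point at which the corrective-action duties (Art 20) and, where harm has occurred, the serious-incident reporting duties (Art 73) summarised above would be triggered.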