High-risk AI systems
"High-Risk" is the central regulatory category of the EU AI Act. AI systems in this tier are permitted on the European market but are subject to strict mandatory requirements before they can be deployed.
An AI system is classified as High-Risk if it poses a significant risk of harm to the health, safety, or fundamental rights of natural persons. The classification is not based on the technology itself (e.g., "all deep learning") but on the use case: the specific context in which the AI is applied.
Classification Criteria (Article 6)
A system falls into the High-Risk category in one of two ways (a minimal triage sketch follows this list):
- Safety Components of Regulated Products (Annex I): The AI system is a safety component of a product (or is the product itself) that is already subject to third-party conformity assessments under existing EU health and safety harmonization legislation.
- Examples: AI used in medical devices, cars, machinery, toys, or lifts.
- Stand-Alone High-Risk Systems (Annex III): Specific AI systems listed in Annex III that are used in sensitive areas:
- Biometrics: Remote biometric identification, emotion recognition, and biometric categorization (in each case, to the extent not prohibited under Article 5).
- Critical Infrastructure: Safety components in the management of road traffic, water, gas, heating, or electricity supply.
- Education & Vocational Training: Systems determining access to education (e.g., grading exams, assigning students to schools).
- Employment: Tools used for recruitment (screening CVs), making promotion/termination decisions, or monitoring performance.
- Essential Private & Public Services: Credit scoring, evaluating eligibility for public benefits, or risk assessment in life/health insurance.
- Law Enforcement: Assessing the reliability of evidence, profiling individuals, or assessing the risk of offending (where not prohibited under Article 5).
- Migration & Border Control: Polygraphs, assessing security risks of travelers, or examining asylum applications.
- Administration of Justice: Assisting judges in researching or interpreting facts and law.
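To make the two routes concrete, here is a minimal, hypothetical triage sketch in Python. The domain labels and the `AISystem` and `is_high_risk` names are illustrative inventions, the Annex III areas are coarsely paraphrased from the list above, and the Article 6 derogations for narrow or purely preparatory tasks are deliberately not modelled; an actual classification decision needs legal review.

```python
from dataclasses import dataclass

# Coarse, paraphrased stand-ins for the Annex III areas listed above.
ANNEX_III_DOMAINS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

@dataclass
class AISystem:
    name: str
    domain: str                  # coarse use-case label (assumed taxonomy)
    is_safety_component: bool    # Annex I route: safety component of a regulated product
    third_party_assessed: bool   # product already requires third-party conformity assessment

def is_high_risk(system: AISystem) -> bool:
    """First-pass screen against the two Article 6 routes; not legal advice."""
    # Route 1 (Annex I): AI as a safety component of a product already
    # subject to third-party conformity assessment under EU harmonization law.
    if system.is_safety_component and system.third_party_assessed:
        return True
    # Route 2 (Annex III): stand-alone use in a listed sensitive area.
    return system.domain in ANNEX_III_DOMAINS

# A recruitment screener lands in the "employment" area of Annex III.
screener = AISystem("cv-screener", domain="employment",
                    is_safety_component=False, third_party_assessed=False)
print(is_high_risk(screener))  # True
```

The point of the sketch is the shape of the decision: the product route and the use-case route are evaluated independently, and either one is sufficient to trigger the High-Risk regime.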
Mandatory Compliance Requirements (Chapter III, Section 2)
Providers of High-Risk AI systems must adhere to a rigorous compliance regime:
- Risk Management System: A continuous iterative process to identify and mitigate risks throughout the lifecycle.
- Data Governance: Training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors, so that bias can be examined and mitigated.
- Technical Documentation: Comprehensive records demonstrating compliance.
- Record Keeping: Automatic logging of events (traceability) during operation.
- Transparency: Instructions for use must be clear, enabling deployers to understand the system's output.
- Human Oversight: The system must be designed so that natural persons can effectively oversee its operation (e.g., a "stop" button or approval workflow; see the sketch after this list).
- Accuracy, Robustness, & Cybersecurity: The system must meet high standards of performance and security against attacks.
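Two of these duties translate naturally into code. The sketch below, again with invented names (`screen`, `score_candidate`, the reviewer callback), illustrates automatic event logging for the record-keeping requirement and a human-approval gate for the oversight requirement. It is a minimal illustration of the pattern, not a compliant implementation.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("hr_ai_audit")

def score_candidate(features: dict) -> float:
    """Stand-in for the model: a toy suitability score in [0, 1]."""
    return min(1.0, sum(features.values()) / (10 * len(features)))

def screen(candidate_id: str, features: dict, reviewer_approves) -> bool:
    score = score_candidate(features)
    # Record keeping: every decision event is logged automatically,
    # with inputs and output, so the run is traceable after the fact.
    log.info("candidate=%s score=%.2f features=%s", candidate_id, score, features)
    # Human oversight: no decision takes effect without a human sign-off.
    approved = reviewer_approves(candidate_id, score)
    log.info("candidate=%s human_decision=%s", candidate_id, approved)
    return approved

# Usage: the callback stands in for an approval UI or review workflow.
result = screen("c-042", {"experience": 7, "skills": 8},
                reviewer_approves=lambda cid, s: s >= 0.5)
```

In a real system the log would go to tamper-evident storage and the reviewer interaction would be a genuine workflow step, but the structure (log every event, then gate the outcome on a human) is the core of both requirements.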
Strategic Impact: For most enterprises, the "High-Risk" category is the most critical to understand. If your internal tool (e.g., an automated resume screener) falls into this category, you move from a largely unregulated space to a highly regulated one, requiring a conformity assessment (often self-assessed, sometimes third-party) and registration in the EU database for high-risk AI systems.