Minimal-Risk AI Systems
"Minimal-Risk" is the lowest risk category defined by the EU AI Act, encompassing the vast majority of AI systems currently in use. These systems are considered to pose no significant threat to the health, safety, or fundamental rights of citizens.
Because the risk is negligible, the EU AI Act imposes no new mandatory obligations on the providers or deployers of these systems. They can be developed and used freely, subject only to existing general laws (such as GDPR for data privacy, consumer protection laws, and non-discrimination statutes).
Characteristics & Examples
This category functions as the "default" classification: any AI system that does not fall into the Unacceptable, High, or Limited risk categories lands here (see the sketch after the list below). Common examples include:
- Spam Filters: AI used by email providers to sort junk mail.
- Video Games: AI-driven non-player characters (NPCs) or procedural level generation.
- Inventory Management: Algorithms used to optimize supply chains or predict stock levels.
- Search Engines: Standard information retrieval algorithms (provided they don't manipulate behavior or present systemic risks).
- Predictive Maintenance: Systems that analyze machinery data to predict when parts need replacement.
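This "default" logic can be pictured as a simple fall-through check. The sketch below is purely illustrative, not a legal test: `AISystem`, its boolean flags, and `classify` are hypothetical stand-ins for the case-by-case assessment the Act actually requires.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    UNACCEPTABLE = auto()
    HIGH = auto()
    LIMITED = auto()
    MINIMAL = auto()


@dataclass
class AISystem:
    # Hypothetical flags standing in for a real legal analysis of the system.
    prohibited_practice: bool = False  # Art. 5 practices, e.g. social scoring
    high_risk_use_case: bool = False   # Annex III areas or regulated safety components
    transparency_duty: bool = False    # Art. 50 cases, e.g. chatbots, deepfakes


def classify(s: AISystem) -> RiskTier:
    """Check the tiers from most to least restrictive; Minimal is the fallthrough."""
    if s.prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if s.high_risk_use_case:
        return RiskTier.HIGH
    if s.transparency_duty:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# A spam filter ticks none of the boxes, so it falls through to the default tier.
assert classify(AISystem()) is RiskTier.MINIMAL

# The same model repurposed for an Annex III use case (e.g. screening job
# applicants) re-classifies as High-Risk on reassessment.
assert classify(AISystem(high_risk_use_case=True)) is RiskTier.HIGH
```

The ordering mirrors the Act's structure: a system is tested against the most restrictive tiers first, and Minimal-Risk is simply whatever remains.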
Voluntary Codes of Conduct (Article 95)
While there are no legal mandates, the EU AI Act encourages providers of Minimal-Risk AI systems to voluntarily adopt Codes of Conduct.
- Purpose: To signal a commitment to ethical AI and environmental sustainability (e.g., energy-efficient training).
- Content: These codes often adapt requirements designed for High-Risk systems (such as transparency or human oversight), applied voluntarily at the provider's discretion to build consumer trust.
Strategic Impact: For businesses, identifying a system as "Minimal-Risk" is the ideal outcome of an AI regulatory assessment. It allows for rapid innovation and deployment without the heavy administrative burden of conformity assessments or transparency notices. However, organizations must remain vigilant: a "Minimal-Risk" system can become "High-Risk" if its intended purpose changes (e.g., repurposing an inventory-forecasting model to screen job applicants, a use case listed in Annex III).