
Unacceptable risk AI systems 

"Unacceptable Risk" is the highest risk category defined by the EU AI Act. AI systems falling into this category are deemed to pose a clear threat to the safety, livelihoods, and fundamental rights of people. Consequently, these systems are banned outright from being placed on the market, put into service, or used within the European Union.

Article 5 of the AI Act explicitly lists these prohibited practices. Unlike "High-Risk" systems, which are permitted subject to strict compliance obligations, "Unacceptable Risk" systems have no path to compliance (with very narrow exceptions for law enforcement in extreme cases).

Prohibited Practices (The "Banned List")

The following AI practices are forbidden:

  1. Behavioral Manipulation (Subliminal Techniques): AI systems that use subliminal techniques (beyond a person's consciousness) or purposefully manipulative or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm (e.g., voice-activated toys that encourage dangerous behavior in children).
  2. Exploitation of Vulnerabilities: AI systems that exploit the vulnerabilities of a specific group of persons due to their age, disability, or specific social or economic situation to cause significant harm.
  3. Social Scoring: AI systems (whether deployed by public or private actors) that evaluate or classify natural persons or groups over a period of time based on their social behavior or known, inferred, or predicted personal or personality characteristics, where the resulting score leads to detrimental or unfavorable treatment in unrelated contexts, or treatment that is unjustified or disproportionate to the behavior.
  4. Real-Time Remote Biometric Identification (RBI) in Public Spaces: The use of "real-time" remote biometric identification (such as live facial recognition) by law enforcement in publicly accessible spaces is generally prohibited.
    • Exceptions: Strictly defined, narrow exceptions exist for: searching for victims of abduction/trafficking; preventing a specific, substantial, and imminent threat to life or a terrorist attack; or identifying a suspect of a serious criminal offense (subject to judicial authorization).
  5. Biometric Categorization: Systems that categorize natural persons based on their biometric data to deduce or infer sensitive information such as race, political opinions, trade union membership, religious beliefs, sexual orientation, or philosophical beliefs.
  6. Predictive Policing: AI systems used for making risk assessments of individuals to predict the likelihood of them committing a criminal offense, based solely on profiling or assessing their personality traits/characteristics.
  7. Emotion Recognition in Workplace and Education: AI systems used to infer emotions of a natural person in the areas of workplace and education institutions (except for medical or safety reasons).
  8. Untargeted Scraping (Facial Recognition Databases): The expansion or creation of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Strategic Impact: For organizations, this list represents the "red line." Developing or deploying any system on this list carries the highest tier of penalties under the Act: fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.


Ready to see what security-first GRC really looks like?

The Scrut Platform helps you move fast, stay compliant, and build securely from the start.

Book a Demo