
Limited-risk AI systems 

"Limited-Risk" is the second regulatory tier of the EU AI Act. Unlike "High-Risk" systems which require conformity assessments and risk management, "Limited-Risk" systems are primarily regulated through transparency obligations to ensure that users are not manipulated or deceived.

This category applies to AI systems where the primary risk is a lack of clarity regarding the nature of the interaction. The core principle is that humans have the "right to know" when they are interacting with a machine or viewing artificially generated content.

Categories & Specific Obligations (Article 50)

The AI Act mandates specific disclosures for four main types of Limited-Risk systems:

  1. AI Systems Intended to Interact with Natural Persons (e.g., Chatbots):
    • Obligation: Providers must ensure the system is designed to inform users that they are interacting with an AI, unless this is obvious from the context (see the sketch after this list).
    • Goal: To prevent users from mistakenly believing they are speaking to a human agent.
  2. AI Systems Generating Synthetic Content (e.g., Generative AI):
    • Obligation: Providers must mark the outputs (audio, video, text, images) in a machine-readable format so that they can be detected as artificially generated or manipulated.
    • Goal: To combat disinformation and ensure content provenance.
  3. Deepfakes (deployer obligation):
    • Obligation: Deployers who generate or manipulate image, audio, or video content that resembles existing persons, objects, places, or events ("deepfakes") must disclose that the content has been artificially generated or manipulated.
    • Exceptions: For evidently artistic, creative, satirical, or fictional works, the obligation is limited to disclosing the existence of the generated or manipulated content in a manner that does not hamper the display or enjoyment of the work.
  4. Emotion Recognition & Biometric Categorization:
    • Obligation: Deployers of emotion recognition systems (that are not prohibited) or biometric categorization systems must explicitly inform the natural persons exposed to them.
    • Note: Emotion recognition in the workplace or education is generally prohibited (Unacceptable Risk), so this "Limited Risk" obligation applies only to other contexts, such as marketing or safety.
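
To make the first two obligations concrete, the sketch below shows (in Python) how a provider might pair a user-facing "you are talking to an AI" disclosure with machine-readable provenance fields attached to each generated output. The envelope format, field names, and the wrap_output helper are illustrative assumptions on our part, not anything Article 50 prescribes; production systems would typically implement marking via an established provenance standard such as C2PA content credentials.

```python
import json
from datetime import datetime, timezone

# Human-readable notice shown at the start of the interaction
# (in the spirit of Article 50(1)).
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def wrap_output(text: str, model_id: str) -> dict:
    """Wrap generated text with a human-readable disclosure and
    machine-readable provenance fields (in the spirit of Article 50(2)).

    Hypothetical envelope -- real deployments would use a standard
    such as C2PA rather than this ad-hoc JSON structure."""
    return {
        "disclosure": AI_DISCLOSURE,  # human-readable notice
        "content": text,
        "provenance": {               # machine-readable marking
            "ai_generated": True,
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

if __name__ == "__main__":
    reply = wrap_output("Your refund was processed today.", "acme-chat-v2")
    print(json.dumps(reply, indent=2))
```

The design point worth noting is that the marking travels with the content itself, so downstream systems can still detect the AI origin after the output leaves the chat interface.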

Strategic Impact: For most organizations, this category represents the baseline for consumer-facing AI. While the compliance burden is significantly lighter than for High-Risk systems (no extensive technical documentation or quality management systems required), the transparency requirements are strict. Failure to properly label a chatbot or deepfake can result in substantial fines, underscoring that transparency is the prerequisite for trust in the European market.

Ready to see what security-first GRC really looks like?

The Scrut Platform helps you move fast, stay compliant, and build securely from the start.

Book a Demo