Adversarial Robustness

Adversarial Robustness is a mandatory technical characteristic under the EU AI Act, requiring high-risk AI systems to be designed with an appropriate level of resilience against both unintentional perturbations and deliberate, malicious attempts to manipulate their inputs in order to cause errors, malfunctions, or harmful outputs.

This requirement addresses a unique vulnerability of AI systems, particularly machine learning models, where small, carefully crafted changes to input data, imperceptible to humans, can lead to dramatically incorrect and high-confidence predictions. In safety-critical contexts like medical diagnostics or autonomous driving, such vulnerabilities could be exploited with catastrophic consequences. Adversarial robustness is therefore not merely a cybersecurity add-on but an integral component of the overall "technical robustness and safety" mandated by the Act, ensuring systems perform reliably even under hostile or unexpected conditions.

Achieving adversarial robustness involves a combination of defensive strategies applied throughout the development lifecycle; minimal code sketches illustrating a few of them follow the list:

Adversarial Training: Exposing the model during training to intentionally perturbed inputs ("adversarial examples") so that it learns to be invariant to such manipulations.

Robust Model Architectures: Designing or selecting model architectures that are inherently less sensitive to small input variations.

Input Validation & Sanitization: Implementing pre-processing checks to detect and filter out anomalous or potentially malicious inputs before they reach the core model.

Detection Mechanisms: Deploying monitoring systems to identify when the model is under attack based on patterns in its inputs or the statistical properties of its outputs.

Continuous Stress Testing: Incorporating adversarial testing (red-teaming) as a core part of the validation and post-market monitoring process to discover new vulnerabilities.
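
The first sketch below illustrates the adversarial training idea using the fast gradient sign method (FGSM) in PyTorch. The model, data loader, epsilon value, and [0, 1] input range are illustrative assumptions, not requirements of the Act or of any particular standard.

```python
# Minimal sketch of FGSM-based adversarial training in PyTorch.
# Assumptions: an image-style classifier, inputs scaled to [0, 1],
# and an illustrative epsilon of 0.03.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon, loss_fn):
    """Craft adversarial examples by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep perturbed inputs in the valid range
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """Train on a mix of clean and FGSM-perturbed batches."""
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon, loss_fn)
        optimizer.zero_grad()
        # Averaging clean and adversarial losses keeps accuracy on benign
        # inputs while penalising sensitivity to the crafted perturbations.
        loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
```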
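
In the same spirit, a very simple input validation gate might combine a hard range check with a crude distributional check against statistics saved from the training data. The feature_mean, feature_std, and z_threshold names and values below are illustrative assumptions; production systems would layer purpose-built anomaly or attack detectors on top of checks like these.

```python
# Minimal sketch of a pre-processing gate, assuming inputs scaled to [0, 1]
# and per-feature mean/std tensors computed on the training set.
import torch

def validate_input(x, feature_mean, feature_std, z_threshold=6.0):
    """Return True only if the input looks in-range and roughly in-distribution."""
    # Hard range check: values outside the expected scale are rejected outright.
    if x.min() < 0.0 or x.max() > 1.0:
        return False
    # Crude distribution check: flag inputs whose features sit far from the
    # training statistics (a weak anomaly signal, not a complete defence).
    z_scores = (x - feature_mean).abs() / (feature_std + 1e-8)
    return bool(z_scores.mean() < z_threshold)
```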

Regulatory Context: Article 15 of the EU AI Act explicitly requires high-risk AI systems to be "resilient as regards to attempts by unauthorised third parties to alter their use or performance by exploiting system vulnerabilities." This directly mandates protection against adversarial attacks. Furthermore, the conformity assessment (Annex VII) requires the technical documentation to include information on "measures taken to ensure the robustness of the AI system."

Safety and Security Imperative: For high-risk AI, adversarial robustness is a non-negotiable aspect of product safety. A lack of robustness exposes the system to sabotage, fraud, and safety failures. Demonstrating rigorous testing for adversarial vulnerabilities is increasingly expected by regulators, insurers, and customers as a baseline condition for trusting and deploying AI in consequential domains.
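
One way to produce that kind of evidence is to report clean versus adversarial accuracy on a held-out evaluation set. The sketch below reuses the fgsm_perturb helper from the adversarial training example above; the epsilon value and any pass/fail threshold are placeholders a team would set per use case and attack model.

```python
# Illustrative robustness measurement: clean vs. FGSM accuracy on a held-out set.
import torch
import torch.nn as nn

def robustness_report(model, loader, epsilon=0.03):
    """Return clean and adversarial accuracy figures for the technical documentation."""
    loss_fn = nn.CrossEntropyLoss()
    model.eval()
    clean_correct = adv_correct = total = 0
    for x, y in loader:
        # Gradients are still needed here to craft the white-box attack.
        x_adv = fgsm_perturb(model, x, y, epsilon, loss_fn)
        with torch.no_grad():
            clean_correct += (model(x).argmax(dim=1) == y).sum().item()
            adv_correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return {"clean_accuracy": clean_correct / total,
            "robust_accuracy": adv_correct / total}
```

A large gap between the two figures is a signal that additional hardening or compensating controls are needed before deployment.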


Ready to see what security-first GRC really looks like?

The Scrut Platform helps you move fast, stay compliant, and build securely from the start.

Book a Demo