
Model Drift

Model Drift is the phenomenon where the predictive performance and reliability of a deployed AI model degrade over time because the statistical properties of the live, operational data diverge from the properties of the data on which the model was originally trained and validated.

This degradation occurs because the real-world environment is dynamic, not static. Changes in user behavior, economic conditions, software ecosystems, or physical environments can render a model's learned assumptions obsolete. While the term "model drift" itself is not explicitly defined in the EU AI Act, the legal obligation to manage it is unequivocally established through the Act's mandates for continuous risk management, post-market monitoring, and overall system robustness. Unchecked drift poses a direct threat to compliance, as it can cause a once-conformant high-risk system to become inaccurate, unfair, or unsafe.

Model drift is generally categorized into three primary types, each with distinct causes and detection methods:

Concept Drift: The underlying relationship between the input data and the target variable the model predicts changes. (e.g., a credit default model trained pre-pandemic fails because the economic factors that drive default have permanently shifted).

Data (Covariate) Drift: The distribution of the input data itself changes, while its relationship to the target remains the same. (e.g., a product recommendation engine sees a shift in the demographic profile of its user base, making its learned user-preference patterns less effective; see the detection sketch after this list).

Label Drift: The distribution, definition, or interpretation of the target variable changes over time. (e.g., clinical guidelines for diagnosing a disease are updated, making the historical labels in the training data inconsistent with current medical standards).
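To make the data (covariate) drift case concrete, here is a minimal sketch that compares the training-time distribution of a single numeric feature with its live distribution using a two-sample Kolmogorov-Smirnov test. The feature, significance threshold, and synthetic data are illustrative assumptions, not part of any specific monitoring product.

```python
# Minimal sketch: flag covariate drift on one numeric feature with a
# two-sample Kolmogorov-Smirnov test. Threshold and data are illustrative.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical significance level for flagging drift


def detect_feature_drift(train_col: np.ndarray, live_col: np.ndarray) -> bool:
    """Return True if the live distribution differs significantly from training."""
    result = ks_2samp(train_col, live_col)
    return result.pvalue < DRIFT_P_VALUE


# Synthetic example: the live feature's mean has shifted, simulating a
# change in the user population served by the deployed model.
rng = np.random.default_rng(42)
train_income = rng.normal(loc=50_000, scale=12_000, size=5_000)
live_income = rng.normal(loc=58_000, scale=12_000, size=5_000)

if detect_feature_drift(train_income, live_income):
    print("Covariate drift detected: review before the next release.")
```

In practice, teams typically run such tests per feature on a schedule, account for multiple comparisons, and retain the results as evidence for post-market monitoring.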

Regulatory Context: The obligation to address model drift is embedded in the EU AI Act's lifecycle approach. Article 9 (Risk Management) requires continuous, iterative risk assessment; Article 15 (Accuracy and Robustness) demands resilience to changes in the system's environment; and Article 72 (Post-Market Monitoring) explicitly mandates the active collection and analysis of performance data to identify any "divergence" from expected behavior, which is, in operational terms, the detection of drift.

Operational Imperative: Proactively monitoring for and mitigating drift is a core operational requirement for keeping AI systems compliant and valuable. Automated drift detection, scheduled retraining protocols, and clear thresholds for re-triggering conformity assessment (where a change amounts to a substantial modification) help prevent financial loss, erosion of user trust, and the legal liabilities of operating a malfunctioning high-risk AI system.
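As one illustration of how such thresholds might be encoded, the sketch below classifies an observed drop in live performance against hypothetical warn and retrain thresholds. The DriftPolicy values, the accuracy metric, and the action names are assumptions to be replaced by each team's own documented policy.

```python
# Hedged sketch of a drift-response policy: classify an observed performance
# drop against explicit thresholds. Metric, values, and action names are
# hypothetical stand-ins for a team's own documented policy.
from dataclasses import dataclass


@dataclass
class DriftPolicy:
    warn_threshold: float = 0.05     # drop in accuracy that triggers a review
    retrain_threshold: float = 0.10  # drop that triggers retraining and, if the
                                     # change is substantial, reassessment


def evaluate_drift(baseline_accuracy: float, live_accuracy: float,
                   policy: DriftPolicy) -> str:
    """Map the observed accuracy drop to an action defined by the policy."""
    drop = baseline_accuracy - live_accuracy
    if drop >= policy.retrain_threshold:
        return "retrain_and_reassess"
    if drop >= policy.warn_threshold:
        return "investigate"
    return "ok"


# Example: validated accuracy was 0.92; last week's live estimate was 0.84.
print(evaluate_drift(0.92, 0.84, DriftPolicy()))  # -> investigate
```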


Ready to see what security-first GRC really looks like?

The Scrut Platform helps you move fast, stay compliant, and build securely from the start.

Book a Demo