
The European Union Artificial Intelligence (AI) Act: Meaning, regulatory impact, risks, developments

As artificial intelligence (AI) continues to rapidly evolve, regulators are stepping up to ensure its responsible use. The EU Artificial Intelligence Act (AI Act) is the European Union’s first comprehensive regulatory framework for AI, aiming to ensure that AI systems are safe, ethical, and aligned with fundamental rights.

In this blog, we explore the AI Act’s key provisions, including its focus on general-purpose AI models, risk categories, and prohibited practices, along with its broader implications for organizations navigating the complex regulatory landscape.

What is the EU Artificial Intelligence (AI) Act?

The EU Artificial Intelligence Act, formally known as Regulation (EU) 2024/1689, is the European Union’s first comprehensive legal framework for artificial intelligence. It establishes harmonized rules for the development, placement on the market, and use of AI systems across the EU, aiming to ensure that AI is safe, respects fundamental rights, and upholds the Union’s values. 

Introduced by the European Commission in April 2021 and formally adopted by the European Parliament and the Council in 2024, the Act entered into force on August 1, 2024. Non-compliance can result in substantial administrative fines, including up to €35 million or 7% of a company’s global annual turnover for the most serious violations, such as using prohibited AI systems. 

Providers, deployers, importers, and distributors of AI systems are responsible for compliance. They are expected to implement internal controls, maintain documentation, and ensure conformity before placing systems on the market. 

From the Union’s side, national authorities carry out market surveillance and enforcement actions, while the AI Office coordinates oversight efforts across Member States, especially for cross-border and high-risk AI use cases.

What role do general-purpose AI (GPAI) models play?

General-purpose AI (GPAI) refers to an AI model that demonstrates significant generality and can competently perform a wide range of tasks, regardless of how it is marketed or integrated into various systems. 

The role of GPAI models is to offer a versatile, scalable foundation that can be applied across industries and integrated into various downstream AI systems. While they may not be inherently high-risk, they can become part of high-risk systems, and providers are expected to cooperate to ensure compliance with the AI Act.

Rules for GPAI models under Regulation (EU) 2024/1689 include:

1. Technical documentation: Providers must create detailed documentation covering the model’s training, testing, and evaluation results.

2. Information for downstream providers: Providers must supply essential information to those integrating the GPAI model to ensure an understanding of its capabilities and limitations.

3. Copyright compliance: Providers must put in place a policy to comply with EU copyright law, including the Copyright Directive.

4. Training data transparency: A summary of the content used for training the model must be publicly available.

5. Free and open licenses: GPAI models released under free and open-source licenses (with model parameters made publicly available) are exempt from the documentation obligations in points 1 and 2, but must still meet the copyright and training data transparency obligations. The exemption does not apply to models that present systemic risk.

6. Systemic risks: If the cumulative compute used to train the model exceeds 10²⁵ floating-point operations (FLOPs), the model is presumed to present systemic risk, triggering additional obligations such as model evaluations, systemic risk assessment and mitigation, serious incident reporting, and adequate cybersecurity protection. A rough back-of-the-envelope check of this threshold is sketched below.
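For intuition, the sketch below estimates cumulative training compute with the common 6 × parameters × training tokens rule of thumb and compares it against the 10²⁵ FLOP mark. The helper name and the example model size are illustrative assumptions, not figures from the Act, and real compute accounting depends on the architecture and training setup.

```python
# Illustrative estimate of cumulative training compute versus the EU AI Act's
# 10^25 FLOP presumption threshold for GPAI models with systemic risk.
# The 6 * parameters * tokens heuristic is a rough approximation for dense
# transformer training, not a method prescribed by the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Regulation (EU) 2024/1689

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer model."""
    return 6.0 * parameters * training_tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)
```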

Who does the EU AI Act apply to?

The EU AI Act applies to a wide range of actors involved in the development, deployment, and distribution of artificial intelligence systems within the EU market, regardless of whether they are based in the EU or outside. The regulation takes a lifecycle approach, assigning specific responsibilities to each actor based on their role, to ensure that AI systems placed on the EU market are trustworthy and safe.

1. Providers

A provider is any natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model and places it on the market or puts it into service under its own name or trademark.

Providers are at the core of the EU AI Act’s compliance structure. They are responsible for ensuring that AI systems meet all relevant legal requirements before being placed on the EU market. This includes implementing risk management procedures, drawing up technical documentation, undergoing conformity assessments (for high-risk AI), and ensuring transparency. 

The developments in AI — especially the increasing complexity and autonomy of systems — have made it necessary to hold providers accountable for design choices and training practices from the outset.

2. Deployers

A deployer is any entity or individual that uses an AI system in a professional capacity within the EU, excluding personal non-professional use.

Deployers are responsible for using AI systems in line with their intended purpose and for ensuring that any obligations related to transparency, human oversight, or accuracy are respected in their specific use context. As AI use grows across sectors like healthcare, HR, and finance, deployers play a crucial role in how AI impacts end-users and society. The EU AI Act reflects this by requiring deployers — especially of high-risk systems — to implement appropriate safeguards and monitor AI performance in real-world settings.

3. Importers

An importer is any natural or legal person based in the EU who places on the market an AI system developed by a provider established outside the EU.

Importers act as the bridge between non-EU AI developers and the European market. They are required to ensure that the foreign-developed AI systems comply with EU law before distribution. This includes verifying that conformity assessments have been completed, the necessary technical documentation exists, and instructions for use are available. With the EU AI Act setting a high bar for safety and rights protections, importers share the responsibility of ensuring that AI systems from outside the EU meet these expectations.

What are some prohibited AI practices?

The EU AI Act sets clear boundaries by explicitly banning certain uses of artificial intelligence that pose unacceptable risks to fundamental rights, safety, and democratic values. These prohibited practices are considered so harmful that they are not permitted under any circumstances within the EU.

  • AI systems that use subliminal techniques to distort a person’s behavior in a way that causes or is likely to cause physical or psychological harm
  • AI systems that exploit the vulnerabilities of a specific group due to age, disability, or socioeconomic situation, with the intent to materially distort their behavior
  • Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (with narrow exceptions such as searching for specific victims or preventing an imminent threat)
  • AI systems used for social scoring by public authorities that lead to detrimental treatment of individuals or groups in a way that is unjustified or disproportionate
  • AI systems that evaluate or classify people based on behavior or characteristics, resulting in unjustified or disproportionate consequences
  • AI systems that predict the risk of a person committing a criminal offence based solely on profiling or the assessment of their personality traits and characteristics
  • Emotion recognition systems used in workplaces or educational institutions, except where intended for medical or safety reasons
  • The untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases

What are the AI Act Risk levels?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) adopts a risk-based approach to regulating AI, grouping AI systems into four categories based on the level of risk they pose to fundamental rights, health, safety, and society. The compliance obligations for each category vary—stricter rules apply to higher-risk systems, while minimal-risk systems are largely exempt. Understanding these categories is essential for both AI providers and deployers to align with the regulatory requirements.

1. High-risk AI systems

These are AI systems that can significantly impact people’s lives, particularly in safety-critical sectors or fundamental rights contexts. High-risk systems are subject to strict requirements such as risk management, high-quality data governance, technical documentation, human oversight, and post-market monitoring.

Use cases include:

  • AI used in medical devices and diagnostic tools
  • AI for recruitment and employee evaluation
  • Credit scoring or loan approval systems
  • AI in critical infrastructure, like transport or energy
  • AI used in education for student assessments

Organizations developing or using high-risk AI must adopt a robust risk management framework, maintain high-quality training data, document the system’s capabilities and limitations, and ensure human oversight is built into the system. 

Pre-market conformity assessments and continuous post-market monitoring are essential. Cross-functional collaboration between compliance, engineering, and product teams is critical for fulfilling these requirements.
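To keep track of these obligations, compliance and engineering teams often maintain a structured record per high-risk system. The sketch below is a minimal illustration of such a record; the field and method names are assumptions for this example and do not reproduce the Act’s documentation templates.

```python
# Minimal sketch of tracking high-risk AI obligations as a structured record.
# Field names are illustrative assumptions, not the regulation's own wording.

from dataclasses import dataclass, field

@dataclass
class HighRiskSystemRecord:
    system_name: str
    intended_purpose: str
    risk_controls: list[str] = field(default_factory=list)      # risk management measures
    data_governance_notes: str = ""                              # training data quality checks
    human_oversight_measures: list[str] = field(default_factory=list)
    conformity_assessment_done: bool = False                     # pre-market requirement
    post_market_monitoring_plan: str = ""                        # ongoing obligation

    def open_gaps(self) -> list[str]:
        """Return obligations that still lack evidence before market placement."""
        gaps = []
        if not self.risk_controls:
            gaps.append("risk management measures")
        if not self.human_oversight_measures:
            gaps.append("human oversight measures")
        if not self.conformity_assessment_done:
            gaps.append("conformity assessment")
        if not self.post_market_monitoring_plan:
            gaps.append("post-market monitoring plan")
        return gaps

record = HighRiskSystemRecord("resume-screening-model", "candidate shortlisting")
print("Outstanding obligations:", record.open_gaps())
```

A record like this can feed directly into the technical documentation and conformity assessment evidence that cross-functional teams assemble before placing a system on the market.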

2. Unacceptable-risk AI systems

These systems are considered a clear threat to citizens’ rights and freedoms and are therefore prohibited under the Act. The ban applies in all circumstances, with only a few narrowly defined law enforcement exceptions subject to strict safeguards.

Use cases include:

  • Social scoring by governments or corporations
  • AI that exploits vulnerable individuals (e.g., children or disabled persons)
  • Biometric categorization based on sensitive characteristics (e.g., race, religion, political beliefs)
  • Emotion recognition in workplaces or educational institutions
  • Indiscriminate scraping of biometric data from CCTV for facial recognition

The only way to comply is to avoid these practices altogether. Organizations must conduct thorough due diligence when designing or procuring AI systems to ensure that none of the functionalities fall into this category. 

If there’s a borderline case, seek expert legal advice early in the development lifecycle to mitigate legal and reputational risks.

3. Limited-risk AI systems

These are AI systems that pose some risk but not enough to trigger full regulation. The main obligation here is transparency—users must be informed that they are interacting with an AI system or viewing AI-generated content.

Use cases include:

  • Chatbots and virtual assistants
  • AI-generated images, videos, or audio (e.g., deepfakes)
  • Recommender systems on e-commerce or media platforms

Transparency obligations include disclosing when content is AI-generated and informing users that they are communicating with a machine. A clear user interface design and accurate labeling can help organizations fulfill these requirements. 
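As a simple illustration of what such disclosure can look like in practice, the sketch below prepends an explicit AI-interaction notice to chatbot replies and attaches an “AI-generated” label to synthetic media before it is published. The function and field names are illustrative assumptions, not terminology from the Act.

```python
# Minimal sketch of limited-risk transparency measures: telling users they are
# interacting with an AI system and labeling AI-generated content.
# Names and wording below are illustrative assumptions, not prescribed text.

from dataclasses import dataclass

AI_INTERACTION_NOTICE = "You are chatting with an AI assistant, not a human."

@dataclass
class GeneratedContent:
    payload: bytes       # the image, audio, or video produced by the model
    ai_generated: bool   # machine-readable marking of synthetic content
    label: str           # human-readable disclosure shown alongside the content

def disclose_chatbot_reply(reply_text: str) -> str:
    """Prepend an explicit AI-interaction notice to a chatbot response."""
    return f"{AI_INTERACTION_NOTICE}\n\n{reply_text}"

def label_generated_media(payload: bytes) -> GeneratedContent:
    """Attach an 'AI-generated' marking to synthetic media before publishing."""
    return GeneratedContent(payload=payload, ai_generated=True,
                            label="This content was generated by AI.")

print(disclose_chatbot_reply("Here is the refund policy you asked about."))
```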

Although the rules are lighter here, ethical use and documentation are still advisable for trust and accountability.

4. Minimal-risk AI systems

These systems are considered low-risk and are exempt from legal obligations under the Act. They are still encouraged to follow voluntary codes of conduct and ethical AI principles.

Use cases include:

  • AI spam filters in email clients
  • AI used for weather prediction
  • AI in video games for character behavior or world generation

Even in the absence of legal obligations, organizations are encouraged to voluntarily uphold transparency, fairness, and privacy best practices. Doing so can future-proof systems and strengthen user trust, especially as public expectations and future regulations evolve.

What are some related laws affecting AI?

While the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the cornerstone of AI regulation in the region, it does not operate in isolation. Several other EU laws complement and intersect with it to ensure a comprehensive regulatory approach. These frameworks address areas such as liability, privacy, product safety, and consumer protection—each playing a critical role in governing how AI systems are developed, deployed, and used.

1. AI Liability Directive (AILD)

The proposed AI Liability Directive aims to harmonize rules for non-contractual civil liability related to AI systems. It introduces a rebuttable presumption of causality to ease the burden of proof for victims seeking compensation for damages caused by AI. However, as of March 2025, the European Commission has withdrawn the proposal from consideration, leaving its future uncertain. ​

2. General Data Protection Regulation (GDPR)

The GDPR (Regulation (EU) 2016/679) governs the processing of personal data within the EU. It mandates that organizations obtain explicit consent for data collection, ensure data accuracy, and uphold individuals’ rights to access, rectify, and erase their data. For AI systems processing personal data, GDPR compliance is essential to protect user privacy and maintain trust. ​

3. Product Liability Directive (PLD)

The revised Product Liability Directive (Directive (EU) 2024/2853) modernizes liability rules to encompass digital products, including AI systems. It expands the definition of “product” to cover software and AI, shifts the burden of proof to manufacturers in certain cases, and allows claims for psychological harm and data loss. This directive ensures that consumers can seek compensation for damages caused by defective AI products. ​

4. General Product Safety Regulation (GPSR) 2023/988/EU

The GPSR, effective from December 13, 2024, replaces the previous General Product Safety Directive. It aims to ensure that all consumer products, including those incorporating AI, are safe for use. The regulation introduces stricter safety requirements, mandates clear product information, and enhances market surveillance to protect consumers from hazardous products.

What’s the current status of the EU AI Act?

In June 2024, the EU adopted the world’s first comprehensive rules on artificial intelligence. The Artificial Intelligence Act (AI Act) officially entered into force on 1 August 2024, with its provisions coming into effect gradually to give stakeholders time to adapt. While most of the regulation becomes applicable 24 months after entry into force, several obligations kick in earlier—particularly those relating to unacceptable risk and general-purpose AI (GPAI) models—and a small set of product-related provisions apply as late as August 2027.

Here’s a quick overview of the key compliance milestones:

  • 2 February 2025: The ban on AI systems posing unacceptable risk begins to apply. This includes systems that manipulate human behavior, exploit vulnerabilities, or implement social scoring.
  • 2 May 2025 (nine months after entry into force): Codes of practice are expected to be adopted for providers of GPAI, offering voluntary compliance guidance before harmonized standards are finalized.
  • 2 August 2025: Rules on general-purpose AI models will apply to new GPAI models placed on the market. Existing models (i.e., those available before this date) have until 2 August 2027 to meet these requirements.
  • 2 August 2026: Rules for high-risk AI systems come into effect. These cover systems used in sensitive areas such as hiring, law enforcement, healthcare, and critical infrastructure.
  • 2 August 2027: Provisions apply to AI systems considered products or safety components of products already regulated under specific EU product safety laws (e.g., machinery or medical devices).

In summary, while the AI Act is already in force, organizations have staggered deadlines depending on the AI system’s category and function. High-risk AI systems that are products or safety components of products covered by existing EU safety legislation have the longest window of 36 months, ending in August 2027. Until then, businesses, regulators, and civil society groups are preparing for one of the most significant digital policy shifts in recent history.

What are some of the best AI frameworks and standards?

Apart from the EU AI Act, several global frameworks help organizations build, secure, and govern responsible AI systems, such as:

1. ISO 42001

ISO 42001 is a certifiable standard that guides organizations in managing AI risks through structured policies, controls, and continuous improvement.

2. NIST AI RMF

NIST AI RMF is a voluntary framework to help organizations govern and reduce AI risks, focusing on fairness, transparency, and security.

3. OWASP AI Security and Privacy Guide

The OWASP AI Security and Privacy Guide provides actionable best practices to secure AI systems and protect privacy, from threat modeling to incident response.

4. Google’s Secure AI Framework 

Google’s Secure AI Framework emphasizes safe AI development and deployment, with principles like security by design and continuous monitoring.

Futureproof your AI compliance with Scrut

As AI regulations evolve and new frameworks emerge, keeping up—and staying compliant—can feel like a moving target. Scrut helps your organization stay ahead of the curve by centralizing AI risk management, automating evidence collection, and aligning your controls with global standards.

 Whether you’re navigating the EU AI Act or preparing for future audits, Scrut ensures you’re not just reacting to change—you’re ready for it. Schedule a demo today to learn more.


FAQs

When was the EU AI Act passed?

The EU AI Act was passed on March 13, 2024, by the European Parliament. It was subsequently approved by the EU Council on May 21, 2024. The Act was published in the EU Official Journal on July 12, 2024, and entered into force on August 1, 2024.

Is AI safe?

AI safety depends on how it’s built and governed. Regulations like the EU AI Act set rules for high-risk and general-purpose AI to ensure transparency, accountability, and user protection.

What is the purpose of the European Union?

The European Union aims to promote economic cooperation, peace, and human rights across its member states. The European Commission proposes legislation like the EU AI Act, which is then adopted by the European Parliament and the Council.

How are the big AI companies regulated under the EU AI Act?

Big AI companies like Meta, Google, Microsoft, and OpenAI are regulated based on the risk level of their AI systems. They must comply with the EU AI Act’s provisions for high-risk AI and, where applicable, the obligations for general-purpose AI models, ensuring transparency, accountability, and safety standards are met.

What penalties can companies face under the EU AI Act?

Companies can face penalties of up to €35 million or 7% of their global annual turnover, whichever is higher, for non-compliance with the EU AI Act.

What is the AI Regulation Agreement done by some EU Countries? 

Germany, France, and Italy agreed on AI regulation, focusing on mandatory self-regulation for foundation models and promoting transparency and accountability. The agreement targets all AI providers, including smaller companies, with potential future penalties for non-compliance, while emphasizing the regulation of AI applications rather than the technology itself.

Susmita Joseph
Technical Content Writer at Scrut Automation
