What is ISO 42005? The latest guidance for assessing AI impact

Before you launch an artificial intelligence (AI) system, have you ever stopped to ask, “Have we really considered how this might affect the people who interact with it?” That is exactly the kind of thinking ISO 42005:2025 encourages.
As your systems become more powerful, the risks grow too. These risks do not just affect your business. They affect individuals, communities, and society as a whole. And while ISO 42001 gives you a solid foundation for AI governance, it does not tell you how to assess the impact of each system you deploy. That missing link is what ISO 42005 is here to address.
Chances are, you have been focused on building AI that performs. ISO 42005 asks a different question: What kind of impact does your AI leave behind?
In this blog, we will explore what ISO 42005 covers, why it is needed, how it supports ISO 42001, and how CEOs and CISOs can start applying it today.
What is ISO 42005?
ISO 42005:2025 is a guidance standard published by ISO and IEC in May 2025. It helps assess the impact of artificial intelligence (AI) systems on individuals, groups, and society throughout their lifecycle.
This standard is not about certification. You do not need an external auditor. Instead, you use it internally to bring structure to how you evaluate the consequences of your AI systems, from initial planning and design to deployment and monitoring.
ISO 42005 is especially useful when you are unsure how to approach questions like:
- Who might be affected by this system, directly or indirectly?
- When should an impact assessment be triggered?
- What counts as “significant” impact?
- Who should be responsible for approvals and reviews?
It guides you to define thresholds, assign roles, document findings, and update your assessments as the system evolves. This is not a one-time check. It is a continuous process that aligns your AI governance with real-world outcomes.
If your organization is building or using AI systems and wants to make informed, ethical, and defensible decisions, ISO 42005 gives you the structure to do so with confidence.
Why is ISO 42005 needed?
If you are working with AI today, you already know that building a system that performs is only half the job. The more challenging part is ensuring it does not cause harm, exclusion, or unintended consequences once it is out in the world.
But that is where most organizations struggle.
You may have governance policies on paper, but those rarely guide decisions at the system level. Your teams might not agree on when to run an impact assessment or who should be involved. And without a shared process, impact reviews tend to be reactive, inconsistent, or skipped altogether.
You might also find it difficult to connect the dots between compliance, product, and risk teams. Everyone has a piece of the picture, but no one owns the full view.
ISO 42005:2025 is needed because there has been no clear, practical framework to help you tackle these challenges. It fills the gap between broad AI governance and system-level accountability, where real-world impact happens.
What does ISO 42005:2025 cover?
ISO 42005:2025 gives you a practical framework for evaluating the impact of AI systems. It does not tell you what is right or wrong. Instead, it helps you ask the right questions and create a consistent process across teams.
Here is what the standard covers:

1. When to assess impact
You are expected to carry out impact assessments at different stages of the AI system lifecycle. This includes during planning, before deployment, and while the system is in use. The idea is to treat impact assessment as an ongoing activity, not a one-time event.
2. What to assess
The standard helps you evaluate how an AI system might affect individuals, groups, and society. This includes both direct outcomes and unintended side effects. You are encouraged to consider fairness, transparency, access, and broader ethical concerns. It also supports defining thresholds for when an impact assessment becomes necessary.
The standard encourages you to go one step further by creating a structured taxonomy of harms and benefits. This helps you evaluate trade-offs more transparently and prioritize mitigation efforts. For example, you might weigh risks like bias or exclusion against benefits like efficiency or expanded access.
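As a thought experiment, a harms-and-benefits taxonomy like this could be modeled in a few lines of code. This is an illustrative sketch only: ISO 42005 does not prescribe a data model, and the category names and 1-to-5 scoring scales below are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ImpactEntry:
    category: str    # e.g. "bias", "exclusion", "efficiency" (assumed labels)
    kind: str        # "harm" or "benefit"
    severity: int    # 1 (minor) to 5 (severe) -- assumed scale
    likelihood: int  # 1 (rare) to 5 (almost certain) -- assumed scale

    @property
    def score(self) -> int:
        # Simple severity x likelihood weighting, a common risk heuristic
        return self.severity * self.likelihood

def prioritize_mitigations(entries: list[ImpactEntry]) -> list[ImpactEntry]:
    """Return harms ordered by score so the worst trade-offs surface first."""
    harms = [e for e in entries if e.kind == "harm"]
    return sorted(harms, key=lambda e: e.score, reverse=True)

# A hypothetical taxonomy for a loan-approval model
taxonomy = [
    ImpactEntry("bias in loan approvals", "harm", severity=4, likelihood=4),
    ImpactEntry("exclusion of non-digital users", "harm", severity=3, likelihood=4),
    ImpactEntry("faster application processing", "benefit", severity=2, likelihood=5),
]
```

Structuring the taxonomy this way makes the trade-off explicit: here, the bias harm (score 16) would be mitigated before the exclusion harm (score 12), while the benefit entries stay visible for the overall weighing.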
3. Who should be involved
You are guided to identify clear roles and responsibilities. The standard does not prescribe titles, but it expects you to assign accountability for reviewing, approving, and updating each assessment. This helps bring together teams across product, legal, compliance, and risk.
4. How to document and manage the process
ISO 42005 outlines how to record your findings, capture your reasoning, and build traceability into your assessments. You are expected to document outcomes along with the evidence and assumptions behind your decisions. This creates a reliable audit trail and helps you review or explain decisions later. It also encourages you to keep your assessments current. If your AI system changes or if new risks emerge, your documentation should reflect that.
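One way to picture that traceability requirement is an append-only assessment record. This is a minimal sketch under assumed field names; the standard itself does not define a schema, and the example system and dates are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AssessmentRecord:
    system_name: str
    findings: list[str]       # what you found
    assumptions: list[str]    # the assumptions behind the decision
    decision: str             # the outcome you are accountable for
    history: list[tuple[date, str]] = field(default_factory=list)

    def update(self, when: date, note: str) -> None:
        # Append-only history preserves the audit trail for later review
        self.history.append((when, note))

# Hypothetical record for an illustrative system
record = AssessmentRecord(
    system_name="resume-screening-model",
    findings=["possible gender bias in ranking"],
    assumptions=["training data reflects 2020-2024 applicants"],
    decision="deploy with human review of rejections",
)
record.update(date(2025, 6, 1), "initial assessment approved")
record.update(date(2025, 9, 1), "re-assessed after model retraining")
```

The design choice worth noting is that history entries are only ever appended, never edited, which is what lets you explain later why a decision looked reasonable at the time it was made.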
5. How to link assessments with decision-making
The standard does not treat impact assessments as isolated paperwork. It encourages connecting them to internal approvals, risk registers, incident response processes, and post-deployment monitoring. In other words, your assessments should influence how you manage the AI system over time.
By covering these areas, ISO 42005 helps create a repeatable, thoughtful approach to AI impact. It is not about slowing down innovation. It is about ensuring your AI systems are designed with built-in awareness and accountability.
How ISO 42005 supports ISO 42001: A complementary relationship
If you are already working with ISO 42001, you know it gives you a strong foundation for managing AI governance across your organization. However, ISO 42001 focuses on the big picture: the policies, procedures, roles, and controls that shape how you manage AI overall.
What ISO 42001 does not give you is detailed guidance on how to assess the impact of a specific AI system. That is where ISO 42005 comes in.
Think of ISO 42005 as a companion to ISO 42001. One helps you build a strong AI management system. The other enables you to apply that system at the ground level.
Here is how they fit together: ISO 42001 establishes the organization-wide AI management system, with the policies, roles, and controls that shape how you manage AI overall, while ISO 42005 guides the impact assessment of each individual AI system across its lifecycle.
You do not have to choose between the two. ISO 42005 works best when you use it within the structure you have built using ISO 42001. Together, they give you a complete toolkit for managing AI responsibly and ensuring each system is held to that standard.
Why you should care
Whether you are leading product, security, or the business itself, you already know that AI is not just another tool. It is shaping customer experiences, compliance obligations, and long-term reputation. That means you cannot afford to treat its impact as an afterthought.
If you are a CEO, ISO 42005 helps you demonstrate that your organization takes AI governance seriously. It gives you a way to show regulators, customers, and your board that your systems are high-performing, responsible, and aligned with your values. It also helps you make AI part of your ESG, risk, or trust narratives in a real, verifiable way.
If you are a CISO or compliance leader, ISO 42005 helps close a critical gap in your AI risk management. It makes your governance program operational by giving you a straightforward process for reviewing impact, assigning ownership, and documenting decisions. It brings your risk, legal, and product teams onto the same page and keeps them there.
You already know that responsible AI is becoming a competitive advantage. But trust is not built on intentions. It is built on action. ISO 42005 helps you take that next step.
How to implement ISO 42005 in your organization
If you are ready to start using ISO 42005, you do not need to wait for a formal mandate or external trigger. You can begin with a single AI system and build from there. The standard gives you a straightforward process to follow, so you are not left guessing what to do or when to do it.
Here is how you can apply ISO 42005 across the lifecycle of an AI system, from planning to monitoring:

1. Identify systems that require assessment
Use internal criteria to determine which AI systems have the potential to cause a significant impact. This could include systems that affect people’s access to services, involve personal data, or have high levels of autonomy.
2. Define thresholds and triggers
Establish clear rules for when an impact assessment is required. This includes setting thresholds for risk level, system sensitivity, or deployment context.
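Steps 1 and 2 above amount to a decision rule your teams can agree on in advance. Here is a minimal sketch of what such a rule could look like in code; the criteria names and the autonomy threshold are assumptions, since ISO 42005 expects you to define your own.

```python
def assessment_required(system: dict) -> bool:
    """Return True if any trigger condition is met for this AI system."""
    triggers = [
        system.get("affects_access_to_services", False),  # e.g. lending, hiring
        system.get("processes_personal_data", False),
        system.get("autonomy_level", 0) >= 3,  # assumed 0-5 autonomy scale
        system.get("public_facing", False),
    ]
    return any(triggers)

# Hypothetical systems for illustration
chatbot = {"public_facing": True, "autonomy_level": 1}
batch_tool = {"autonomy_level": 0}

assessment_required(chatbot)     # True: public-facing systems are in scope
assessment_required(batch_tool)  # False under these assumed thresholds
```

The point is not the specific criteria but that the rule is written down once and applied the same way to every system, which is what keeps impact reviews from being reactive or skipped.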
3. Assign roles and responsibilities
Determine who is accountable for conducting, reviewing, approving, and updating the assessment. Ensure cross-functional representation from compliance, risk, product, and legal.
4. Frame the scope of the assessment
Describe what the system is designed to do, who it interacts with, and what environments it operates in. Clarify its intended outcomes and any constraints.
5. Gather relevant information
Collect technical documentation, data flow diagrams, use case descriptions, and prior incidents (if any). This forms the input for your review.
6. Identify foreseeable impacts
Evaluate how the system could affect users, stakeholders, and society — including direct, indirect, short-term, and long-term effects.
7. Analyze risks and benefits
Weigh the benefits of deploying the system against potential negative outcomes. This includes ethical, legal, reputational, and societal factors.
8. Determine mitigation measures
Identify how to reduce or avoid harmful impacts. This might include changes to design, user controls, transparency features, or manual overrides.
9. Document your findings and decisions
Record what you assessed, how you assessed it, what you found, and what actions you took. Keep your reasoning clear and accessible for review.
10. Get internal approval before deployment
Ensure the assessment is reviewed and approved by designated stakeholders. This builds accountability and alignment.
11. Monitor and review over time
Revisit the impact assessment when the system changes, when new risks emerge, or at regular intervals. Make updates as needed.
Common triggers include changes in the system’s purpose, use of new data types, updates to the model or algorithm, feedback from impacted users, emerging risks, or changes in laws and classification. ISO 42005 expects you to define these thresholds clearly so you are not caught off guard.
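The re-assessment triggers listed above can be kept as an explicit watchlist, so that a change event automatically tells you whether a fresh assessment is due. A minimal sketch, with event names that are assumptions made for illustration:

```python
# Watchlist drawn from the common triggers above; names are illustrative
REASSESSMENT_TRIGGERS = {
    "purpose_change",
    "new_data_type",
    "model_update",
    "user_feedback_flag",
    "emerging_risk",
    "legal_change",
}

def needs_reassessment(events: set[str]) -> set[str]:
    """Return which observed change events should trigger a fresh assessment."""
    return events & REASSESSMENT_TRIGGERS

needs_reassessment({"model_update", "ui_redesign"})  # -> {"model_update"}
```

A cosmetic change like a UI redesign passes through silently, while a model update is flagged, which is exactly the "defined thresholds" behavior the standard asks for.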
12. Link to your broader governance framework
Link your impact assessment process with existing systems you already use, like your ISO 42001 framework, AI risk register, data protection impact assessments (DPIAs), cybersecurity audits, and incident response plans. This ensures that AI risks are not assessed in isolation, but in the same context as privacy, security, and compliance. It helps reduce duplication, improves coordination, and embeds ISO 42005 into your operational reality.
How Scrut helps
You do not need to manage AI impact assessments manually. Scrut gives you the structure and tools to apply ISO 42005 with confidence, clarity, and consistency.
Here is how Scrut supports you at each step:
1. Track your AI systems
Maintain an inventory of all AI systems in use across your organization, along with ownership, lifecycle stage, and associated risks.
2. Run guided assessments
Use built-in workflows to evaluate the impact of each AI system. Scrut helps you answer key questions aligned with ISO 42005, from foreseeable risks to decision accountability.
3. Assign responsibilities
Link assessments to specific team members in legal, product, risk, or compliance. This ensures that everyone knows what they are responsible for and that nothing falls through the cracks.
4. Document your findings
Capture your impact assessments, mitigation actions, and review history in one place. Scrut keeps an audit trail that is easy to review and export when needed.
5. Review and update assessments over time
Set reminders to revisit your assessments as systems evolve. Scrut helps you keep everything up to date without starting from scratch.
6. Connect assessments to broader governance
Link ISO 42005 assessments with your AI risk register, ISO 42001 controls, and incident response plans, all within the same platform.
With Scrut, ISO 42005 becomes part of how you build and operate AI responsibly, not just a theoretical checklist.
FAQs
Do I need to assess every single AI system?
Not necessarily. ISO 42005 encourages you to define thresholds and criteria for what counts as “significant” impact. Systems with high risk or public-facing functions are a good place to start.
Who should lead the impact assessment process?
The standard does not mandate a specific role, but it recommends assigning responsibilities across functions. Product, compliance, legal, risk, and technical teams should be involved.
Can ISO 42005 help with compliance under the EU AI Act?
Yes. While not officially harmonized, ISO 42005 aligns closely with Article 27 of the EU AI Act, which requires Fundamental Rights Impact Assessments for high-risk AI systems.
Is this standard only for large enterprises?
No. ISO 42005 is designed to be scalable. Even smaller organizations or startups can use it to bring structure and defensibility to their AI practices.
Can I use ISO 42005 without already being ISO 42001 certified?
Absolutely. While the two standards complement each other, ISO 42005 can be used independently to bring rigor to AI system-level impact assessments.
How is ISO 42005 different from ISO 42001?
ISO 42001 focuses on setting up an AI management system at an organizational level. ISO 42005 dives deeper into assessing the impact of each AI system across its lifecycle. The two are designed to work together.