In this episode of Risk Grustlers, Nicholas Muy (CISO at Scrut Automation) sits down with Sandip Wadje (Managing Director – Global Head of Emerging Technology Operational Risks & Intelligence at BNP Paribas) for a grounded conversation about what AI adoption is actually surfacing inside organizations. The pressure to move fast is real. So is the productivity upside. But once teams begin rolling out tools like Copilot, the gaps they could once ignore start showing up everywhere.
Sandip breaks down why AI governance is no longer just a policy discussion. It is now a practical, cross-functional problem that touches data classification, access management, internal controls, and business enablement all at once. The conversation also gets into what smaller organizations should prioritize first, why rigid control design can backfire, and why leadership teams need to spend more time understanding what AI will actually change in their day-to-day decisions.
Listen to the full episode here.
Here are some key highlights from the episode.
Nick: Where do you think AI governance stands today, especially after the last few years of mass GenAI adoption?
Sandip: AI governance has been evolving for some time, but GenAI changed the pace of the conversation. What made it different was how quickly it became tied to productivity. Suddenly, business teams saw a tool that could move work faster, create obvious gains, and justify more aggressive adoption. That is where governance became more complex.
Once business pushes forward, legal, risk, compliance, IT, and security all have to deal with the implications. It is not only about enabling the technology. It is about facing the areas the organization had not really cleaned up before, like data classification and access control.
Nick: What are you hearing from peers who are trying to roll out tools like Copilot?
Sandip: A big part of the conversation is around how these tools inherit permissions and make oversharing harder to ignore. Data classification, which many organizations had tools for but never treated as urgent, is now becoming mandatory.
Leadership sees the value of the AI tool and starts saying, ‘If 100 percent classification is what it takes, then let us do it.’
The problem is that this is not something IT can just switch on overnight. It takes coordination with business, data protection teams, and data leaders. And even then, there are edge cases everywhere, especially with older files, downloaded documents, and PDFs that were never classified in the first place.
Nick: So the challenge is not just about labeling data. It is also about understanding who can really access what.
Sandip: Exactly. One issue AI is exposing is that role-based access on paper often does not match actual entitlements in practice. Managers may believe they are certifying access correctly, but the true permissions a person ends up with can be very different.
That becomes much harder to ignore once Copilot starts surfacing information people were never expected to see. So the spotlight is not just on data. It is also very much on access management.
Nick: For smaller teams that do not have huge budgets or large enterprise security orgs, what should they focus on first?
Sandip: First, they need to rethink what confidentiality actually means in a post-GenAI environment. Something an organization once treated as highly proprietary may no longer deserve that label if a GenAI tool can recreate it quickly. That does not mean nothing matters. It means teams need to be more honest about what their real crown jewels are.
Second, if you are in a regulated environment, do not create internal controls you are not going to follow. That becomes a problem very quickly when regulators ask what standards you operate by and then compare that to what is happening in reality.
Nick: That point about controls feels especially important because a lot of teams create idealized controls and then fail them in practice.
Sandip: That happens all the time. Organizations draft impressive-sounding baselines and then get stuck defending them when the real operating environment looks nothing like the control language. If you say data must live in one place and then AI workflows cause it to spread across systems, you are left explaining why your own control no longer reflects reality.
So one of the most practical things teams can do is revisit their controls in light of AI. Ask whether those controls still make sense, whether they are achievable, and whether the evidence would actually hold up when someone asks to see it.
Nick: You also made the point that being too restrictive can create a different kind of problem.
Sandip: Yes, because if organizations do not create a safe way for employees to use these tools, people will still find ways to use them elsewhere. That is the reality. So teams need to be pragmatic. Protect the true crown jewels, but do not pretend blanket denial will solve the problem.
What works better is creating what I call a collaborative kitchen, where people can use AI in the right way instead of pushing that activity into the shadows. Otherwise, the organization ends up breaking its own rules without even realizing it.
Nick: What should leadership teams be paying more attention to as AI keeps evolving?
Sandip: One big issue is education. The industry is trying to catch up to a technology journey it did not really follow step by step. Many leaders use tools like ChatGPT, but that does not mean they understand the foundations, the tradeoffs, or even concepts like inference well enough to govern responsibly.
That creates a gap between excitement and understanding. So there is a real need for executive education, not in the abstract, but in terms of what AI means for the business, for risk, for decision-making, and for the specific role a leader holds.
Nick: If you had to leave listeners with one final thought, what would it be?
Sandip: Ask what AI means to your business and to your role specifically. Whether you are a Chief Risk Officer, CIO, CISO, or another decision-maker, think about how AI will change the way your job works over the next two or three years.
That reflection gives you a much better starting point than jumping straight into a giant framework or trying to copy what a much larger organization is doing. The goal is not to be perfect. The goal is to understand the shift clearly enough to respond in a way that is realistic, flexible, and usable.
This episode is a useful reminder that AI governance is not separate from the rest of the business. It is showing teams where old assumptions no longer hold, where controls have become too rigid, and where collaboration needs to get better fast. That is what makes this conversation worth paying attention to.