
Risk Grustlers EP 19 | Securing AI agent ecosystems

Last updated on December 17, 2025 · 3 min. read

In this episode of Risk Grustlers, Aayush Ghosh Choudhury, Co-Founder and CEO of Scrut Automation, speaks with Sounil Yu, Chief AI Safety Officer at Knostic, about what it actually takes to secure AI agent ecosystems.

As agents move beyond simple automation and begin sensing, reasoning, deciding, and acting inside real systems, traditional security assumptions start to break down. Sounil draws on decades of experience in cybersecurity to explain why securing agents is not just about adding new controls, but about rethinking mental models, governance structures, and hygiene practices for a world where software can infer, experiment, and make decisions.

Watch the full episode here.

Here are some key highlights from the conversation.

Aayush: When we talk about AI agents today, what actually makes them different from traditional software or automation?

Sounil: The key difference is decision-making. Traditional automation usually handles sensing and acting, and sometimes sense-making. With agents, we are now allowing machines to make decisions. That is a significant shift. Once decision-making is automated, the system stops being purely deterministic. This is where many of our existing security assumptions start to break down, because we now need controls not just around actions, but around how and why decisions are being made.

Aayush: You introduced a useful breakdown of sensing, sense-making, decision-making, and acting. Why is this model important for securing agents?

Sounil: I like using mental models because they help us reason about complex systems. In this case, the four stages matter because each one introduces different risks. Sensing is about data inputs and whether you trust the sensors. Sense-making is about interpretation and consistency. Decision-making is where autonomy enters the picture, and acting is where real-world impact happens. Many organizations already secure sensing and acting reasonably well. The real challenge with agents is decision-making, because once machines decide, you need entirely new safeguards.

Aayush: How do security controls change as you move across those four stages?

Sounil: If a machine is only sensing and acting, you mainly care about trusted inputs and having a clear stop or rollback mechanism. When you add sense-making, you need controls to ensure the system behaves predictably and consistently. But when you allow decision-making, you should treat that system as experimental. That means tighter scoping, stronger oversight, and the assumption that things will go wrong. Decision-making by agents should never be treated as business as usual from day one.
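Sounil's point can be made concrete with a small sketch. This is a hypothetical mapping, not anything from Scrut or Knostic: the stage names and control lists are illustrative, and the idea is simply that the set of required safeguards grows sharply once decision-making is enabled.

```python
from enum import Enum, auto

class Stage(Enum):
    SENSING = auto()
    SENSE_MAKING = auto()
    DECISION_MAKING = auto()
    ACTING = auto()

# Illustrative controls per stage, paraphrasing the conversation:
# trusted inputs and rollback for sensing/acting, consistency checks
# for sense-making, and experimental-grade oversight for decisions.
CONTROLS = {
    Stage.SENSING: ["verify input provenance"],
    Stage.SENSE_MAKING: ["consistency checks", "predictability tests"],
    Stage.DECISION_MAKING: ["tight scoping", "human oversight",
                            "treat as experimental"],
    Stage.ACTING: ["stop mechanism", "rollback plan"],
}

def required_controls(stages):
    """Union of controls needed for an agent's enabled stages."""
    controls = []
    for stage in stages:
        controls.extend(CONTROLS[stage])
    return controls

# A sense-and-act automation needs far fewer safeguards than a
# fully autonomous agent that also makes decisions.
simple = required_controls([Stage.SENSING, Stage.ACTING])
autonomous = required_controls(list(Stage))
```

The asymmetry is the point: enabling the decision-making stage is not one more item on the checklist but a shift into an experimental operating mode.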

Aayush: Trust is a recurring theme in agentic systems. What makes agent outcomes dependable?

Sounil: Dependability has two parts. The first is reliability. Can you trust the output, whether that is an answer, an action, or an inference? The second is data responsibility. Agents often interact with sensitive or business-critical information. If customers do not trust how their data is handled, or what is inferred from it, adoption will stall regardless of how advanced the technology is. Both aspects are required for trust to exist.

Aayush: Many smaller teams worry that strong security controls will slow them down. What hygiene practices actually matter most?

Sounil: We often say “basic hygiene,” but the reality is that what matters most is also what is hardest to implement. Insurance data consistently shows that identity hardening and enforcing MFA for privileged users have the biggest impact on reducing breaches. These controls are difficult because they require expertise that small teams may not have. In many cases, outsourcing identity management to providers who specialize in it is the most practical way to reduce risk.

Aayush: You compared deploying agents to hiring interns. Can you explain that analogy?

Sounil: If you hire a hundred interns, most of them will fail or make mistakes. A few will do something genuinely valuable. The real value is not the individual actions, but the workflows they discover. Agents should be treated the same way. They are a way to experiment at scale. Once a useful workflow is identified, humans should turn it into a deterministic, repeatable process with stronger controls. Agents are best seen as a discovery mechanism, not a final state.

Aayush: How should governance and oversight evolve when organizations deploy many agents?

Sounil: Humans cannot realistically oversee hundreds of agents directly. In the human world, we use concepts like span of control to manage complexity. Similar structures will be needed for agents. You may end up reviewing manager agents rather than individual ones. The goal is to apply familiar governance principles to a new kind of workforce, rather than trying to invent entirely new management models from scratch.
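As a rough illustration of the span-of-control idea, the sketch below (entirely hypothetical, with an assumed limit of ten direct reports per reviewer) counts how many layers of manager agents sit between a fleet of worker agents and a single human reviewer.

```python
SPAN_OF_CONTROL = 10  # assumed limit on direct reports per reviewer

def manager_layers(num_agents: int, span: int = SPAN_OF_CONTROL) -> int:
    """Layers of manager agents needed before one human can review the top."""
    layers = 0
    while num_agents > span:
        num_agents = -(-num_agents // span)  # ceil division: managers per layer
        layers += 1
    return layers

# 500 workers -> 50 managers -> 5 senior managers: two manager layers,
# and the human reviews only the five at the top.
layers = manager_layers(500)
```

The human never reviews 500 agents; they review the handful of manager agents at the top, which is exactly the familiar organizational pattern applied to a machine workforce.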

Aayush: Even with strong access controls, agents can still infer sensitive information. Why is that such a challenge?

Sounil: This is where the DIKW pyramid is helpful. Even if your data security is perfect, agents can infer sensitive knowledge by combining information they are legitimately allowed to access. That creates a new class of risk. Knowledge security is about preventing oversharing and inappropriate inference, not just controlling raw data access. Traditional technical controls struggle here, which is why people and process controls become critical.

Aayush: As a closing thought, how should teams think about setting boundaries for agents?

Sounil: We often say agents need context, but what really matters is need to know. Agents should have discretion only within clearly defined boundaries. Even if an agent technically has access to information, it should not surface it unless it is appropriate for the persona and task. Securing agent ecosystems is ultimately about defining and enforcing those boundaries so autonomy does not turn into overreach.
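The need-to-know boundary Sounil describes can be sketched as a persona-scoped filter. Everything here is hypothetical (the personas, field names, and `surface` helper are illustrative): the point is that raw access and what an agent is allowed to surface are enforced separately.

```python
# What each persona needs to know, independent of what it *can* access.
PERSONA_FIELDS = {
    "support_agent": {"name", "ticket_history"},
    "finance_agent": {"name", "salary", "invoice_total"},
}

def surface(record: dict, persona: str) -> dict:
    """Filter a record down to the fields this persona needs for its task,
    even if the agent technically has access to the whole record."""
    allowed = PERSONA_FIELDS.get(persona, set())
    return {k: v for k, v in record.items() if k in allowed}

employee = {"name": "Ada", "salary": 120000, "ticket_history": ["#42"]}
visible = surface(employee, "support_agent")  # salary is never surfaced
```

A filter like this does not replace access control; it adds the second boundary Sounil argues for, so that legitimate access does not turn into inappropriate disclosure.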
