Securing agentic AI ecosystems
Featuring
Sounil Yu
In this episode of Risk Grustlers, Aayush Ghosh Choudhury, CEO and Co-Founder of Scrut, sits down with Sounil Yu, Chief AI Officer at Knostic and author of the Cyber Defense Matrix, to talk about what it really means to run agentic AI safely: from “basic” security hygiene and identity hardening all the way to drawing hard boundaries around what agents can decide, see, and share.




Description
In this episode of Risk Grustlers, Sounil Yu (Chief AI Officer at Knostic and author of the Cyber Defense Matrix) joins us to unpack how security leaders should really think about AI agents, hygiene, and “knowledge security.” He starts by breaking down cybersecurity across the NIST functions and core asset classes, and why most careers in security are inherently interdisciplinary.
From there, the conversation dives into how to prioritize security hygiene when you’re already stretched thin, what to secure first when AI enters the picture, and why the next frontier isn’t just data security but knowledge security, i.e., controlling what models can infer, reveal, and share about your business.
If you’re building or buying AI agent ecosystems and want a clearer mental model for the risks, this conversation is a must-watch.
Highlights from the episode
- What “dependable” AI really means: judging both the quality of agent outcomes and how responsibly vendors handle your data.
- Why even “basic” security controls remain hard for smaller teams, and which few hygiene practices actually reduce real breach risk.
- How AI shifts security from protecting data to protecting “knowledge,” and why controlling what models can infer or reveal is the next frontier.
“The problem of hallucinations isn’t just an AI bug; it’s a knowledge quality problem, and that means it needs its own controls, not just better prompts.”
- Sounil Yu, Chief AI Officer, Knostic