Live Webinar | 26 June 2025 9AM PT
From Black Box to Boardroom: Operationalizing Trust in AI Governance
HomePodcast

AI With a Pinch of Responsibility

Featuring

Walter Haydock

Taking a slight departure from our regular themes of exploring the journeys of Risk Grustlers, we’re here with an on-demand podcast with the one and only, Walter Haydock, Founder and CEO of StackAware, to demystify and dig into the role of responsibility in today’s AI threat landscape. Walter is a true trailblazer when it comes to solving for AI security. With a profound understanding of AI’s inner workings, he’s the ultimate demystifier of Language Models’ core applications. Join us to tap into his unmatched insights.

Category

Walter Haydock

CEO, StackAware

AI With a Pinch of Responsibility

00:00 / NaN:NaN

Listen on Your favourite platforms

Description

In this episode, Walter gives us a crash course on all things LLM – from listing the  differences between using a self-hosted LLM and a third-party LLM to explaining the top five risks to watch out for while using them.

Application developers are often overwhelmed with the bundle of resources out there, especially when working with LLM-based applications. The OWASP Top 10 and the NIST AI RMF framework, to name just a few – so what should be the key concerns?

That’s exactly what we’re solving here. Tune in to listen to the top 5 concerns that, according to Walter, should be on the top of your list when creating a tool on top of a LLM!

Last but not least, as promised, we are linking the FREE resources down below, so don’t forget to take a look and sharpen your AI security knowledge.

Highlights from the episode

  • Discussing the pros and cons of using an open-source LLM Vs. third-party LLM
  • Decoding the key concerns to look out for when leveraging a third-party LLM to create a tool
  • Understanding key differences between direct prompt injection and indirect prompt injection
  • Navigating the uncertainty of privacy regulations for LLMs in different regions
“Ensuring that you can manage your own infrastructure is really important to hammer down before you decide that you’re going to run an LLM model on your own.”

“Using AI inherently involves a degree of risk. To tread wisely, especially in terms of privacy, the smart approach would be to limit the data you collect and process
Subscribe to our newsletter
Get monthly updates and curated industry insights
Subscribe
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

Join the Unlimited

Get that doubles sales or startups is send a performance

Book a Demo

Share on

Join our community and be the first to know about updates!

Subscribe
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

Ready to see what security-first GRC really looks like?

The Scrut Platform helps you move fast, stay compliant, and build securely from the start.

Book a Demo
Book a Demo