AI With a Pinch of Responsibility
Featuring
Walter Haydock
In this special episode, we sit down with Walter Haydock, Founder and CEO of StackAware, to unpack what responsibility really means in AI security. From LLMs to real-world threats, Walter cuts through the noise with sharp insights and sharper questions. This one’s for the AI-curious and cautious alike.


Description
In this episode, Walter gives us a crash course on all things LLM: from the differences between using a self-hosted LLM and a third-party LLM to the top five risks to watch out for when using them.
Application developers are often overwhelmed by the bundle of resources out there, especially when working with LLM-based applications. The OWASP Top 10 and the NIST AI RMF, to name just a few — so what should the key concerns be?
That’s exactly what we’re tackling here. Tune in to hear the top five concerns that, according to Walter, should be at the top of your list when building a tool on top of an LLM!
Last but not least, as promised, we’re linking the free resources below, so don’t forget to take a look and sharpen your AI security knowledge.
Highlights from the episode
- Discussing the pros and cons of using an open-source LLM vs. a third-party LLM
- Decoding the key concerns to look out for when leveraging a third-party LLM to create a tool
- Understanding key differences between direct prompt injection and indirect prompt injection
“Using AI inherently involves a degree of risk. To tread wisely, especially in terms of privacy, the smart approach would be to limit the data you collect and process.”