
Fill in the details to watch the webcast
A Spotlight on Our Guest Speaker
We sat down with the one and only Walter Haydock, Founder and CEO of StackAware, for an on-demand podcast to demystify and dig into the role of responsibility in today’s AI threat landscape.
Walter is a true trailblazer in AI security. With a deep understanding of AI’s inner workings, he’s the ultimate demystifier of large language models and their core applications. Join us to tap into his unmatched insights.
About the Episode
Walter gives us a crash course on all things LLM – from the differences between using a self-hosted LLM and a third-party LLM to the top five risks to watch out for when using them.
Application developers are often overwhelmed by the sheer number of resources out there, especially when building LLM-based applications. The OWASP Top 10 and the NIST AI Risk Management Framework (RMF), to name just two. So what should your key concerns be?
That’s exactly what we’re solving here. Tune in to hear the top five concerns that, according to Walter, should be at the top of your list when building a tool on top of an LLM!
Some highlights you can’t miss!
Words of Wisdom by Walter
Making sure you can manage your own infrastructure is really important to hammer down before you decide to run an LLM on your own.
Using AI inherently involves a degree of risk. To tread wisely, especially in terms of privacy, the smart approach is to limit the data you collect and process.