In the first-ever episode of our SecuriTea Time podcast, we have two special guests joining us from the renowned cybersecurity consulting firm, Kalles Group, based in Seattle.
Our first guest is none other than Derek Kalles, the visionary founder of Kalles Group. With an extensive background in business and technology consulting, Derek has built a company that delivers premier consulting services to safeguard the future of businesses and communities.
Our second guest is Glen Willis, a seasoned cybersecurity and privacy leader with over two decades of experience in the technology industry. Glen has successfully tackled various challenges, ranging from data center operations to strategic governance functions.
In today’s exciting episode, our guests share their valuable insights and expertise on mastering cloud security strategies with our host Nick Muy. From navigating the ever-evolving cybersecurity landscape to tackling the unique challenges of data storage and access in the cloud, they’ve got you covered.
So, without further ado, let’s dive right into this captivating episode of SecuriTea Time!
NM: So, Glen, to start off, what are some of the unique cybersecurity challenges that organizations now face as they embrace the cloud?
GW: Over the years, we’ve realized that we can’t rely solely on the cloud platform’s security. We have to bring our own security approach and make a serious investment in protecting our programs, projects, and operations. Let’s debunk the myth that the cloud is automatically secure.

Different cloud services carry different considerations. With infrastructure as a service, the responsibility for what you deploy lies largely with you. With software as a service, it’s crucial to treat it as a third-party risk and apply traditional risk-assessment approaches. The challenge lies in adapting your existing team’s expertise to protect the cloud effectively, leveraging the secure tooling provided by platforms like AWS or Azure. The cloud doesn’t change the need for a serious security investment on your part, even as a cloud consumer. It’s essential to address this upfront to avoid playing catch-up.
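Glen’s distinction between service models is often described as the shared-responsibility model. As a rough illustration only — the exact split varies by provider, service, and contract — a minimal Python sketch:

```python
# Rough sketch of the cloud shared-responsibility model.
# The control names and the split are illustrative, not any provider's official matrix.
CUSTOMER_RESPONSIBILITIES = {
    "iaas": {"data", "identity", "applications", "os_patching", "network_config"},
    "paas": {"data", "identity", "applications"},
    "saas": {"data", "identity"},
}

def customer_owns(service_model: str, control: str) -> bool:
    """Return True if the cloud customer (not the provider) owns this control."""
    return control in CUSTOMER_RESPONSIBILITIES[service_model.lower()]

# Under IaaS you still patch the operating system yourself...
print(customer_owns("iaas", "os_patching"))  # True
# ...but under SaaS the provider handles it; your job shifts to third-party risk.
print(customer_owns("saas", "os_patching"))  # False
```

The point of the lookup is Glen’s: the security investment never goes away, it just moves between lanes depending on the service model.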
NM: What are the strategies that people should be thinking about in terms of mitigating and detecting security breaches and unauthorized access to the cloud?
GW: Understanding the tooling available with the cloud service you’re using is key. Different cloud providers excel in different areas of security tooling, so it’s crucial to have architects on board who can guide you through the offerings and their capabilities.
Now, staying up to date with non-native tooling is equally important. Often, these tools outshine the native ones, unless you have deep expertise specifically in AWS or other platforms. Non-native tools focus on security as their core competency and can provide functionalities beyond what the native tools offer.
It’s all about knowing your priorities, understanding your real requirements, and aligning your tooling choices accordingly. It’s not just about the tools themselves. You need the expertise and proficiency to fully leverage the value of the chosen tooling.
NM: Derek, I think it’d be interesting to get your thoughts on what zero trust means for organizations moving to cloud or in cloud now.
DK: It all starts with understanding what you have and assessing your needs. When it comes to zero trust, it’s important to look at how it impacts your people, product, and customers before diving into segmentation strategies, privileged access, and monitoring.
Zero trust isn’t a magic solution that makes everything easier. It’s about improving your security posture dynamically and allowing your security professionals to focus on the right things within the constraints of capacity and resources. Building big walls and relying on dragons may work in Game of Thrones, but in the real world, we need a different approach.
Zero trust is a journey, a mindset, and an operating model that wraps around your technical and staffing choices. It’s about leveraging native toolsets, activating them with rigor, and utilizing technologies to streamline laborious processes. This way, your security team can focus on data analysis, incident response, and preventing bad things from happening.
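The “verify every request” mindset Derek describes can be sketched as a per-request policy check that weighs identity, device posture, and resource sensitivity instead of trusting network location. The signal names and rules below are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Hypothetical signals a zero-trust policy engine might evaluate per request.
    user_authenticated: bool
    mfa_passed: bool
    device_compliant: bool     # e.g. disk encrypted, OS patched
    resource_sensitivity: str  # "low", "medium", or "high"

def evaluate(req: AccessRequest) -> bool:
    """Allow only if every signal checks out; sensitive data demands step-up auth."""
    if not (req.user_authenticated and req.device_compliant):
        return False  # never grant access on network location alone
    if req.resource_sensitivity == "high" and not req.mfa_passed:
        return False  # require MFA for high-sensitivity resources
    return True

# A compliant device without MFA can reach low-sensitivity data...
print(evaluate(AccessRequest(True, False, True, "low")))   # True
# ...but not high-sensitivity data.
print(evaluate(AccessRequest(True, False, True, "high")))  # False
```

This is the “dynamic posture” idea in miniature: the decision is re-made on every request, not once at the wall.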
NM: Glen, I’d love to hear your thoughts on incident response and disaster recovery in the cloud.
GW: Planning is crucial, but you can’t plan for everything, so good prioritization is key. If a solution is untested, it’s important to have backups, especially in the cloud. The right approach, though, depends on your specific technology setup.
The resilience and adaptability of your deployed cloud systems matter. So, the principles stay unchanged. Identify the top potential incidents based on your business and tech profile, create a playbook, and practice through exercises. Learn from those exercises and incorporate the lessons. Also, understand the capabilities and limitations of the cloud platform. Make sure your technology is equipped to handle failover and recovery events effectively.
Test those platform capabilities and features collectively, not one at a time. Many organizations struggle to prioritize disaster recovery (DR) or business continuity planning (BCP) exercises because other work takes precedence. It’s tough to make time for it and have those conversations, but it’s critical to incorporate it into your yearly plan.
You need to hold yourself and your team accountable for following through on what you said you would do. If you haven’t started or you’re close to the deadline, you have to figure out how to handle it.
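Glen’s advice to “create a playbook and practice through exercises” can be made concrete as a structure the team actually walks through in tabletop drills. The incident types and steps below are illustrative placeholders, not a complete playbook:

```python
# Illustrative incident-response playbook; incident types and steps are placeholders
# an organization would replace based on its own business and tech profile.
PLAYBOOK = {
    "ransomware": [
        "isolate affected instances",
        "preserve forensic snapshots",
        "restore from known-good backups",
        "notify stakeholders and regulators",
    ],
    "credential_leak": [
        "revoke and rotate exposed credentials",
        "audit access logs for misuse",
        "notify affected users",
    ],
}

def run_exercise(incident: str) -> list[str]:
    """Return the ordered steps to rehearse for a given incident type."""
    steps = PLAYBOOK.get(incident)
    if steps is None:
        raise KeyError(f"No playbook entry for {incident!r}; write one before it happens")
    return steps

print(run_exercise("ransomware")[0])  # isolate affected instances
```

The `KeyError` branch is the accountability point: an exercise that surfaces a missing entry is the exercise doing its job.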
NM: Derek, what are some important things to keep track of? I’d venture to guess that many companies run these BCDR exercises mainly to satisfy specific regulatory requirements.
DK: Having a simple framework is crucial when making prioritization choices, whether it’s about security, business continuity, or resilience. In my opinion, the simple lanes to consider are revenue, people, customers, and regulatory aspects. That regulatory piece keeps expanding and growing in depth. What I want to emphasize is that regulatory influence is increasing, like with GDPR and various states creating their own standards.
Compliance and privacy are significant factors. Customers are increasingly demanding privacy and protection. Ultimately, regulatory agencies aim to safeguard individuals or groups. As leaders, we should acknowledge that there’s innovation happening upstream in these processes, and there are pragmatic steps we can take. You might wonder if there’s room for discretion when dealing with regulatory organizations, and the answer is yes. It’s a complex matter, and Glen, perhaps you can touch on a few good first steps.

We often advise organizations to bring pragmatism into their approach, considering their capabilities and alignment over time. Before diving into unified compliance, risk frameworks, and advanced processes or technologies, it’s important to ground yourself in understanding what you have and how you can align with regulatory requirements.
GW: There are many gotchas to watch out for. For example, some people assume that if a regulation requires a secondary site to be a certain distance from the primary site, being in the cloud automatically fulfills that requirement. But that’s not always the case, right?
You need to dig deeper and find out if your contract or the mechanisms in place actually provide that. Don’t make assumptions about the location, thinking it’s automatically different from running everything in your own data center. That’s a key point for me. Focus on your top risks, while still keeping your team aware of a broader set of risks. When it comes to vetting all these regulatory aspects, you want your team to challenge assumptions and ensure they truly understand what they’re getting and what they’re not getting from the service contract with your cloud provider.
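Glen’s secondary-site example is one you can verify rather than assume. A sketch that checks whether two sites are actually far enough apart using great-circle distance — the coordinates below are rough, illustrative values, not real data-center locations:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def meets_distance_requirement(primary, secondary, minimum_km):
    """Don't assume the cloud satisfies a separation rule: actually check it."""
    return haversine_km(*primary, *secondary) >= minimum_km

# Rough coordinates for two hypothetical US regions (illustrative only).
east_site = (38.9, -77.4)
west_site = (45.8, -119.7)
print(meets_distance_requirement(east_site, west_site, 500))  # True
```

The check is trivial; the hard part Glen names is getting the real locations out of your contract in the first place.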
NM: What are some relevant insights or experiences you’ve gathered from your clients, both past and present, that highlight the importance of pragmatism in cloud security?
GW: So, if someone asked me for two practical, actionable things, here they are. First, when it comes to testing and exercising, don’t just focus on incident scenarios where attackers try to breach your system. Also, test your ability to respond to a zero-day exploit. It’s not typically considered an incident, but it’s crucial to have a strong response capability. In the cloud, this becomes trickier because our cloud assets are more ephemeral and attractive targets. Can you quickly identify vulnerabilities and respond effectively? Practice and test this aspect because it’s critical.
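Glen’s “can you quickly identify vulnerabilities” question boils down to a triage query over your asset inventory: when an advisory lands, which instances are affected? The inventory fields, package, and version numbers below are made up for illustration:

```python
# Hypothetical asset inventory; in practice this would come from a CMDB
# or your cloud provider's inventory tooling, refreshed as instances churn.
INVENTORY = [
    {"instance": "web-1", "package": "openssl", "version": "1.1.1"},
    {"instance": "web-2", "package": "openssl", "version": "3.0.2"},
    {"instance": "db-1",  "package": "openssl", "version": "1.1.1"},
]

def affected_instances(package: str, vulnerable_versions: set[str]) -> list[str]:
    """List instances running a version named in the advisory."""
    return [
        asset["instance"]
        for asset in INVENTORY
        if asset["package"] == package and asset["version"] in vulnerable_versions
    ]

# A fictional advisory lands for openssl 1.1.1: who needs patching first?
print(affected_instances("openssl", {"1.1.1"}))  # ['web-1', 'db-1']
```

With ephemeral assets, the exercise worth rehearsing is how fast and how completely this query can be answered, not just the patching itself.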
The second pragmatic point is about project lifecycles and security. In the cloud, the goal is to enable project teams to work faster and deploy resources as needed. But at what point should security functions come into play? Is it during QA or later? You don’t want to burden teams with a long list of security requirements on day one, but if you wait too long, you’ll face rework and delays that aren’t practical. Find the right balance and engage project teams to ensure security considerations are integrated smoothly.
These two aspects—testing the ability to respond to zero-day exploits and aligning security with project timelines—are often overlooked but highly important. So, if I were new to cloud security, these are the areas I would prioritize and keep an eye on.
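Glen’s second point — engaging security at the right stage of the lifecycle — often takes the shape of a gate in the deployment pipeline: block only on findings above an agreed severity, rather than a day-one checklist. The severity levels and threshold below are assumptions a team would tune to its own risk appetite:

```python
# Illustrative security gate for a deployment pipeline.
# Severity names and the blocking threshold are assumptions, not a standard.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_passes(findings: list[dict], block_at: str = "high") -> bool:
    """Allow the deploy only if no finding meets or exceeds the threshold."""
    threshold = SEVERITY_RANK[block_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

scan = [
    {"id": "F-1", "severity": "low"},
    {"id": "F-2", "severity": "medium"},
]
print(gate_passes(scan))  # True: nothing high or critical, deploy proceeds

scan.append({"id": "F-3", "severity": "critical"})
print(gate_passes(scan))  # False: a critical finding blocks the deploy
```

Setting `block_at` is exactly the balance Glen describes: too strict and you burden teams on day one, too loose and the rework lands late.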
DK: One thing I want to add to the discussion is the importance of tending to the process and the ongoing maturation within your organization. It’s not just about achieving specific regulatory compliance or privacy outcomes. It’s about actively managing the process and investing in automation.
There’s a recurring theme I’ve noticed, where organizations are becoming more intentional in managing their processes and journeys, encompassing people, processes, and technology. It’s about ensuring successful engagement across the organization and avoiding being left out or blindsided.
We’ve all experienced situations where it’s like, “Hey, we’re deploying tomorrow. Can your team handle it tonight?” or “Oh no, something just happened. Fix it!” Hopefully, we can all smile or chuckle at those moments because we’ve been there.
But when security leaders take the time to tend to the process and educate others, even if it means slowing down a bit and implementing necessary gates, they ultimately achieve better results.
NM: Derek, how do you ensure that people understand the direct impact of security on customers?
DK: As a security service provider, customers often want to know what’s new, exciting, or the latest shiny object, whether it’s about productivity or vulnerabilities, right? But stepping away from the Jack Bauer or James Bond moments, what we actually see are the failures where the basics weren’t being done. It’s unfortunate and challenging. It could be a zero-day incident, some other type of security breach, or issues with continuity and resilience.
That’s why it’s crucial to anchor leadership and remind them that focusing on the basics forms the foundation for improving your security posture. Understanding the operational flows of your security engine and dialing in the right level of maturity are key. This allows you to then delve into the more exciting and shiny areas that truly advance your security posture.
However, I have to emphasize that many failures stem from neglecting the basics. It’s essential to gather everyone in a room and get them aligned, not necessarily for a Casino Royale scenario, but to establish a solid response playbook and address the fundamental elements of operational compliance. We must ensure that people understand how security directly impacts our customers.
That’s it for this episode’s highlights! Stay tuned for the highlights of our next episode where we’ll once again dive deep into the world of cybersecurity and compliance.