Greetings, everyone! We’re thrilled to welcome you to another riveting installment of the SecuriTea Time podcast.
Today, we have the honor of featuring Farshad Abasi, the mastermind behind Forward Security, headquartered in British Columbia. With a wealth of experience in software security, Farshad is a seasoned expert in the field.
SecuriTea Time is your ticket to exploring the captivating journeys of individuals in the realm of risk and compliance. Our guests come from diverse backgrounds, and I must say, delving into their narratives is both enjoyable and profoundly enlightening.
So, grab your favorite brew, and let’s prepare for a discussion that’s as invigorating as it is informative.
Now, let’s get right into this exciting episode of SecuriTea Time.
Nicholas Muy: Help us and our listeners learn a bit more about software security.
Farshad Abasi: Software security was often neglected given the traditional focus on network and infrastructure security. However, this has changed due to the advent of APIs and digital transformations. Nowadays, software is a prime target for cyberattackers, especially when it’s exposed beyond a company’s usual boundaries.
Nicholas Muy: For a long time, firewalls and infrastructure were the main concern. Today, software takes precedence over infrastructure for many companies, including ours, which is primarily a SaaS business. In the past, older infrastructure required more effort to secure; now we use Cloudflare as our front-end WAF, which frees us to focus on areas that may not receive as much attention. What are the typical oversights in software security?
Farshad Abasi: The most significant challenge lies in the fact that security assessments for software are often conducted by professionals with a background in network and infrastructure security. Traditional penetration testing methods, effective for infrastructure, may not work well for new custom software that lacks known vulnerabilities. Many security vendors offer only traditional penetration testing and may not grasp the intricacies of software. This can lead to a false sense of security. To conduct a comprehensive software security test, you need to incorporate source code reviews, design reviews, threat modeling, and manual inspection. Fortunately, guidelines are emerging to encourage thorough software testing, aligning with OWASP recommendations.
Nicholas Muy: It seems there are various aspects to consider when it comes to software security, including manual review, scoped testing, source code analysis, threat modeling, and penetration testing. Given these multiple dimensions, is there any guidance on prioritizing among them, considering that not everyone may have the expertise to cover all aspects effectively? Additionally, how can organizations bridge the potential gap between developers and security engineers, who may specialize in different areas of software security?
Farshad Abasi: In terms of prioritizing the various aspects of software security, there are some key considerations. Design reviews and threat modeling are valuable, but they often require skill sets that may not be readily available within a development team. Efforts are underway to create tools that simplify and democratize these processes, making them more accessible to developers. Until such tools mature, existing options like Microsoft’s Threat Modeling Tool are not especially developer-friendly: they can be time-consuming and produce confusing results.
Based on my experience in application security over the last 15 years, I’d say that, on average, an application typically has about 20-25 threat scenarios. These are not vulnerabilities themselves, but combinations of vulnerabilities that can be used to form an attack pathway. Design reviews tend to uncover around half of these threat scenarios in an average application, with the other half discovered through penetration testing.
Interestingly, high- and medium-risk issues are often found during design reviews, whereas penetration testing tends to identify medium- and low-risk vulnerabilities. This underscores the importance of design reviews in the software security process.
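The chaining Farshad describes, where individually modest vulnerabilities combine into a threat scenario, can be pictured as a search over an attack graph. Everything in this sketch (the states, the findings, the path from "external" to "data") is hypothetical, illustrative data, not any real tool's model:

```python
# Illustrative sketch: findings that are low risk alone can chain into a
# threat scenario. Model each finding as an edge between states an attacker
# can reach, then search for paths from an external foothold to the data.
from collections import deque

# Each entry: (state the attacker is in, vulnerability exploited, state reached)
findings = [
    ("external", "unauthenticated API endpoint", "app"),
    ("app", "verbose error messages", "app"),        # info leak; low risk alone
    ("app", "SSRF in image fetcher", "internal"),
    ("internal", "over-broad cloud role", "data"),
]

def attack_paths(start, goal):
    """Breadth-first search over the attack graph; yields each chain of
    vulnerabilities that takes an attacker from `start` to `goal`."""
    queue = deque([(start, [])])
    while queue:
        state, chain = queue.popleft()
        if state == goal:
            yield chain
            continue
        for src, vuln, dst in findings:
            if src == state and vuln not in chain:
                queue.append((dst, chain + [vuln]))

for path in attack_paths("external", "data"):
    print(" -> ".join(path))
```

A design review is essentially this exercise done on paper: asking which findings compose, rather than rating each one in isolation.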
However, it’s worth noting that while design reviews are valuable, they do require a certain level of skill and expertise. Developers can be trained to conduct basic design reviews themselves, which can be particularly beneficial for addressing low-hanging fruit, as roughly 80% of security issues fall into this category. This mentorship and training approach can help developers become more capable of identifying and mitigating security risks within their code.
Nevertheless, it’s essential to recognize that both security and software development are vast domains, and while developers can handle the basics, they may not be expected to reach the expertise level of a dedicated security professional. In those cases, subject-matter experts can still play a crucial role in addressing the remaining 20% of security challenges that developers may not be able to handle on their own.
Nicholas Muy: What strategies can organizations adopt to integrate security seamlessly into their development processes, especially for those that may not have access to a dedicated security team? How can DevSecOps and the security champion model be effectively implemented to address the shortage of security professionals and empower development teams to take on security responsibilities?
Farshad Abasi: The aim is to make security an enabler rather than a blocker, especially considering the shortage of security professionals. The concept of DevSecOps is often misunderstood. DevSecOps is meant to be a cultural transformation that addresses the scarcity of security experts. It’s not about hiring a specific DevSecOps position; it’s about embedding security practices within your existing DevOps teams.
The idea is to enable your DevOps teams to transition into DevSecOps teams by focusing on people, processes, and technology, particularly with an emphasis on enablement. Instead of trying to hire security professionals for each team, you can appoint security champions within your development teams. These champions may not be security experts, but individuals interested in security. They can allocate a portion of their time to learn about and work on security. In this way, they represent security within their teams.
For larger enterprises, the security champion model can scale effectively. For example, in a scenario with a limited internal AppSec team, you can use a third-party supplier like us to support your security champions. The federated model works well to ensure that security is integrated at every level.
To make this approach effective, it’s crucial to empower your development teams. Provide them with the knowledge and tools to perform basic security tasks, such as threat modeling and code analysis. Security activities should be integrated into development lifecycles, whether you’re following Waterfall, Agile, or DevOps. For instance, during design phases, conduct security design reviews, and when writing user stories, perform threat modeling. In coding phases, run code analysis. The key is to adapt these activities to your sprint cycles, ensuring they become a habitual part of your development process.
Nicholas Muy: What strategies can small and medium-sized companies adopt to efficiently address application security without the need for full-time AppSec hires? How can DevSecOps and automation play a role in helping these organizations overcome the challenges of hiring and retaining security professionals while effectively addressing security concerns?
Farshad Abasi: One of the most common mistakes in small and medium-sized companies is the tendency to hire full-time application security professionals. However, for many smaller organizations, this approach is often not efficient. Typically, they may not have enough security tasks to keep a full-time AppSec person busy.
In fact, even in larger organizations like HSBC, the assignment of a full-time AppSec person to every development team proved to be impractical. Through experimentation, it was determined that the average development team needs security expertise for about 10% to 20% of their time. This ensures that security tasks are adequately addressed without overwhelming the team.
Hiring full-time AppSec professionals can be challenging, time-consuming, and expensive. Furthermore, such individuals often leave within one to two years due to a lack of peers to collaborate with, leading to a limited career path within the organization.
The recommendation is that, unless your organization is large enough to support a dedicated AppSec team, you should focus on implementing DevSecOps as a cultural transformation. Enable your development teams to build secure software through automation, best practices, and standards. Resources like OWASP’s Application Security Verification Standard can guide you on building controls into your applications.
Automation plays a crucial role in this approach, but it’s essential to select tools that produce minimal false positives for your tech stack. Benchmark different tools against one another to make an informed choice. Additionally, correlating the results across different security scanners is vital to identify real issues and avoid false positive fatigue. Platforms that aggregate, correlate, and orchestrate the outputs of various scanners can significantly reduce the noise and improve developer satisfaction.
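As a rough illustration of what such a correlation platform does under the hood, here is a minimal sketch: findings from two hypothetical SAST tools are normalized to a common key and cross-checked, so issues that independent tools agree on surface first. The tool names, paths, and findings are invented for the example:

```python
# Correlate findings from multiple scanners to cut false-positive noise:
# normalize each tool's output to a (file, line, CWE) key, then flag issues
# that at least two independent tools report.
from collections import defaultdict

def normalize(raw):
    """Map a tool-specific finding dict onto a common (file, line, cwe) key."""
    return (raw["path"], raw["line"], raw["cwe"])

reports = {
    "sast_tool_a": [{"path": "app/login.py", "line": 42, "cwe": "CWE-89"},
                    {"path": "app/util.py", "line": 7, "cwe": "CWE-79"}],
    "sast_tool_b": [{"path": "app/login.py", "line": 42, "cwe": "CWE-89"}],
}

sightings = defaultdict(set)
for tool, findings in reports.items():
    for finding in findings:
        sightings[normalize(finding)].add(tool)

# Corroborated findings go to developers first; single-tool ones get triaged.
corroborated = [key for key, tools in sightings.items() if len(tools) >= 2]
print(corroborated)
```

Real platforms add fuzzier matching (nearby lines, related CWEs) and severity weighting, but the aggregate-then-correlate shape is the same.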
Ultimately, alongside effective tooling, training, and enabling your developers to embed security into their daily practices can address many low-hanging fruit vulnerabilities that commonly appear in application security assessments. Training and enablement can go a long way in eliminating recurrent issues and improving the overall security posture.
Nicholas Muy: How have you seen organizations navigate the world of software security, especially in terms of following best practices like manual reviews, design reviews, threat modeling, and source code analysis? Can you share some stories, either from your own experiences or those of your colleagues, where organizations played ‘cyber roulette’ with software security and how it either succeeded or went awry?
Farshad Abasi: Certainly, a couple of stories come to mind. In one instance, a well-known corporation in British Columbia had a pentest done by another company. The report revealed three vulnerabilities that, upon closer inspection, turned out to be non-issues: basic findings that could not actually be exploited to attack the system. However, when I looked at the application myself, I discovered numerous other issues that were not even mentioned in the pentest report. This highlighted the testing company’s lack of understanding and threat modeling, which could have left the client with a false sense of security.
In another case, a prominent financial institution in Canada underwent a pentest by a renowned firm, but the report came back empty after three weeks of testing. The client, suspicious of the results, approached us. When we applied our testing methodology, which included design, threat modeling, and code analysis, we identified over 25 issues, many of which were high and medium risk. This demonstrated a significant gap in the testing conducted by the other vendor, despite their reputation.
I’d like to emphasize the value of threat modeling. Take the Capital One breach, for example. The attacker leveraged two weaknesses in succession: first a server-side request forgery against a misconfigured web application firewall, and then an overly permissive AWS IAM role. On its own, each of these might not have been considered high risk. However, when combined, they created a threat scenario that allowed the attacker to compromise the system. This highlights the importance of assessing vulnerabilities across application and infrastructure layers and conducting thorough threat modeling to identify potential attack scenarios.
The Marriott and Starwood breach serves as another valuable example. In this case, attackers spent nearly two years conducting reconnaissance to identify weaknesses in the systems. After this extended period, they executed their attack. This underscores the critical importance of logging and monitoring in cybersecurity.
Regrettably, many software teams, in keeping with the 80-20 pattern mentioned earlier, either do not implement logging and monitoring or do so inadequately. Typically, they only save logs to a local file and overlook proper configuration. Consider the Marriott Starwood incident: the two-year window the attackers had to explore vulnerabilities was a result of insufficient logging and monitoring.
Effective logging and monitoring are key to identifying suspicious activities early on. For example, failed authorization attempts or access control failures should trigger log events. When someone attempts unauthorized interactions with your application, it is an indicator of potential attacks. Similarly, logging input validation failures is crucial, as it indicates malicious input attempts.
OWASP recommends logging and monitoring four main aspects: authentication success and failure, access control failure, input validation failure, and de-serialization failure. Implementing these four categories and centralizing log data, while correlating it with other infrastructure and environmental events, can provide a comprehensive picture of your system’s security status. This proactive approach can help identify threats and vulnerabilities more effectively, preventing potential security breaches. Unfortunately, many software development teams do not fully grasp the significance of logging and monitoring or fail to employ them adequately.
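A minimal sketch of emitting those four event categories as structured logs, using Python’s standard logging module. The helper and field names here are assumptions for illustration, not an OWASP-prescribed format; the point is that every event carries a machine-readable category a central collector can filter and correlate on:

```python
# Emit structured security events for the four OWASP-recommended categories:
# authentication, access control, input validation, and deserialization.
import json
import logging

logger = logging.getLogger("security")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def security_event(category, outcome, **details):
    """category: authn | access_control | input_validation | deserialization"""
    event = {"event": "security", "category": category,
             "outcome": outcome, **details}
    line = json.dumps(event)
    if outcome == "failure":
        logger.warning(line)   # failures are the attack indicators
    else:
        logger.info(line)
    return event

security_event("authn", "failure", user="alice", source_ip="203.0.113.9")
security_event("access_control", "failure", user="alice", resource="/admin")
security_event("input_validation", "failure", field="email", reason="malformed")
security_event("deserialization", "failure", reason="unexpected class")
```

Shipping these JSON lines to a central collector, rather than a local file, is what makes the correlation Farshad describes possible.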
Nicholas Muy: How do you see the challenge of translating application-specific logs into something meaningful for the security team, given that these logs are often highly specific to the applications within a software company? It seems like there’s a loop of communication where security teams ask for important logs, but the software teams may not fully understand which logs are crucial. Can you share insights on how to bridge this gap effectively?
Farshad Abasi: The standardization of logging and monitoring practices in software security is indeed a critical topic. As of now, there isn’t a universally standardized approach to this, and it often leads to varying practices among different organizations. This lack of standardization can be a challenge for security teams trying to gain insight from the logs.
In this regard, you hit the nail on the head when you mentioned the absence of a standard way of reporting logs. While organizations like OWASP provide valuable recommendations, they don’t prescribe a specific format for these logs. The OWASP Application Security Verification Standard (ASVS) offers controls, including logging success and failure of authentication, access control, input validation, and de-serialization. However, it doesn’t specify the format in which these logs should be stored.
In practice, most developers tend to log data into text files, syslog, or similar formats. The real challenge arises when it comes to ingesting and normalizing these logs, especially for teams using traditional Security Information and Event Management (SIEM) solutions. With these systems, you often need to manually map the logs to understand what they represent and how to correlate them effectively.
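That manual mapping step can be pictured as small adapters that translate each application’s log shape into one common schema before ingestion. Both log formats below are invented for illustration; real SIEM parsers do the same job at larger scale:

```python
# Normalize two different application log formats into one event schema
# before sending them to a SIEM, so the same attack looks the same
# regardless of which app logged it.
import json
import re

def from_text_style(line):
    # e.g. "auth-fail user=bob ip=198.51.100.4" (hypothetical format)
    match = re.match(r"auth-fail user=(\S+) ip=(\S+)", line)
    if match:
        return {"category": "authn", "outcome": "failure",
                "user": match.group(1), "source_ip": match.group(2)}
    return None

def from_json_style(line):
    # e.g. '{"evt": "login_failed", ...}' (hypothetical format)
    record = json.loads(line)
    if record.get("evt") == "login_failed":
        return {"category": "authn", "outcome": "failure",
                "user": record["username"], "source_ip": record["client"]}
    return None

a = from_text_style("auth-fail user=bob ip=198.51.100.4")
b = from_json_style('{"evt": "login_failed", "username": "bob", "client": "198.51.100.4"}')
assert a == b   # same event, one schema, ready to correlate
```

Once events share a schema, correlating a failed login in one app with access-control failures in another becomes a simple query instead of a parsing project.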
In contrast, some organizations, particularly those with substantial resources, opt for solutions like Splunk, which can automate much of this process by intelligently parsing and relating unstructured log data. However, it’s worth noting that such sophisticated solutions come at a significant cost.
For those seeking a practical starting point, I highly recommend exploring the OWASP ASVS, which is the gold standard in this space. It provides a comprehensive set of 279 security requirements, though not all of them may be relevant to every application. The ASVS categorizes these requirements into three levels (1, 2, and 3) to help organizations tailor their security controls to their specific needs.
To conclude, implementing standardized logging and monitoring practices in line with recommendations like ASVS can significantly enhance an organization’s software security. It’s crucial to identify the gaps in your existing practices and work toward aligning them with these standards. As a next step, organizations should focus on logging and monitoring and make sure they are doing it appropriately based on OWASP ASVS guidelines. This approach can go a long way toward strengthening software security practices.
Nicholas Muy: Given the importance of white box testing and the extensive efforts attackers invest in understanding their targets, what are your thoughts on how organizations can strike the right balance between black box and white box testing in their security strategies, especially in the context of today’s sophisticated threats?
Farshad Abasi: When it comes to the difference between black box and white box testing, it’s crucial to understand that attackers and professional pentesters operate in distinct environments. Attackers, who typically engage in black box testing, have the luxury of time on their side. In cases like the Marriott and Starwood breach, they spent two years in reconnaissance, attempting to identify vulnerabilities. They operate with the information they can obtain without access to source code, design details, or architectural information.
Now, the important distinction arises when organizations decide to simulate these attacks by hiring pentesters. Many clients insist on replicating the black box approach, arguing that if attackers do it this way, pentesters should too. However, there’s a significant difference: attackers don’t have to worry about time constraints as organizations hiring pentesters do. When organizations opt for black box testing, they effectively need to be prepared to pay for an extended period of testing, possibly years, to match what attackers achieve in their own time.
The alternative approach is white-box testing. When organizations provide source code, design information, and architectural details, the pentesters can work with these assets to evaluate the system. White box testing allows them to be far more efficient. In just a few weeks, they can identify vulnerabilities that an attacker might take years to discover in a black-box scenario.
This is a fundamental distinction. While attackers have unlimited time at their disposal, organizations hiring pentesters need to operate within a fixed timeframe and budget. Therefore, the most cost-effective and productive way to conduct these assessments is by opting for white box testing.
In fact, the Application Security Verification Standard (ASVS) by OWASP highlights this point. While it does provide a black box testing option, it also acknowledges that black box testing becomes less effective within short timeframes. Given that organizations are testing their custom code and applications, it is more pragmatic to open up and provide access to critical information for the testing team. By doing this, the organization ensures that the pentest is both efficient and comprehensive.
The challenge here is that many vendors and clients in the industry may not fully grasp this distinction. Some pentesters may take the IP address from a client, conduct black box testing, and then deliver a report. This situation underscores a significant issue within the industry: a lack of understanding regarding the value and nuances of black box versus white box testing.
Nicholas Muy: How can organizations in the Pacific Northwest region get involved with OWASP and take advantage of the resources and events you mentioned, especially if they are looking to enhance their application security practices?
Farshad Abasi: If you’re in the Pacific Northwest, you might be interested in the annual OWASP Application Security Pacific Northwest conference. It’s held every June, with locations changing yearly. In addition to the conference, there are monthly meetups in various cities with valuable discussions and presentations on application security.
For those looking to start with threat modeling, OWASP provides a great guide and a host of useful, free tools. The OWASP Security Knowledge Framework is a project that helps developers learn application security through modules demonstrating code examples and guides. You’ll also find many open-source scanners that can be integrated into your CI/CD pipeline. For Java users, FindSecBugs is a solid choice. Semgrep, both the community and commercial editions, offers robust SAST capabilities. While SonarQube has a free version, keep in mind that its security checks are in the paid developer version.
Open-source tools are an excellent starting point, even though they may generate some false positives. Consider using correlation, aggregation, and orchestration platforms like the Eureka platform to make the most of open-source products. These platforms allow you to combine multiple scanners and efficiently correlate results to identify real issues.
Starting with Static Application Security Testing (SAST) and Software Composition Analysis (SCA) in your pipeline is usually recommended. Once you have these foundational pieces in place, you can expand to other tools like Dynamic Application Security Testing (DAST). Additionally, Interactive Application Security Testing (IAST) tools, such as Contrast, provide an inside view of your applications during QA testing and can significantly reduce false positives.
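At its core, the SCA step boils down to comparing pinned dependency versions against known-vulnerable versions. This toy sketch uses made-up advisory data; real tools query vulnerability databases and handle version ranges properly:

```python
# Minimal SCA-style check: flag pinned dependencies whose versions appear
# in a known-vulnerable set. Advisory data here is hypothetical.
vulnerable = {
    "example-lib": {"1.0.0", "1.0.1"},   # versions with a (made-up) known issue
}

def check_dependencies(pins):
    """pins: lockfile-style lines like 'example-lib==1.0.1'."""
    findings = []
    for line in pins:
        name, _, version = line.partition("==")
        if version in vulnerable.get(name, set()):
            findings.append((name, version))
    return findings

print(check_dependencies(["example-lib==1.0.1", "safe-lib==2.3.0"]))
```

Running a check like this on every commit is cheap, which is why SCA alongside SAST is the usual first rung of the pipeline.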
In summary, you can begin by incorporating SAST and SCA in your pipeline and then expand to other security testing tools like DAST and IAST, depending on your needs and resources.
Nicholas Muy: Well, it’s clear that engaging with the cybersecurity community and tapping into valuable resources is essential for both personal and professional growth in the field of security. Farshad has shared an extensive list of resources and emphasized the importance of networking and learning from peers. So, listeners, whether you’re looking to enhance your knowledge or strengthen your organization’s security practices, remember that the cybersecurity community is a valuable source of support and expertise.