Misuse of security tools can lead to defect overload for development teams. Knowing when and how to use these tools will yield more effective DevSecOps.
It is a long-time mantra of security experts: There is no single, magical software testing tool or technique that will find every defect or flaw that developers should fix when they are building an application or any of the many things powered by software.
It takes multiple tools, deployed at different times throughout the software development life cycle (SDLC).
But if those tools aren’t used correctly, at the right time, and in the right way, they can flag an overwhelming number of potential vulnerabilities, many of them insignificant or irrelevant to a particular project. And that can frustrate development teams to the point that they could start ignoring the warnings or even disabling the tools, undermining the security those tools are meant to enhance.
That, according to Meera Rao, is one of the biggest challenges of embedding security into DevOps and yielding effective DevSecOps.
Rao, senior director for product management (DevOps solutions) at Synopsys, notes the reality that “at every stage in the pipeline or even in your SDLC, you have many security activities to perform, and each and every one of them gives you vulnerabilities. That can lead to defect overload.”
By now, that list of DevSecOps testing tools and other security tasks is fairly standard. At the start, security teams should conduct threat modeling and risk analysis based on what an application is expected to do and what kind of input, if any, it will handle. Obviously, a page on a website that accepts user input including personal and financial data needs more rigorous security than one that simply provides information, such as the locations of company offices.
During the coding and building phases, automated tools like static, dynamic, and interactive analysis can flag bugs and other defects that could be exploited. Fuzz testing can check how the software responds to random, malformed input. Software composition analysis (SCA) can help find open source components that may have security defects and/or licensing conflicts.
And at the end, penetration testing is designed to attack an application the way hackers might, to find any remaining critical weaknesses before that application goes into production.
All those tools and techniques are crucial to building security into an application during its development. But if those tools aren’t configured to flag only defects that are relevant and significant to a specific project, they can end up creating friction that slows development, which is the last thing a development team wants in a DevOps world where speed is a top priority.
The solution? Vulnerability management. But the frequently conflicting priorities of speed and security present multiple challenges to doing that effectively.
One is that while it’s possible to configure a tool so it flags only defects considered critical, “if you have hundreds of projects, there’s no way you can configure all the projects,” Rao said. That’s because each project will have different exposures and therefore have different things that are considered critical. “It takes skilled resources—it takes time to go to each and every project that you have onboarded and configure each one to only give critical vulnerabilities,” she said.
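The per-project configuration Rao describes can be sketched in a few lines. The sketch below is illustrative only: the project names, thresholds, and finding format are assumptions, not any particular tool’s API, and in practice this filtering would live in the scanner’s own policy configuration rather than in post-processing code.

```python
# Hypothetical sketch: apply a per-project severity threshold to raw
# scanner output so only findings at or above that threshold surface.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

# Illustrative per-project risk profiles (invented for this example).
PROJECT_THRESHOLDS = {
    "payments-portal": "high",      # handles personal and financial data
    "office-locator": "critical",   # purely informational page
}

def filter_findings(project, findings, default_threshold="critical"):
    """Keep only findings at or above the project's severity threshold."""
    threshold = SEVERITY_RANK[PROJECT_THRESHOLDS.get(project, default_threshold)]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]

findings = [
    {"rule": "xss", "severity": "critical"},
    {"rule": "weak-cipher", "severity": "medium"},
    {"rule": "sql-injection", "severity": "high"},
]

print(len(filter_findings("payments-portal", findings)))  # high and above: 2
print(len(filter_findings("office-locator", findings)))   # critical only: 1
```

The point of the sketch is Rao’s: the thresholds differ per project, so someone skilled has to decide and maintain them for every project onboarded.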
Another challenge is that, as noted earlier, there is no one tool that can find every kind of vulnerability. As Rao put it, “Static application security testing (SAST) finds certain things, dynamic application security testing (DAST) finds different things, and SCA finds different things.” And it’s not just SAST, DAST, and SCA tools; there are others used at different stages in the SDLC as well.
Yet another challenge is that “each and every tool or activity in the pipeline requires different skill sets, different artifacts, and a different amount of time,” she said. “If I have to do threat modeling, that’s a completely manual effort and you need skilled resources to do it. Same thing with risk analysis. Whereas with static analysis you have a tool, you configure it, and then you can automate. But then again, static analysis isn’t going to solve all the problems.”
“To achieve 100% coverage of vulnerabilities in your source code, your open source, and all the infrastructure requires too many activities, skill sets, amounts of time and discovery methods.”
So for development and security teams that need to work together in a DevSecOps environment, the obvious question is: How can we manage what sounds like the digital version of herding cats?
It takes effort and organization.
To start, it’s crucial to know the “risk profile” of an application. As noted earlier, an application that accepts and processes user input needs more rigorous security than one that simply provides information.
Then it’s important to understand what finds what—what kinds of vulnerabilities different tools find. Unfortunately, many development teams don’t know what individual tools do, so they simply run all the tools all the time, according to Rao.
Understanding what the tools do, and that all of them might not be necessary on a specific project, can help developers start to chip away at what she calls “defect overload.” For example, if cross-site scripting (XSS) and SQL injection are considered critical vulnerabilities in a given application, the team can configure its SAST tool to look for those and ignore the rest.
If SAST does find one or both, “I take those as a payload to my DAST or interactive application security testing (IAST) tool,” Rao said. “I configure those tools so they don’t flag all the thousands of other issues they could find but just focus on these two. And if DAST, and especially IAST, finds them with very high confidence, I should be able to create a ticket in whatever defect-tracking system the organization uses, maybe Jira.”
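The flow Rao describes can be sketched as a simple pipeline step: narrow the runtime scan to the vulnerability classes SAST already flagged, then open a ticket only when the runtime tool confirms a finding with high confidence. Everything here, including the CWE identifiers, confidence scores, and ticket shape, is a stand-in for whatever the actual tools emit; this is not any vendor’s API.

```python
# Hedged sketch: use SAST output to scope DAST/IAST, then ticket only
# high-confidence, in-scope confirmations.
SAST_FINDINGS = [
    {"cwe": "CWE-79", "name": "cross-site scripting"},
    {"cwe": "CWE-89", "name": "SQL injection"},
]

def dast_scope(sast_findings):
    """Restrict the runtime scan to the classes SAST already flagged."""
    return {f["cwe"] for f in sast_findings}

def tickets_for(iast_results, scope, min_confidence=0.9):
    """Build ticket payloads only for in-scope, high-confidence findings."""
    return [
        {"summary": f"{r['name']} at {r['endpoint']}", "cwe": r["cwe"]}
        for r in iast_results
        if r["cwe"] in scope and r["confidence"] >= min_confidence
    ]

iast_results = [
    {"cwe": "CWE-79", "name": "XSS", "endpoint": "/search", "confidence": 0.97},
    {"cwe": "CWE-89", "name": "SQLi", "endpoint": "/login", "confidence": 0.40},
    {"cwe": "CWE-22", "name": "path traversal", "endpoint": "/files", "confidence": 0.95},
]

# Only the confirmed XSS becomes a ticket; the low-confidence SQLi and
# the out-of-scope path traversal are filtered out.
print(tickets_for(iast_results, dast_scope(SAST_FINDINGS)))
```

In a real pipeline the ticket payload would go to the defect tracker’s API; the filtering logic is the part that keeps developers from drowning in duplicates.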
The process would be entirely different if threat modeling and risk analysis were to find a design flaw, a case where an application is built the wrong way rather than coded insecurely.
“If I was supposed to use hashing for passwords in an API but used encryption, none of the testing techniques will be able to find that,” she said. “So if I check in the code and say that I fixed the API, the pipeline should be smart enough to say that I need to do a manual code review, or I’m going to let someone know that this critical API was changed and someone needs to take a look at it.”
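A minimal sketch of that “smart pipeline” idea: keep a list of files flagged as critical during threat modeling, and when a change set touches one of them, gate the build on manual code review instead of trusting the automated scanners. The file paths and change-set format below are invented for illustration.

```python
# Hypothetical sketch: files flagged during threat modeling that no
# automated scanner can meaningfully vet (e.g., password handling).
CRITICAL_PATHS = {
    "src/auth/password_api.py",
    "src/payments/charge.py",
}

def review_gate(changed_files):
    """Return the critical files in a change set that need human review."""
    return sorted(set(changed_files) & CRITICAL_PATHS)

changed = ["src/auth/password_api.py", "docs/README.md"]
flagged = review_gate(changed)
if flagged:
    print(f"Manual code review required for: {', '.join(flagged)}")
```

In practice the pipeline would block the merge or notify a reviewer; the essential move is routing design-level risk to humans rather than tools.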
With the right combination of DevSecOps tools and other defect-discovery methods applied to the risk profile of an application, “you can get the ROI for automating certain activities and doing other activities that are out of band,” Rao said.
She said an example of how not to do it was a development team that was building a back-end messaging API that used no database and didn’t have a front end, yet the team was still testing on the Open Web Application Security Project (OWASP) Top 10—a well-known and valuable list of crucial vulnerabilities, but not relevant to this specific project. “What’s the point of that?” she said.
“It’s important that whatever you do in the planning phase feeds into all the other testing activities that you do. If there’s no database on the back end, there’s no point in looking for an SQL injection.”
Finally, it’s imperative to avoid duplication—it will overwhelm developers if multiple discovery methods like SAST, DAST, and penetration testing are all reporting the same defects and creating defect tickets for all of them.
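One common way to avoid that duplication is to fingerprint each finding by vulnerability class and location, then merge reports from all the tools into a single defect. The sketch below assumes every tool’s output can be normalized to a shared CWE-plus-location shape, which in practice is the hard part, especially for runtime tools that don’t see source files.

```python
# Hedged sketch: merge findings reported by multiple tools into one
# defect per (class, location) fingerprint instead of one ticket each.
def dedupe(findings_by_tool):
    merged = {}
    for tool, findings in findings_by_tool.items():
        for f in findings:
            key = (f["cwe"], f["location"])
            entry = merged.setdefault(key, {**f, "reported_by": []})
            entry["reported_by"].append(tool)
    return list(merged.values())

# Illustrative reports; locations for runtime tools assume some mapping
# back to source, which real pipelines have to provide.
reports = {
    "sast": [{"cwe": "CWE-79", "location": "search.py:42"}],
    "dast": [{"cwe": "CWE-79", "location": "search.py:42"}],
    "pentest": [{"cwe": "CWE-89", "location": "login.py:10"}],
}

for d in dedupe(reports):
    print(d["cwe"], d["location"], d["reported_by"])
```

Two defects come out instead of three tickets, and each one records which tools agreed, which is itself useful triage signal.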
One team was so frustrated that its members “went into the static analysis dashboard and marked everything as false positives, because it was finding way too many things for them to deal with,” Rao said.
That “workaround” was exposed when, toward the end of the SDLC, a pen testing team found several critical XSS vulnerabilities, exactly the kind of defect static analysis will find. The organization’s software security group asked whether the team was running static analysis at all, since the tool is very good at finding XSS vulnerabilities.
“We were shocked,” Rao said. “But then when we looked at it, the tool had found it but the team had marked it as a false positive.”
That was solved when the security team added a control that would flag any critical vulnerability that was marked as a false positive.
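That control can be sketched as a periodic sweep over the triage data that surfaces any critical finding dismissed as a false positive, so a security engineer can double-check the call. The field names and statuses below are assumptions for the sketch, not any dashboard’s real schema.

```python
# Hypothetical sketch of the audit control: flag critical findings that
# were triaged away as false positives for a second look.
def suspicious_dismissals(findings):
    """Critical findings marked false positive deserve human review."""
    return [
        f for f in findings
        if f["severity"] == "critical" and f["triage"] == "false_positive"
    ]

triaged = [
    {"id": 1, "severity": "critical", "triage": "false_positive"},
    {"id": 2, "severity": "medium", "triage": "false_positive"},
    {"id": 3, "severity": "critical", "triage": "open"},
]

print([f["id"] for f in suspicious_dismissals(triaged)])  # [1]
```

The check is deliberately narrow: dismissing a medium-severity finding stays a developer’s call, but dismissing a critical one triggers review.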
But that situation illustrates the defect overload point very well—if security tools aren’t configured and managed properly, they will overload development teams to the point where they will ignore, or even block, warnings. That doesn’t improve security. It undermines it.
“The bottom line is to help companies address defects that are critical without overwhelming them with defects that aren’t critical to them,” Rao said. “That way it doesn’t slow development so much that everyone throws their hands up and says forget it.”