What is the cost of poor software quality in the U.S.?
2021-01-06 | Author: www.synopsys.com

The total cost of poor software quality in the U.S. is estimated at $2.08 trillion. Learn what contributes to the cost and how security can help minimize errors.


Do it right the first time.

That long-standing cliché is based on the premise that it almost always costs more to fix something built poorly than it does to build it correctly.

And if you doubt its credibility, or that it applies to software, check out the latest report from the Consortium for Information & Software Quality (CISQ), in partnership with Synopsys, “The Cost of Poor Software Quality in the U.S.”

The short answer: It costs a lot. A mind-numbing amount: an estimated $2.08 trillion in the United States in 2020 alone. (For comparison, only a dozen countries have an annual GDP of $2 trillion or more.) And that doesn’t even include “technical debt”: accumulated software vulnerabilities in applications, networks, and systems that have never been addressed. The CISQ report estimates that debt at another $1.31 trillion, and it has been increasing at a rate of 14% since 2018.

There is some marginally good news: The total cost of poor software quality (CPSQ), though staggering, is down slightly from the amount cited in the organization’s 2018 report, $2.8 trillion. (It has since been revised to $2.1 trillion for that year.)

What contributes to the cost of poor software quality?

The report includes a lot of detail, of course, but among the major takeaways:

  • The largest share by far (75%, or an estimated $1.56 trillion) of the CPSQ comes from operational software failures, driven largely by the failure to patch known vulnerabilities. That’s up 22% from 2018.
  • The second-largest piece of the overall cost, $520 billion, is legacy system problems, although that has declined from $635 billion two years ago.
  • In third place are unsuccessful development projects, which cost an estimated $260 billion. And that number indicates an ominous trend: it’s up 46% since 2018. The 2020 report observes that while there are multiple factors causing project failures, “one consistent theme has been the lack of attention to quality.”

[Figure: Top contributors to the cost of poor software quality]

These data points all support the primary finding of the report, which is that maintaining the quality (which includes security) of software in a DevOps environment that moves lightning-fast is a balancing act.

“Generally, however, we are not very good at balancing. Software quality lags behind other objectives in most organizations,” the report notes.

The result is predictable. “We know that poor-quality software takes longer, costs more to build and to maintain than good-quality software, and it can degrade operational performance, increase user error rates, and reduce revenues by decreasing the ability of employees to handle customer transactions or attract additional clients,” the report continues.

This is a problem well worth addressing, since poor software quality, as obviously expensive as it is now, will become even more expensive for several reasons.

Why is poor software quality becoming more expensive to manage?

First, there’s more bad software out there—a lot more. As has been noted over the past several years, the Internet of Things (IoT) is becoming the Internet of Everything (IoE). And when “everything is a computer,” software is in everything.

As the report puts it, “products and services that were traditionally delivered through other means are now being run on software and delivered as online services with great financial success.” As an example, it points to the fall of Borders and the rise of Amazon.

It also predicts that “while emerging technologies currently account for only 17% of overall global revenue, they are expected to drive nearly half of the growth in new revenue in the coming years.” That means software is not only a crucial element in the success or failure of products and businesses, but of the overall economy itself.

And sucking more than $2 trillion out of any economy is a huge drag on growth and success.

Second, it’s not just that fixing poor-quality software is expensive. Poor-quality software also makes online systems, networks, and products easier to attack, which is another colossal expense. And the so-called “attack surface” is expanding rapidly. The report cites Susie Wee, vice president at Cisco, who said in May 2017 that more than 111 billion lines of new software code were being produced each year.

According to the National Institute of Standards and Technology, typical software contains an average of 25 errors per 1,000 lines of code. You don’t have to do the math to know that introduces a massive number of vulnerabilities, since most vulnerabilities are the result of simple coding errors.
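But doing the math anyway shows the scale of the problem. Here is a quick back-of-the-envelope calculation; the inputs are the two figures cited above, and the arithmetic itself is purely illustrative:

```python
# Rough estimate combining the two figures cited in the article:
# ~111 billion new lines of code per year (Cisco) and
# ~25 errors per 1,000 lines of code (NIST).

NEW_LINES_PER_YEAR = 111e9   # new lines of software code written annually
ERRORS_PER_KLOC = 25         # average errors per 1,000 lines of code

new_errors_per_year = NEW_LINES_PER_YEAR / 1_000 * ERRORS_PER_KLOC
print(f"Estimated new coding errors per year: {new_errors_per_year:,.0f}")
# -> Estimated new coding errors per year: 2,775,000,000
```

Even if only a small fraction of those errors turn out to be exploitable, that is an enormous annual influx of potential vulnerabilities.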

The attack surface is also expanding because of the evolution of technology. Those changes include:

  • Digital transformation: A far greater percentage of business operations, from sales to delivery, is integrated and controlled by software, thus spreading the effects of a malfunction across the value chain.
  • Systems of systems: These expand complexity exponentially and conceal the triggers for huge failures in a thicket of cross-system interactions.
  • Increased competition: That pressure, especially online, has made speed-to-business more of a priority than operational risk and corrective maintenance costs—a huge gamble for systems not designed to expect and manage failures.

So it should be no surprise that the cost of cyber crime is exploding. The report cites estimates that by 2021 it will top $6 trillion. “Ransomware damages alone are now predicted to cost the world $20 billion in 2021,” according to the report, which adds that “what makes most of these attacks successful by outside agents is vulnerable software systems.”

How will “building security in” help?


The report has a number of recommendations on how to balance the need for speed with the security imperative. But overall, “the key strategy for reducing the cost of poor software quality is to find and fix problems and deficiencies as close to the source as possible, or better yet, prevent them from happening in the first place.”

The good news is that the ways to find, fix, and prevent those problems are well established. It takes “building security in” to software while it’s being developed. And an annual report by Synopsys called the “Building Security In Maturity Model” (BSIMM) documents how 130 organizations, primarily in nine verticals, are doing so.

In brief, implementing security measures at every point of the software development life cycle (SDLC) is vital. For the past decade, the mantra for security testing has been to “shift left”—that is, start that testing earlier in the SDLC. But that was never intended to mean shift only left. Testing needs to be done throughout development, not just at the beginning or end.

Security testing measures include:

  • At the start, architecture risk analysis and threat modeling can help eliminate design flaws before a team starts to build an application or any other software product.
  • While software is being written and built, static, dynamic, and interactive application security testing can find bugs or other defects when code is at rest, running, and interacting with external input.
  • Software composition analysis can help developers find and fix known vulnerabilities and potential licensing conflicts in open source software components.
  • Fuzz testing can reveal how the software responds when it is hit with malformed input (see the sketch after this list).
  • Penetration testing, or “red teaming,” can mimic hackers to find weaknesses that remain before software products are deployed.
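
To make one of those techniques concrete, here is a minimal sketch of the fuzz-testing idea: feed a program randomly malformed input and treat every unhandled exception as a potential robustness bug. The `parse_record` function is a hypothetical stand-in for real code under test, and production fuzzers such as AFL or libFuzzer are coverage-guided rather than purely random.

```python
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical parser standing in for the real code under test."""
    header, _, body = data.partition(b":")
    return {"type": header.decode("ascii"), "length": len(body)}

# Naive fuzz loop: hammer the parser with random input and record
# every unhandled exception as a potential robustness bug.
random.seed(0)
crashes = []
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
    try:
        parse_record(blob)
    except Exception as exc:
        crashes.append((blob, exc))

print(f"{len(crashes)} of 10,000 inputs triggered unhandled exceptions")
```

Even this naive loop surfaces crashes (here, non-ASCII bytes in the header), which is exactly the kind of defect that malformed real-world input would trigger in production.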

To those measures, the report adds:

  • Pay more attention to secure coding practices. Security scores displayed wider variation than those of any other health factor.
  • Analyze source code regularly prior to release to detect violations of quality rules that put operations or costs at risk. System-level violations are the most critical since they cost far more to fix and may take several release cycles to eliminate.
  • Treat structural quality improvement as an iterative process pursued over numerous releases.
  • Eliminate known vulnerabilities (CVEs) and the most egregious weaknesses (CWEs), such as those on the CWE Top 25 list. If all new software were created without those known vulnerabilities and exploitable weaknesses, the CPSQ would plummet. (A sketch of one such fix follows this list.)
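
The report doesn’t prescribe fixes for individual weaknesses, but one concrete example shows what eliminating a top CWE looks like in practice. SQL injection (CWE-89) has been a fixture of the CWE Top 25 for years; this sketch, built around a hypothetical `users` table, contrasts a vulnerable query with its parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # CWE-89 (SQL injection): user input concatenated into the query text.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Fixed: a parameterized query keeps user input out of the SQL structure.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row in the table
print(find_user_safe(payload))    # [] -- the payload matches nothing
```

Static analysis tools flag the concatenation pattern in `find_user_unsafe` automatically, which is why catching it during development is so much cheaper than finding it after release.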

Why is prevention the best investment?

Yes, these measures cost time and money to implement. But the amount of time is decreasing thanks to automation and intelligent orchestration of tools, which help developers by doing the right test at the right time and in the right stage of the SDLC. That means developers can fix bugs and other defects in real time and also avoid getting swamped with false positives or irrelevant defect notices.

And the benefits of those investments are huge. “The biggest bucket of CPSQ that we have identified is in operational failures ($1.56 trillion in the U.S. in 2020),” the report notes. It also points out that it’s about 10 times more expensive to fix the defects that cause those failures after a product has been released than it is to find and fix them during development.
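
A hedged illustration of that ratio (the per-defect dollar figure below is an assumption made for the sake of the arithmetic, not a number from the report):

```python
# Illustrative only: an assumed unit cost, combined with the report's
# rough 10x multiplier for fixing defects after release.
DEFECTS = 1_000
COST_TO_FIX_IN_DEV = 500       # assumed cost per defect during development ($)
POST_RELEASE_MULTIPLIER = 10   # the report's approximate after-release ratio

in_dev = DEFECTS * COST_TO_FIX_IN_DEV
post_release = in_dev * POST_RELEASE_MULTIPLIER
print(f"Fixed during development: ${in_dev:,}")        # $500,000
print(f"Fixed after release:      ${post_release:,}")  # $5,000,000
```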

“The biggest bang for CPSQ investment money would be in preventing most of those from occurring as early as possible (if at all), when they are relatively cheap to fix,” the report asserts. Most important, better software security will improve other economic target areas including cost of ownership, profitability, human performance levels, innovation, and more-effective mission-critical IT systems.



Source: https://www.synopsys.com/blogs/software-security/poor-software-quality-costs-us/