Dr. Neil Daswani discusses the root causes of today’s breaches and how the BSIMM can help companies achieve the right security habits.
Dr. Neil Daswani, codirector of the Stanford Advanced Security Certification Program, is coauthor with Moudy Elbayadi of “Big Breaches: Cybersecurity Lessons for Everyone,” released last month by Apress.
He is also president of Daswani Enterprises, his security consulting and training firm. He has spent more than 20 years in security, holding research, development, teaching, and executive management roles at Symantec, LifeLock, Twitter, Dasient, Google, Stanford University, NTT DoCoMo USA Labs, Yodlee, and Telcordia Technologies (formerly Bellcore).
Daswani will speak at the Synopsys FLIGHT Europe virtual conference on April 20, and at the BSIMM Europe virtual community conference on April 22. He took time from a rather torrid book tour schedule to speak to Synopsys about his book and the value of the Building Security in Maturity Model (BSIMM) in helping organizations avoid breaches both big and small.
The main reason is that, in our perception, most folks don’t realize how significant some of these big breaches have been. And we had trouble finding another source that explained them in something close to plain English.
We came into it from a slightly different angle. Initially the main focus was going to be on the big breaches, plus a little bit of advice—maybe two-thirds breaches, one-third the road back to recovery.
Moudy was also interested in giving directors advice on what they should be doing to help address security posture risks at that level. So we decided to expand the second half of the book to focus on the path to recovery with the right organizational habits. Both of us were fans of Stephen Covey’s work on the habits of highly effective people, so we thought we would start with highly effective habits for security and then give advice to boards of directors, CEOs, and technology and security professionals, as well as cover the technological countermeasures one can employ.
Yes, this book is indeed meant to help both non-techie and techie executives, and to help them communicate better with each other. Chapter 10 provides advice to boards of directors, and chapter 11 provides advice to technology and security professionals and executives. We encourage both of those audiences to read both chapters, so they know what to expect from each other.
If we look at what happened after the Enron scandal and the Sarbanes-Oxley regulations that followed, more emphasis was placed at the board level, and on audit committees, on understanding the financial integrity of systems.
Overall, that’s good. Unfortunately, there has probably been too much focus on compliance and not enough on core security posture. Companies have been doing a lot of these audits, whether it’s ISO 27000, NIST, or PCI (there’s a whole acronym soup of compliance standards one can apply to attest to security), but most companies that have been breached were compliant at the time of the breach. While compliance can help a lot, it is not, by itself, preventing breaches.
So one of the things we do in the book is look at these compliance standards. They are good because they encourage good security hygiene, but with regard to preventing breaches, what we found is that there are really three managerial and six technical root causes. We encourage folks to focus on those more than on compliance. Compliance is a minimum bar, but our thesis is that if you focus on the root causes, you’ll prevent breaches more effectively.
You are referring to the six technical root causes, but first let me talk a bit about the three managerial root causes of breaches. They are failures, at the board level, to prioritize security, to invest in it, and to execute successfully on security initiatives.
But there also has to be good execution, because even companies with the appropriate prioritization and investment in security can get tripped up by the six technical causes.
I’ve interviewed many CISOs and security professionals and asked what they think the root causes are, and everybody is able to name two or three. One of the contributions of the book is that we studied not only all the mega breaches but also the 9,000-plus reported breaches to date. We came up with a comprehensive list of the six technical root causes: phishing, malware, software vulnerabilities, unencrypted data, third-party compromise or abuse, and inadvertent employee mistakes. They are behind the overwhelming majority of breaches.
If you look at 99.9% of breaches and why they happened, you will end up with these root causes. And if you put countermeasures in place for the six root causes, then even if a breach does occur, chances are that the so-called blast radius (how many records are stolen) will be significantly curtailed. It will definitely take the organization out of the low-hanging-fruit category.
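To make one of those root causes, unencrypted data, concrete, here is a minimal sketch of field-level encryption; it is not taken from the book, and it assumes Python with the `cryptography` package. The record layout and key handling are illustrative only: a real deployment would keep the key in a key-management service, never alongside the data.

```python
# Minimal sketch: encrypt a sensitive field before it reaches the datastore,
# so a stolen database exposes ciphertext and the breach's blast radius shrinks.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustrative; in production, fetch from a KMS
fernet = Fernet(key)

def store_record(ssn: str) -> bytes:
    """Encrypt a sensitive field before it ever touches storage."""
    return fernet.encrypt(ssn.encode("utf-8"))

def read_record(token: bytes) -> str:
    """Decrypt only at the point of use, by code that holds the key."""
    return fernet.decrypt(token).decode("utf-8")

ciphertext = store_record("123-45-6789")
assert read_record(ciphertext) == "123-45-6789"
```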
One of the great things about the BSIMM is that it holistically assesses an organization’s software security practices at multiple levels. It looks at some managerial practices, but it also focuses very deeply on what practices an organization has in place to mitigate both first-party and third-party software vulnerabilities. It also includes assessments relevant to other root causes.
I’d say the root causes that can be partially addressed by the BSIMM include susceptibility to malware, third-party risk, and unencrypted data. What the BSIMM basically does is identify the 80-plus software security practices in place at many organizations, and how many of those practices a particular organization employs to protect itself from software-related security issues.
It’s become a cliché that there never seems to be enough in the budget to establish good security in advance, but there is always enough after a breach, when it’s much more expensive to fix it and recover from it. What can be done that isn’t already being done to convince organizations to invest in security up front instead of paying a higher price after a breach?
There’s also a partial tie-in to the BSIMM on that. Security departments often try to put more countermeasures in place before a breach. They may or may not be successful, and part of that depends on the financial resources available. After a breach, the incident typically gets escalated to the board, and it then becomes amazingly obvious that the security program can benefit from additional funding. The second the board knows, the CEO has no issue with putting more money into security.
The real challenge is how to be more proactive before a breach or a security incident. One of the great things the BSIMM does is assess the software security practices of a particular organization. You can then compare that organization to others in the same industry, or in other industries, and see how it is doing relative to its peers. That comparison can be used as an argument to invest more in security. The average organization these days can get breached, perhaps pretty easily. So when you’re looking at something like a BSIMM score and comparing yourself to other organizations (depending on the industry, of course; high tech and banking may invest more in security than other sectors), one thing to keep in mind is that if the average organization is getting breached, your goal should be to be well above average.
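As a toy illustration of that peer-comparison argument, the sketch below computes where a hypothetical organization falls against invented peer scores; real BSIMM comparisons draw on the model’s published industry data rather than a hand-typed list like this.

```python
# Toy peer comparison: all numbers are invented for illustration.
peer_scores = [38, 41, 45, 52, 55, 60, 61, 67, 70, 72]
our_score = 58

average = sum(peer_scores) / len(peer_scores)
percentile = 100 * sum(s < our_score for s in peer_scores) / len(peer_scores)

print(f"Peer average: {average:.1f}, our score: {our_score} "
      f"({percentile:.0f}th percentile)")
# If the *average* organization is getting breached, landing near the
# average is not good enough; the goal is to be well above it.
```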
If you had asked me that 15 years ago, I would have said maybe we don’t need a government building code, simply because all these breaches hadn’t happened yet. I might have had more hope that the private sector could self-regulate.
But given the number of breaches that have occurred, I think there should be building codes, especially in certain areas that have to do with human safety, like critical infrastructure and medical devices. I think back to when I was growing up. My father, Mike Daswani, was a mechanical engineer working for an elevator company, and he served on the code committee for elevators. They have a code so elevators don’t fall and hurt people: even when the power goes out, there are all kinds of countermeasures and mechanisms that lock the car in place. When I became a software engineer, I thought it was interesting that there were no such codes for software.
Today the IEEE Center for Secure Design has published some building codes for medical devices and for IoT devices, and I think it would be great to see some of the building codes they’ve come up with for those areas get adopted by the government. But I hope that if and when that happens, it’s done in a very mindful way, so we don’t just add more regulation and potential baggage, but focus on things that will actually make a difference.
There is immense pressure in the software industry to develop new products and features and get them to market as fast as possible. The question is, how do you get things to market in a way that is safe? I think back to Facebook. Mark Zuckerberg’s initial mantra was, “Move fast and break things.” And they did move very fast: they captured the market and built the world’s most successful social media platform, and a lot of positives came out of that.
Unfortunately, there were also some negatives, to the point that when there were enough bugs and security issues, Facebook changed its mantra: “Move fast and break things” became “Move fast with stable infra,” meaning infrastructure.
With regard to how to address the complaint that security slows down development, keep in mind the old adage that haste makes waste. If you move really fast, you may be able to get your software product or feature to market sooner, but when there are security vulnerabilities, you are inevitably slowed down by security incidents. You have to triage the incident, understand what data was affected, and figure out where the vulnerabilities are and how to patch them, all time you could have spent working on the next feature.
So there’s got to be a balance between speed to market and security. If you achieve that balance, you can continue moving forward at a pretty darn fast pace while also addressing security risks, so you don’t get slowed down by incidents, compromises, and data breaches.
The first is to be proactive, prepared, and paranoid. Another is to design security and privacy in. Automate as much as you can, and measure your security quantitatively as well as qualitatively. If you look at different parts of the BSIMM framework, they basically assess whether an organization has the software practices in place to design security in from the beginning. A whole bunch of the practices in the BSIMM align very well with the seven habits of highly effective security. Employing the BSIMM can help you achieve the right habits.
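As one illustration of the “automate and measure” habit, here is a minimal sketch of a CI gate that turns scanner output into a quantitative, enforceable metric. It is not from the book or the BSIMM itself; the findings.json report format and the policy thresholds are hypothetical, so adapt them to whatever your scanner actually emits.

```python
# Minimal sketch: fail the build when open security findings exceed policy.
# The report format (a JSON list of {"id", "severity"} entries) is hypothetical.
import json
import sys
from collections import Counter

MAX_ALLOWED = {"critical": 0, "high": 3}  # illustrative policy thresholds

def main(report_path: str = "findings.json") -> int:
    with open(report_path) as fh:
        findings = json.load(fh)

    counts = Counter(f["severity"].lower() for f in findings)
    print("Open findings by severity:", dict(counts))

    # Security is measured (the counts) and enforced (the exit code)
    # on every commit, rather than audited once a year.
    for severity, limit in MAX_ALLOWED.items():
        if counts.get(severity, 0) > limit:
            print(f"FAIL: {counts[severity]} {severity} findings exceed limit {limit}")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```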