Katie Moussouris may not consider herself a “mother” of modern bug bounty programs, but she says “auntie” will do.
Moussouris is the founder and CEO of Luta Security, a cybersecurity company specializing in vulnerability management. But she may be most famous for her work helping major corporations and government entities, including Microsoft and the Pentagon, to build bug bounty programs. The idea was simple: white hat hackers find vulnerabilities, they report them to companies, and those companies pay them — and, importantly, quietly patch the software.
The end goal, in all cases, is to fix the bug before anyone can exploit it. So in 2021, when China announced new regulations requiring private companies to report vulnerabilities to the government before they were patched, Moussouris was concerned. While many government officials and security researchers sounded the alarm over China weaponizing zero-days — as the Atlantic Council examined in a September 2023 report — Moussouris worried about the precedent.
"The biggest problem with this provision is if other countries start imposing the same requirements on security research," she told the Record at the time.
And her concerns have come to fruition: In 2022, the European Union released its proposed EU Cyber Resilience Act (CRA), which, if passed, would force companies selling software in the EU to undergo a similar disclosure process. Her larger point is that the more people who are aware of a vulnerability, the more likely it is to be exploited. In a recent interview with the Click Here podcast, Moussouris explained why she believes these kinds of regulations will only make the internet less safe for everyone.
“We're not necessarily going to get anywhere with just focusing on an adversarial government's use of vulnerability information,” she said. “We really do need to take this as a global issue.”
This conversation has been edited for length and clarity.
CH: This may just be the way I perceive you, but do you consider yourself to be one of the mothers of bug bounty programs?
KM: You know, I think I'm a very involved auntie to the bug bounty programs. [Laughs] I certainly created some of the biggest bug bounties in the world and started multiple governments’ vulnerability disclosure programs. I'm also the co-author and co-editor of the ISO standards on vulnerability disclosure and vulnerability handling processes. In terms of inventing bug bounties, that happened in the ’90s, and I was still hacking in the ’90s.
CH: So you went to Microsoft and explained the importance of this, then on to Hack the Pentagon, which was incredibly effective. You got these slightly stodgy institutions to accept this way of looking at vulnerability disclosures. What were you hoping to accomplish?
KM: Well, you know, at the time, nobody really wanted to pay for vulnerability information, especially if they were getting it for free. And Microsoft was no different than 99 percent of the companies out there. They actually were much more advanced than most companies at the time in that they had a strong vulnerability disclosure program and a willingness to work proactively with security researchers. What they didn't have was a reason to start paying for what they were getting for free. So in my work getting Microsoft over the ideological hump of starting to pay researchers, I actually had to do a bunch of data analysis on the bugs we were getting. Microsoft was getting more and more vulnerabilities through third parties like the [Zero Day Initiative] and fewer of the vulnerabilities that it wanted to know about directly. They said, you know what, it might be time for us to start paying to get those bugs directly again.
CH: I realize this isn't exactly the same thing, but what you just described sounds a little bit like what's happening in China with this disclosure law.
KM: I would say that it's different. It does have to do with disclosure, but the Chinese disclosure law is interesting, and I predicted it would have the effects we're now seeing in proposed legislation around the world. The vulnerability disclosure law in China boils down to this: you have to let the Chinese government know if your product is vulnerable to something within a few days. And if you don't, there are penalties to be paid. We actually saw some of these penalties enacted on Alibaba, a Chinese company, when some of their researchers discovered the underlying vulnerability that affected Log4j, the open-source library. We saw them get kicked out of a vulnerability-sharing program, or suspended for some period of time, for not reporting it to the Chinese government within that deadline of just a few days. What I predicted at the time was that this was a bad trend of governments trying to insert themselves into the vulnerability disclosure process — where they don't belong — and getting advance information. And we're actually seeing that process unfold right now in Europe. The European Union has proposed the Cyber Resilience Act. There's a vulnerability disclosure provision in it that would require any company selling software in Europe to notify a body within the European Union within 24 hours of confirming active exploitation of a previously unknown vulnerability.
CH: That sounds very similar to the Chinese law.
KM: Yes, I think where we're going with this right now is a dangerous place, where governments are starting to propose requirements for companies to disclose vulnerability information for which there are no patches yet. That not only fundamentally breaks the need-to-know basis of vulnerability disclosure, but it actually increases risk all around. The more parties that know about an unpatched vulnerability, the higher the chances of leakage. I don't know if experts like myself are going to be successful in steering these policymakers away from this idea, but I think that every policymaker and every technologist out there should be extremely concerned about these developments.
CH: It’s beyond leakage, right? Microsoft released a report in 2022 directly blaming this law for an increase in zero-days exploited by Chinese actors. Isn't this a strategy?
KM: Well, yeah, and I think that's exactly why other governments want in on this. There's a thing called the vulnerability equities process. The U.S. government has one, and the U.K. government has its guidelines for it. Essentially, these governments weigh the pros and cons of either keeping a vulnerability a secret so they can use it for offensive purposes, or telling the vendors so they can fix it. In that equities process, they ask themselves: What kind of damage would this do to our own critical infrastructure and private industry if we kept this a secret? And I think the concern here is that other adversarial governments can use these things against, in our case, the U.S. and our allies. So everybody wants in on it. But defenders definitely don't want our governments to go there. It will make our jobs that much harder. And in case anyone hasn't noticed, defenders are not winning the cyber wars right now.
CH: Do you think that this completely changes the bug bounty environment?
KM: I think this goes far beyond that. Bug bounties only serve to uncover a fraction of the potential vulnerabilities that are out there. And quite frankly, the way that bug bounties are paid out in terms of criticality is a little bit backwards in terms of real-life exploitation. We see far fewer critical bugs being exploited in the wild than lower-severity bugs, which are easier to find and easier to exploit.
CH: The Chinese aren't the first to weaponize this. The U.S. has been doing it, maybe in a slightly fairer way than the Chinese appear to be doing it now. And now you're saying Europe is getting in on the act, too?
KM: Well, I think fairness is in the eye of the beholder, right? What the U.S. and our allies will say is that we only use vulnerabilities for our strategic and military and intelligence purposes. Whereas the big criticism of China's use is that they also use it for industrial espionage and IP theft. But from a defender's standpoint, it's all the same. It's all pretty bad. And unfortunately, nation-states are not the biggest worries that we have. We've seen multiple high-profile attacks carried out by teenagers. They're not even necessarily based on vulnerabilities or zero-days. So I guess my broad point here is that we have a lot more to do as cyberdefenders. We definitely do not need additional distractions: new reporting requirements, new deadlines, new agencies to report to. And we don't need an additional source of vulnerability leakage. In trying to one-up each other, these governments' regulators are going to lead us to an undefendable state of the internet.
CH: When you saw the Chinese law and this requirement of reporting vulnerabilities, did you think it was inevitable that it would end up weaponizing these vulnerabilities?
KM: I saw it as inevitable that other governments would want to follow suit. And that was what I warned against at the time. It's that creation of a cascade of vulnerability information that effectively breaks the ISO standards that I co-authored. There's a reason why those ISO standards say vulnerability information should be kept on a need-to-know basis before a patch exists, only in the hands of those who are directly responsible for maintaining the code. This is a material break that will erode the cybersecurity of the entire internet.
CH: Can you explain a bit more how this approach goes against the ISO guidelines you wrote?
KM: If you expose vulnerability information to uninvolved parties, you widen the circle of potential exploitation — not just by the parties you've informed but by anybody else who might have compromised those parties. That will increase the chance of leakage. Intentional and unintentional exploitation is exactly what the ISO standards are designed to prevent.
CH: The Chinese government has, at times, made statements comparing these vulnerabilities — whether it's a zero-day or something that's lower-hanging fruit — to a resource, almost like lumber. How do you change that kind of mindset?
KM: I think every government sees them exactly the same way. It's just a matter of how they choose to apply that knowledge. We're not necessarily going to get anywhere with just focusing on an adversarial government's use of vulnerability information. We really do need to take this as a global issue. It's like climate change. Every government needs to take action to prevent catastrophe. This is exactly the same.
CH: And what would you like to see happen?
KM: Well, for one, I would like to see the brakes put on these copycat regulations, such as the CRA, that are requiring disclosure to governments before those vulnerabilities are fixed. Now, the CRA stipulates that they don't want proof-of-concept code, so they're saying, It's not that dangerous because we're not requiring disclosure of technical details or ready-made exploit code. However, we actually saw this with the WannaCry worm. Once you say that a piece of software or a component is vulnerable, attackers go to town on that component and find ways to exploit it. So even the provision designed to keep that information sharing safer is ill-informed.
What I would like to see is governments really taking a hard look and asking: Does this materially help cyberdefenders? Or does it make their lives more burdened with regulatory compliance? Does it make it harder for them to prioritize their limited cyberdefensive resources? And quite frankly, if we manage to put the brakes on this kind of thing and China is still doing what they're doing, we're not materially disadvantaged. We're in fact preventing material disadvantages from growing out of control.
CH: And how do we know if this isn't working? What do you expect to see if everybody's sort of racing to take advantage of these vulnerabilities?
KM: Well, we've already seen what happens when vulnerability information leaks before defenders can defend themselves. And we've actually seen that happen a lot with “friendly fire.” We've seen it happen with the attack that was supposed to be targeted at Iran's nuclear program but escaped. And that was allegedly created by Western nations. We haven't gotten our heads around the fact that keeping vulnerability information secret in order to exploit it has a much higher chance of backfiring than any government really wants to admit. I am a hacker. I like to hack things. I like to break things. But as far as I'm concerned, hoarding that information will end up hurting me. If we cannot adapt ourselves to that idea, we are going to see more and more of these friendly-fire escapes that cripple the world.
CH: It’s like the Ben Franklin line, right? Three people can keep a secret if two of them are dead.
KM: [Laughs] Something like that. Hopefully nobody dies in this process of vulnerability disclosure. I'm on a federal advisory board under NIST, which is under the Commerce Department. And we recently heard from one of the European commissioners explaining why they thought the CRA was safe. Part of it was that they weren't requiring deep technical information. We pointed out the WannaCry example with SMBv1. We pointed out the need-to-know basis in the ISO standards. And they seemed to kind of nod their heads and say, Yep, but we still think that this will drive better behavior from vendors and make everybody safer. They just didn't seem to want to hear it from the experts.
CH: And what about the U.S.? Do you think they agree with what you're saying?
KM: It's hard to say, right? Because they're probably waiting to see what the final CRA draft will look like, and we expect that language to be finalized in the next couple of months. But what we decided as an advisory board to the federal government was that we were going to explain our concerns and ask if [the U.S. government is] able to raise them with the European Union and work this out. We have no idea where the chips are going to fall. But I'm familiar with how the U.S. and these other countries operate. If you make a compelling case that this will actually have an unintended consequence of eroding national security as opposed to bolstering it, you have a shot.
Dina Temple-Raston is the host and executive producer of the Click Here podcast as well as a senior correspondent at Recorded Future News. She previously served on NPR's Investigations team, focusing on breaking news, national security, technology, and social justice, and created and hosted the award-winning Audible podcast “What Were You Thinking.”