Written by Adam Koblentz
In my last post, I briefly recapped Microsoft’s recent identity compromise and explored some of the ways that most organizations, including identity and access management (IAM) providers themselves, still have a major gap in their ability to detect and stop account takeovers. In the days since, we’ve learned a bit more about what happened at Microsoft. So let’s take a closer look at the latest developments – and what we can learn from them as the story continues to unfold.
Here is some notable new information about this incident that has trickled out since I shared my initial commentary.
In a blog post, Microsoft’s threat intelligence team acknowledged that MFA was not in use on the legacy tenant that was the initial point of compromise. This was undoubtedly out of compliance with Microsoft’s security policies. But it reflects the reality that even organizations with sophisticated security tools, expertise, and policies can make mistakes that create openings for threat actors.
One of the biggest unanswered questions at the time of my original post was how the threat actors extended their campaign beyond the legacy tenant they compromised to Microsoft’s production email system. Microsoft has since revealed that the threat actors found an OAuth application in the test environment that had elevated access to the Microsoft corporate email infrastructure. They exploited this to create additional malicious OAuth applications and grant themselves the necessary privileges to read user mailboxes in production.
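To make that pivot concrete: the kind of exposure the attackers found can often be surfaced with a periodic audit of which OAuth applications in a tenant hold broad, application-level mailbox permissions. Below is a minimal Python sketch of such an audit against the Microsoft Graph API. It assumes a Graph access token with directory read permissions is available in a GRAPH_TOKEN environment variable, and the SENSITIVE_ROLES list is an illustrative assumption rather than an exhaustive catalog. This is not Microsoft's published hunting guidance, just a rough starting point.

```python
"""
Hypothetical sketch: enumerate OAuth applications (service principals) in a tenant
and flag those holding high-privilege application permissions such as mailbox access.
Assumes a Microsoft Graph access token with directory read scope is available in the
GRAPH_TOKEN environment variable; the privilege list below is an illustrative assumption.
"""
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

# Illustrative, not exhaustive: permission names treated here as mailbox-level access.
SENSITIVE_ROLES = {"Mail.Read", "Mail.ReadWrite", "Mail.Send", "full_access_as_app"}

def paged(url):
    """Follow @odata.nextLink paging and yield every item."""
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("value", [])
        url = body.get("@odata.nextLink")

def role_name(resource_roles_cache, assignment):
    """Resolve an appRoleAssignment's GUID to a human-readable permission name."""
    resource_id = assignment["resourceId"]
    if resource_id not in resource_roles_cache:
        resp = requests.get(f"{GRAPH}/servicePrincipals/{resource_id}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        resource_roles_cache[resource_id] = {
            role["id"]: role["value"] for role in resp.json().get("appRoles", [])
        }
    return resource_roles_cache[resource_id].get(assignment["appRoleId"], "<unknown>")

def main():
    cache = {}
    for sp in paged(f"{GRAPH}/servicePrincipals"):
        assignments = paged(f"{GRAPH}/servicePrincipals/{sp['id']}/appRoleAssignments")
        granted = {role_name(cache, a) for a in assignments}
        risky = granted & SENSITIVE_ROLES
        if risky:
            print(f"[!] {sp.get('displayName')} ({sp['id']}) holds: {', '.join(sorted(risky))}")

if __name__ == "__main__":
    main()
```

Running something like this on a schedule and diffing the output against a known-good list is one low-effort way to spot a forgotten test application that has quietly accumulated mailbox-level access.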
In its original blog post, Microsoft said that there was “no evidence that the threat actor had any access to customer environments.” However, there have since been reports that Microsoft customers have been successfully targeted with similar attacks. The highest-profile example is HPE, which reportedly had its Microsoft-based email infrastructure breached by the same threat actor group in a similar manner in May 2023, months before Microsoft’s own breach was detected. Meanwhile, The Washington Post reported that it has heard from sources “in and out of government” that “more than 10 companies, and perhaps far more, are expected to come forward” to disclose that they have been affected.
One other notable fact that Microsoft shared is that the threat actors used residential networks as proxies to obfuscate their activities. More specifically, they routed their activity through many different IP addresses to access the initial test tenant and, ultimately, Microsoft’s production email system. Because these IP addresses change frequently, and could also belong to legitimate Microsoft users, the campaign was more difficult to detect using traditional detection and threat hunting techniques.
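As a rough illustration of why this pattern is hard to catch with static rules, and what a simple behavioral heuristic might look like instead, here is a hypothetical Python sketch that flags identities whose sign-ins fan out across an unusually large number of distinct IP addresses in a short window. The record fields and thresholds are assumptions for illustration only, not a prescribed detection rule.

```python
"""
Hypothetical sketch: flag identities whose sign-ins spread across an unusually
large number of distinct IP addresses in a short window, a rough proxy for the
rotating residential-proxy pattern described above. The record fields
("user", "ip", "timestamp") and the thresholds are illustrative assumptions.
"""
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
MAX_DISTINCT_IPS = 5  # illustrative threshold; tune against your own baseline

def flag_rotating_sources(signins):
    """signins: iterable of dicts with 'user', 'ip', and ISO-8601 'timestamp'."""
    per_user = defaultdict(list)
    for event in signins:
        per_user[event["user"]].append((datetime.fromisoformat(event["timestamp"]), event["ip"]))

    alerts = []
    for user, events in per_user.items():
        events.sort()
        # Sliding window: count distinct IPs among events within WINDOW of each anchor event.
        for i, (anchor_ts, _) in enumerate(events):
            ips = {ip for ts, ip in events[i:] if ts - anchor_ts <= WINDOW}
            if len(ips) > MAX_DISTINCT_IPS:
                alerts.append((user, anchor_ts, len(ips)))
                break  # one alert per user is enough for this sketch
    return alerts

if __name__ == "__main__":
    sample = [
        {"user": "svc-test", "ip": f"203.0.113.{i}", "timestamp": f"2024-01-10T0{i}:00:00"}
        for i in range(8)
    ]
    for user, ts, count in flag_rotating_sources(sample):
        print(f"[!] {user}: {count} distinct IPs within {WINDOW} starting {ts}")
```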
As I noted in my original commentary, most security-conscious organizations are already using MFA broadly. But as this example shows, MFA and other preventative identity protection measures are not infallible. In addition to this situation, where a policy compliance breakdown occurred, there are many other examples of MFA being successfully defeated through social engineering, SIM swapping, and other methods. This doesn’t diminish the value of MFA. But it does reinforce the importance of combining preventative measures like MFA with identity threat detection and response (ITDR) capabilities that can spot anomalous activity by authenticated identities. This would have allowed Microsoft to detect much sooner that its test accounts were being used in abnormal ways.
The threat actors’ exploitation of OAuth applications illustrates how my point about MFA applies equally to privileged access management (PAM). Like MFA, privilege management is another essential best practice. It’s also something that Microsoft is undoubtedly very good at. The problem is that defenders need to be perfect, while threat actors only need to succeed once to cause severe business impact. So, just as ITDR based on behavioral analytics would have detected abnormal activity originating from trusted identities, it would also have provided the best opportunity to detect anomalous use of the OAuth applications themselves sooner.
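For illustration, here is a hypothetical sketch of what baselining OAuth application behavior could look like in its simplest form: record which mailboxes each application normally touches, then alert when an app suddenly reaches well outside that set. The event shape and threshold are assumed for the example; a real ITDR platform models far richer behavior than this.

```python
"""
Hypothetical sketch: baseline which mailboxes each OAuth application normally
touches and alert when an app suddenly reaches far outside that baseline.
Field names ("app_id", "mailbox") and the thresholds are illustrative assumptions.
"""
from collections import defaultdict

def build_baseline(historical_events):
    """Map each app to the set of mailboxes it accessed during the baseline period."""
    baseline = defaultdict(set)
    for event in historical_events:
        baseline[event["app_id"]].add(event["mailbox"])
    return baseline

def detect_deviations(baseline, new_events, max_new_mailboxes=3):
    """Alert when an app touches more previously-unseen mailboxes than the threshold."""
    unseen = defaultdict(set)
    for event in new_events:
        if event["mailbox"] not in baseline.get(event["app_id"], set()):
            unseen[event["app_id"]].add(event["mailbox"])
    return {app: boxes for app, boxes in unseen.items() if len(boxes) > max_new_mailboxes}

if __name__ == "__main__":
    history = [{"app_id": "legacy-test-app", "mailbox": f"qa{i}@example.com"} for i in range(3)]
    today = [{"app_id": "legacy-test-app", "mailbox": f"exec{i}@example.com"} for i in range(10)]
    for app, mailboxes in detect_deviations(build_baseline(history), today).items():
        print(f"[!] {app} accessed {len(mailboxes)} mailboxes outside its baseline")
```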
As Alex Stamos noted in his insightful blog post on the incident, Microsoft makes liberal use of the word “legacy” to paper over the process breakdowns that allowed this campaign to occur and escalate. Whether their use of this term is valid or spin, it highlights an important point. Developing a comprehensive security strategy is hard. One of the things that makes it so hard is considering all of the legacy applications, one-off development or test systems, and shadow APIs that permeate most organizations. Security teams naturally focus the bulk of their attention and resources on the highest-profile targets. But ignoring the long tail of legacy one-offs and shadow APIs that likely exist in most environments isn’t a viable option. Threat actors certainly won’t ignore them. At the same time, their obscurity makes it impossible for security teams to anticipate every potential way that they could be exploited. This is yet another application of, you guessed it, anomaly detection.
In my last post, I noted that Microsoft’s identity-based breach is the second recent instance of a major IAM vendor’s trusted identities being compromised, following a late 2023 breach at Okta. Microsoft and Okta are both world-class IAM providers. But these incidents demonstrate that protection of your trusted identities cannot be 100 percent outsourced to a single vendor. It is notable that the Okta breach was originally detected by one of its customers. And Microsoft is facing criticism about the detail and timeliness of its communication about its identity breach. Does that mean that you shouldn’t rely on Okta or Microsoft as part of your identity protection strategy? Of course not. But it underscores the importance of controlling your own destiny when it comes to understanding how your trusted identities are interacting with your applications.
With the benefit of hindsight, would it have been possible to develop a set of detection rules that would have instantly spotted an identity-based attack like the one that hit Microsoft? Likely, yes. In fact, Microsoft suggests a set of detection measures in its latest blog post. So why didn’t Microsoft follow its own advice? The answer is that what is obvious in retrospect is often impossible to predict the first time it happens. Threat actors are maddeningly resourceful when it comes to devising novel tactics to avoid detection. You should, of course, still try. But relying on pre-defined detection rules alone will likely cause you to miss novel attacks while simultaneously chasing your tail with false-positive alerts. The modern threat landscape requires the ability to understand and baseline normal behavior in your environment and use any detected deviations from the norm as the trigger for further investigation.
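To show the difference in approach, here is a hypothetical sketch that learns a per-identity baseline of daily activity volume and treats large statistical deviations, rather than a hard-coded rule, as the trigger for investigation. The data shape and the 3-sigma threshold are illustrative assumptions, not a recommended production detector.

```python
"""
Hypothetical sketch: instead of a fixed rule, learn a per-identity baseline of
daily activity volume and treat large statistical deviations as investigation
triggers. The event shape and the 3-sigma threshold are illustrative assumptions.
"""
import statistics

def baseline_stats(history):
    """history: dict of user -> list of daily event counts from a known-good period."""
    stats = {}
    for user, counts in history.items():
        if len(counts) >= 2:
            stats[user] = (statistics.mean(counts), statistics.pstdev(counts))
    return stats

def deviation_triggers(stats, today_counts, sigmas=3.0):
    """Return users whose activity today deviates markedly from their own baseline."""
    triggers = []
    for user, count in today_counts.items():
        mean, stdev = stats.get(user, (None, None))
        if mean is None:
            triggers.append((user, count, "no baseline"))  # new identity: worth a look too
        elif stdev == 0 and count != mean:
            triggers.append((user, count, f"baseline constant at {mean:.0f}"))
        elif stdev and abs(count - mean) / stdev > sigmas:
            triggers.append((user, count, f"{abs(count - mean) / stdev:.1f} sigma from mean {mean:.0f}"))
    return triggers

if __name__ == "__main__":
    history = {"alice": [40, 38, 45, 41, 39], "legacy-test": [2, 1, 2, 2, 3]}
    today = {"alice": 43, "legacy-test": 250, "new-svc-account": 90}
    for user, count, reason in deviation_triggers(baseline_stats(history), today):
        print(f"[?] {user}: {count} events today ({reason})")
```

The point of the design is that the trigger comes from each identity's own history rather than from a rule someone had to write in advance, which is what gives this approach a chance against tactics nobody has seen yet.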
We’ll continue to monitor for further developments arising from Microsoft’s identity breach and share any further thoughts. But hopefully, these initial lessons help you broaden your thinking about what it takes to protect your trusted identities and applications against abuse.
Read more here about how Reveal Security can monitor all log activities and detect abnormalities at scale for post-authenticated users in Microsoft 365 and other business-critical SaaS applications.
Contact us to request a personalized demo.