Here's what you can do about these new phishing toolkits so you can keep using MFA to protect your own logins
A years-long research effort by computer scientists at Stony Brook University and private industry researchers has found more than 1,000 new, more sophisticated phishing automation toolkits in use around the globe. What makes this work notable is that these tools can help subvert the multi-factor authentication (MFA) of just about any website, using two key techniques: man-in-the-middle (MITM) attacks and reverse web proxies. Let's look at how the attack works, how these tools were found in the wild, and what you can do about them so you can keep using MFA to protect your own logins.

The typical MFA routine relies on an additional one-time code (usually six digits) that is only valid for about a minute after a user requests it while completing a login to their account. The code can be delivered in a variety of ways; we've previously explained why SMS is not as good as a separate MFA smartphone app. What phishing automation toolkits do is intercept this code, either by stealing a session cookie from your computer or by tricking you into sending the code to the attacker when you think you're typing it into the legitimate login page. This article explains the difference between the two methods, and it also includes a video in which the researchers who discovered the toolkits present their findings.

The phishing tools bring an impressive amount of automation to bear: they can easily fetch static copies of current web pages from targeted websites, serve them to victims, and evade detection through cloaking mechanisms, all while requiring minimal effort from the attackers. The newer generation uses malicious reverse proxy servers, which forward requests and responses between the victim and the target web server while extracting the credentials and session cookies used in the authentication process. This has two important advantages: first, the attacker doesn't have to worry about keeping a phony website up to date, since the user is seeing the real target site. Second, automation replaces the manual communication previously needed to obtain the one-time MFA passcode.

Reverse proxy servers have been around almost as long as the original web servers and are used in a variety of legitimate ways to simplify security and balance web server traffic loads. Popular open-source versions include Squid and Nginx, and they are incorporated into a variety of commercial cloud access security brokers as well. But like anything else on the internet, these servers can be used both for good and for bad. One of the more infamous users of reverse proxies was Adrian Lamo, a hacker who deployed them to break into a variety of commercial systems. I met Lamo back in 2002, before he was convicted for one such attempt against the New York Times and before he became infamous for giving up Chelsea Manning as the source who leaked US government documents to Wikileaks.

The researchers analyzed 13 different versions of three phishing toolkits and created fingerprints for the web traffic that passes through these tools. The fingerprints were encoded into a testing program called PHOCA, which the researchers have made available on GitHub for others to try out. Starting in March 2020, they ran suspected phishing sites through PHOCA for a year and found that 1,220 sites were using phishing toolkits that fit their profile.
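To give a sense of how such traffic fingerprinting could work, here is a minimal, illustrative Python sketch. It is not PHOCA's actual code, and the specific signal it uses is an assumption about one plausible approach: a reverse proxy answers the TCP and TLS handshakes itself almost immediately, but has to contact the real website before it can return any page content, so the gap between connection time and content time tends to be larger than for a site serving its own pages. The hostname and threshold are made-up example values.

```python
# Illustrative sketch only (not PHOCA): compare TCP connection setup time
# against the time needed to fetch actual page content from the same host.
import socket
import ssl
import time

def timing_ratio(host: str, port: int = 443) -> float:
    # Time to establish the TCP connection alone.
    start = time.monotonic()
    raw = socket.create_connection((host, port), timeout=10)
    tcp_time = time.monotonic() - start

    # Time to complete a TLS handshake and receive the first response bytes.
    # Certificate validation is skipped purely to keep this sketch short.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    start = time.monotonic()
    tls = ctx.wrap_socket(raw, server_hostname=host)
    tls.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    tls.recv(4096)  # first chunk of the response body/headers
    content_time = time.monotonic() - start
    tls.close()

    return content_time / tcp_time

if __name__ == "__main__":
    # Hypothetical domain and threshold, for illustration only.
    ratio = timing_ratio("suspect-login.example")
    print("possible MITM proxy" if ratio > 5.0 else "looks direct", round(ratio, 2))
```

A real detector would combine many such features and calibrate thresholds against known-good sites; this sketch only shows why a proxy in the middle leaves a measurable trace.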
That total represents a big jump in these toolkits' popularity, which could be attributed to the fact that most of these phishing tools are free to download and easy to learn how to use (thanks to various online tutorial videos and hacking forums). A diagram of how PHOCA works is reproduced below from the research paper:
(Image credit: Catching Transparent Phish: Analyzing and Detecting MITM Phishing Toolkits)

The phishing tools are also easy to deploy across a cloud hosting infrastructure, as they are quick both to set up and to remove. Half of the phishing domains were registered only a week before the attacks were launched, and a third of the tools shared an IP address with a legitimate domain. Both of these tactics make detection more difficult and offer some insight into how the attackers operate.

One of the more interesting results is that these phishing toolkits occupy a blind spot in phishing blocklists: fewer than half of the domains and only a fifth of the associated IP addresses appeared on those blocklists, including one commercial collection. That means better detection algorithms are needed to stop potential attacks, and one security vendor has already implemented the PHOCA rules in its own network scanners. "Phishing blocklist services must take a more proactive approach in discovering phishing content," say the paper's authors.

What are some key takeaways from this work? First is an interesting observation from the researchers: "The real-time traffic proxying of the MITM phishing toolkits that allows them to launch powerful phishing attacks also exposes them to fingerprinting that is not available for traditional phishing techniques." That certainly is a nice benefit for defenders.

Second, websites should implement more robust countermeasures. One way to do this is to use a separate communication channel to complete MFA logins. For instance, users could be sent a rendezvous URL through a second, secure communication channel, such as email.

Finally, websites should make more of an effort to implement the FIDO Universal 2nd Factor (U2F) protocol as a preferred MFA method. This would ultimately defeat the MITM and reverse proxy attacks.
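To illustrate why origin-bound credentials such as U2F (and its successor, WebAuthn) block this kind of attack, here is a simplified Python sketch of the server-side check involved. The field name follows the WebAuthn clientDataJSON layout, the signature verification is abbreviated, and the origin value is an example: because the browser records the origin it is actually connected to, a response collected on the attacker's look-alike domain will not validate on the real site.

```python
# Simplified sketch of the origin check behind FIDO U2F / WebAuthn phishing
# resistance. EXPECTED_ORIGIN is an example value; signature checks are omitted.
import base64
import json

EXPECTED_ORIGIN = "https://bank.example"  # the site's real origin

def verify_assertion(client_data_b64: str) -> bool:
    client_data = json.loads(base64.urlsafe_b64decode(client_data_b64))
    # The browser, not the attacker, fills in the origin it is connected to.
    # When the victim is on the proxy's look-alike domain, this check fails,
    # so the relayed login response is useless to the attacker.
    if client_data.get("origin") != EXPECTED_ORIGIN:
        return False
    # ... the signature covering this client data would be verified here ...
    return True
```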