Malicious Python Package Targets macOS Developers To Access Their GCP Accounts
July 26, 2024 | Author: checkmarx.com

In a recent investigation, we discovered that the Python package “lr-utils-lib” contained hidden malicious code. The code, activated upon installation, targets macOS systems and attempts to steal Google Cloud Platform credentials by sending them to a remote server. We also discovered a fake LinkedIn profile for “Lucid Zenith,” who falsely claimed to be the CEO of Apex Companies, LLC, indicating possible social engineering tactics. Alarmingly, AI-powered search engines such as Perplexity handled this false claim inconsistently, with at least one repeating it as fact, highlighting significant cybersecurity challenges in the digital age.

Key Points

  • A package called “lr-utils-lib” was uploaded to PyPI in early June 2024, containing malicious code that executes automatically upon installation.
  • The malware uses a list of predefined hashes to target specific macOS machines and attempts to harvest Google Cloud authentication data.
  • The harvested credentials are sent to a remote server.

Attack Flow

The malicious code is located within the setup.py file of the Python package, which allows it to execute automatically upon installation.

The original code was heavily obfuscated; the analysis below is based on a simplified, deobfuscated version.
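For context, the snippet below is not the attackers’ code. It is a minimal, benign sketch showing how a cmdclass override in setup.py lets arbitrary code run when a source distribution is installed with pip, which is the mechanism this package abuses. The package name and printed message are placeholders for illustration only.

# setup.py - benign illustration of an install-time hook (not the malware)
from setuptools import setup
from setuptools.command.install import install

class PostInstallCommand(install):
    """Custom install command whose run() executes during `pip install`."""
    def run(self):
        install.run(self)
        # A malicious package would place its payload here; this sketch
        # only prints a message to demonstrate install-time execution.
        print("custom install step executed")

setup(
    name="example-package",  # hypothetical name for illustration
    version="0.1.0",
    cmdclass={"install": PostInstallCommand},
)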

Upon activation, the malware first verifies that it’s operating on a macOS system, its primary target. It then proceeds to retrieve the IOPlatformUUID of the Mac device (a unique identifier) and hashes it using the SHA-256 algorithm.

This resulting hash is then compared against a predefined list of 64 Mac UUID hashes, indicating a highly targeted attack and suggesting the attackers had prior knowledge of their intended victims’ systems.
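A Mac user can reproduce the same fingerprint defensively: compute the SHA-256 of the machine’s IOPlatformUUID and compare it against the hashes published as IOCs. The sketch below is an illustration, not the malware’s code; TARGET_HASHES is a placeholder, and the exact string normalization (case, dashes) used by the malware is not reproduced here, so a non-match is not conclusive.

import hashlib
import re
import subprocess

def platform_uuid_sha256() -> str:
    """Return the SHA-256 hex digest of this Mac's IOPlatformUUID."""
    out = subprocess.run(
        ["ioreg", "-rd1", "-c", "IOPlatformExpertDevice"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r'"IOPlatformUUID"\s*=\s*"([^"]+)"', out)
    if not match:
        raise RuntimeError("IOPlatformUUID not found in ioreg output")
    return hashlib.sha256(match.group(1).encode()).hexdigest()

# Placeholder: fill in with the 64 target hashes published as IOCs.
TARGET_HASHES: set[str] = set()

if platform_uuid_sha256() in TARGET_HASHES:
    print("This machine's UUID hash appears in the target list.")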

If a match is found in the hash list, the malware’s data exfiltration process begins. It attempts to access two critical files within the ~/.config/gcloud directory: application_default_credentials.json and credentials.db. These files typically contain sensitive Google Cloud authentication data. The malware then attempts to transmit the contents of these files via HTTPS POST requests to a remote server identified as europe-west2-workload-422915[.]cloudfunctions[.]net.

This data exfiltration, if successful, could provide the attackers with unauthorized access to the victim’s Google Cloud resources.
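For anyone who installed the package on a Mac, a quick triage step is to check whether the targeted credential files exist and, if so, treat them as compromised. A minimal sketch, assuming the default gcloud configuration directory:

from pathlib import Path

gcloud_dir = Path.home() / ".config" / "gcloud"
for name in ("application_default_credentials.json", "credentials.db"):
    path = gcloud_dir / name
    if path.exists():
        # If lr-utils-lib ran on this machine, assume these credentials were
        # exfiltrated: revoke them (gcloud auth application-default revoke,
        # gcloud auth revoke) and rotate any exposed service account keys.
        print(f"Found {path} - treat the credentials inside as compromised.")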

CEO Impersonation

The social engineering aspect of this attack, while not definitively linked to the malware itself, presents an interesting dimension. A LinkedIn profile was discovered under the name “Lucid Zenith”, matching the name of the package owner. This profile falsely claims that Lucid Zenith is the CEO of Apex Companies, LLC. The existence of this profile raises questions about potential social engineering tactics that could be employed alongside the malware.

We queried various AI-powered search engines and chatbots about Lucid Zenith’s position and received inconsistent responses. One AI-powered search engine, Perplexity, confirmed the false claim outright, without mentioning the real CEO.

Perplexity returned the same incorrect answer across multiple phrasings of the question.

This is striking, since the search engine could easily have verified the claim against the official company page, or noticed that two LinkedIn profiles claimed the same title.

Other AI platforms, to their credit, correctly stated when repeatedly questioned that Lucid Zenith was not the CEO and named the actual CEO. This discrepancy underscores the variability of AI-generated responses and the risk of relying on a single AI source for verification. AI systems can propagate incorrect information, so cross-referencing multiple sources and maintaining a critical approach remain essential when using AI-powered tools for information gathering. Whether or not the manipulation was deliberate, it highlights a weakness in current AI-powered information retrieval and verification systems that malicious actors could exploit, for instance to lend credibility to the delivery of malicious packages.

Why does this matter? It shows how social engineering attacks can complement technical exploits like the malicious “lr-utils-lib” package.

Conclusion

The analysis of the malicious “lr-utils-lib” Python package reveals a deliberate attempt to harvest and exfiltrate Google Cloud credentials from macOS users. This behavior underscores the critical need for rigorous security practices when using third-party packages. Users should ensure they are installing packages from trusted sources and verify the contents of setup scripts (a simple approach is sketched below). The associated fake LinkedIn profile, and the inconsistent handling of that false information by AI-powered search engines, highlight broader cybersecurity concerns. The incident is a reminder of the limitations of AI-powered tools for information verification, drawing parallels to issues such as package hallucinations, and it underscores the need for strict vetting processes, multi-source verification, and a culture of critical thinking.
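One practical way to verify a setup script before installing is to download the package’s source distribution without installing it and scan setup.py for install-time hooks or signs of obfuscation. A rough sketch follows; the pattern list is illustrative rather than exhaustive, and a clean scan does not prove a package is safe.

import subprocess
import sys
import tarfile
import tempfile
from pathlib import Path

SUSPICIOUS = ("cmdclass", "base64", "exec(", "eval(", "urlopen", "requests.post")

def inspect_sdist(package: str) -> None:
    """Download a package's sdist (without installing it) and scan setup.py."""
    with tempfile.TemporaryDirectory() as tmp:
        subprocess.run(
            [sys.executable, "-m", "pip", "download", "--no-deps",
             "--no-binary", ":all:", "--dest", tmp, package],
            check=True,
        )
        for archive in Path(tmp).glob("*.tar.gz"):
            with tarfile.open(archive) as tar:
                for member in tar.getmembers():
                    if member.name.endswith("setup.py"):
                        text = tar.extractfile(member).read().decode(errors="replace")
                        hits = [p for p in SUSPICIOUS if p in text]
                        print(member.name, "->", ", ".join(hits) if hits else "no obvious flags")

if __name__ == "__main__":
    inspect_sdist(sys.argv[1])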

It is not clear whether this attack targeted individuals or enterprises, but its implications for enterprises can be substantial. Although the initial compromise usually occurs on an individual developer’s machine, a developer who unknowingly uses a compromised package can introduce vulnerabilities into the company’s software projects. This could lead to unauthorized access, data breaches, and other security issues, weakening the organization’s cybersecurity posture and potentially causing financial and reputational damage.

As part of the Checkmarx Supply Chain Security solution, our research team continuously monitors suspicious activities in the open-source software ecosystem. We track and flag “signals” that may indicate foul play and promptly alert our customers to help protect them.

PACKAGES

  • lr-utils-lib

IOC

  • europe-west2-workload-422915[.]cloudfunctions[.]net
  • lucid[.]zeniths[.]0j@icloud[.]com

Source: https://checkmarx.com/blog/malicious-python-package-targets-macos-developers-to-access-their-gcp-accounts/