A fairly simple security-control bypass in Apache Batik's DefaultScriptSecurity and DefaultExternalResourceSecurity controls. Since Batik must be able to load SVG files (and associated resources) from either a local or remote source, it makes for an interesting target for SSRF and/or RCE. Of course, the aforementioned security controls are in place to prevent an attacker from simply loading a resource from their own remote server: a local SVG file may load local scripts and resources but not remote ones, and a remote SVG file may load scripts over HTTP (or any other supported protocol), but only from the same host. The problem is that DefaultScriptSecurity and DefaultExternalResourceSecurity relied on Java's getHost() method to check whether the document and script URLs matched.
One of the supported protocols is Java Archives (JARs), and unfortunately getHost() will always return null for JAR URLs. To resolve the host properly, you have to call getFile() to get the inner URL and then call getHost() on that. This effectively gives an attacker the ability to load JAR files from an arbitrary remote host, which can be used as an SSRF to send requests (or NTLM relays) to an attacker server, or escalated to RCE.
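The getHost() quirk is easy to reproduce with java.net.URL alone (attacker.example is a placeholder host, not from the original writeup):

```java
import java.net.URL;

public class JarUrlHost {
    public static void main(String[] args) throws Exception {
        // A JAR URL wrapping a remote archive.
        URL jarUrl = new URL("jar:https://attacker.example/evil.jar!/payload.js");

        // The naive check: getHost() never sees through the jar: wrapper,
        // so a host comparison against the document URL silently passes.
        System.out.println("naive host: " + jarUrl.getHost());

        // The correct resolution: getFile() returns the inner URL as a string;
        // parsing that and calling getHost() yields the real remote host.
        URL inner = new URL(jarUrl.getFile());
        System.out.println("real host:  " + inner.getHost()); // attacker.example
    }
}
```

A same-host check built on the first getHost() call never sees "attacker.example", which is exactly the gap Batik's security classes fell into.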
RCE was a bit interesting, as the straightforward route of just providing a malicious Java class was blocked by a bug in Batik's script-type allow list. Regardless, ECMAScript was supported via Mozilla's Rhino, which has a known code-execution vector: abusing string concatenation into an eval() with a String() constructor that uses java.lang.Runtime.getRuntime().exec() to run an arbitrary shell command.
A cool look at finding a vulnerability on a statically generated website, due to the presence of an image optimizer running as a serverless function. The Netlify IPX function would normally validate image URLs before fetching them to ensure the host is allowlisted (none by default); however, this check is skipped when it believes the URL is local, meaning it does not start with http. The vulnerability is that when fetching one of these local URLs, a protocol would be prepended to the URL, and this protocol could be attacker-controlled through the x-forwarded-proto header, prepended without any validation. This allows a protocol like https://attacker.com? to be used to get an SSRF.
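A minimal sketch of the flawed URL construction (the function and host names here are illustrative, not Netlify's actual code): the "local" branch trusts the forwarded protocol completely.

```java
import java.net.URL;

public class IpxUrlSketch {
    // Hypothetical reconstruction: remote URLs (starting with "http") would
    // hit the host allowlist, but "local" paths get the request's
    // x-forwarded-proto value prepended without validation.
    static String buildFetchUrl(String path, String forwardedProto) {
        if (path.startsWith("http")) {
            return path; // allowlist check would happen here (omitted)
        }
        String proto = (forwardedProto != null) ? forwardedProto : "https";
        return proto + "://" + "example.netlify.app" + path; // placeholder site host
    }

    public static void main(String[] args) throws Exception {
        // A forwarded proto of "https://attacker.example?" turns the intended
        // host and path into a mere query string of the attacker's URL.
        String url = buildFetchUrl("/cat.svg", "https://attacker.example?");
        System.out.println(url);
        System.out.println("fetch goes to: " + new URL(url).getHost());
    }
}
```

The trailing `?` is what makes the trick work: everything the server appends afterwards is parsed as a query string, so the fetch is directed entirely at the attacker's host.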
As it is an image cache/optimizer, the file type did need to be an image, but this could still be abused with an SVG, which can contain JavaScript to run on any victim. Since x-forwarded-proto didn't influence the cache key, any following user requesting the same image would be served the cached (attacker-controlled) one for as long as the cache entry was active.
The post ends with a somewhat less interesting (in my opinion) issue in GatsbyJS, where a full-read SSRF could be obtained through a similar file (not only image) proxy mechanism; however, this could only be accessed while the Gatsby server was actually running and serving the site, rather than just building it.
The problem starts in remove_liquidity, where a contract can remove funds it previously added. It will update total_supply and burn tokens, then, in a loop over each coin, decrement balances and transfer the coins to the attacker's contract. This is where control of execution passes back to the attacker via their fallback method, while the contract's state is inconsistent: total_supply has been decremented, but not all of the balances values have been updated yet.
This is where get_virtual_price comes into play. It is an @external (callable by other contracts) @view (doesn't change any state) function and, as a view, has no reentrancy guard. It calculates the price of the LP's tokens based on the balances and total supply, leading to an incorrect calculation since the balances have not yet been fully updated. So any other protocol that depends on this function and trusts its result could be manipulated.
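The shape of the bug can be sketched in a few lines (this mirrors the structure described above, not the actual Vyper code; the numbers are arbitrary):

```java
public class PoolSketch {
    long totalSupply = 1000;
    long[] balances = {500, 500};

    // Analogue of get_virtual_price: a read-only view with no reentrancy
    // guard, pricing LP tokens off balances and total supply.
    long virtualPrice() {
        long sum = 0;
        for (long b : balances) sum += b;
        return sum * 1000 / totalSupply; // scaled price of one LP token
    }

    void removeLiquidity(long lp, Runnable attackerFallback) {
        long supplyBefore = totalSupply;
        totalSupply -= lp;                            // 1) supply burned up front
        for (int i = 0; i < balances.length; i++) {
            long out = balances[i] * lp / supplyBefore;
            balances[i] -= out;                       // 2) balances updated one coin at a time
            attackerFallback.run();                   // 3) "transfer" hands control to the attacker
        }
    }

    public static void main(String[] args) {
        PoolSketch pool = new PoolSketch();
        System.out.println("before: " + pool.virtualPrice());              // 1000
        pool.removeLiquidity(500, () ->
            System.out.println("mid-transfer: " + pool.virtualPrice()));   // inflated on the first coin
        System.out.println("after: " + pool.virtualPrice());               // back to 1000
    }
}
```

With these numbers the price reads 1500 during the first transfer (supply halved, only one balance decremented) even though it is 1000 both before and after the call, which is exactly the window a dependent protocol would be tricked in.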
As the title says, some weird load balancer issues, the core problem being that user-specific data would be cached and returned to other users.
They detail four cases, but they are all largely the same and somewhat random: a cache entry expires, and another user's details get cached and start appearing. The last vulnerability was the most interesting, as it involved a JavaScript file that would call a function to set the user's Authorization header. This page was cached using the loc parameter as a key, so an attacker could craft a page with an arbitrary loc parameter, send it to a victim, and get the page cached with the victim's authorization header.
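A toy model of that last caching mistake (all names hypothetical): the cache key is derived only from loc, so whichever user populates the entry first bakes their Authorization value into the script served to everyone else.

```java
import java.util.HashMap;
import java.util.Map;

public class LocCacheSketch {
    // Cache keyed only on the loc parameter, ignoring which user's data
    // was used to render the body -- the core mistake.
    static final Map<String, String> cache = new HashMap<>();

    static String render(String loc, String userAuthToken) {
        return cache.computeIfAbsent(loc,
                k -> "setAuthHeader('" + userAuthToken + "'); /* loc=" + k + " */");
    }

    public static void main(String[] args) {
        // Victim requests the attacker-chosen loc first; their token is cached.
        System.out.println(render("attacker-loc-1337", "victim-token"));
        // Attacker requests the same loc and receives the victim's token.
        System.out.println(render("attacker-loc-1337", "attacker-token"));
    }
}
```

The fix in such designs is to include every response-affecting input (here, the user identity) in the cache key, or to mark per-user responses uncacheable.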
Funny bug in Tasks.org, an open-source reminder and to-do-list tracking app. The vulnerability is a lack of path validation in the ShareLinkActivity's share intent. The activity will accept arbitrary paths intended as "attachment files" and copy the files into the app's external storage directory. An attacker can provide a path to Tasks.org's internal storage files (such as the local user database or preference files) and have them copied to the publicly accessible external storage. The database can contain credentials for CalDAV integration if it's enabled, though passwords are encrypted, mitigating the impact.
When performing a BulkImport, it is possible to provide a URL to httpUrlToRepo that will resolve to a repository on the local filesystem. Although GitLab::UrlBlocker.Validate is used to validate the provided URL, no allowlist of schemes is enforced, meaning the file:// scheme can be used as long as the rest of the URL passes validation. This allows an attacker to import any repository already on the host filesystem; because GitLab project storage paths are based on the SHA2 of the project ID, they can determine the location of a given project on the filesystem.
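Two pieces of this are easy to sketch (in Java rather than GitLab's Ruby, with illustrative names): a scheme allowlist of the kind that was missing, and the hashed-storage path derivation, which for GitLab's hashed storage is the SHA-256 of the decimal project ID.

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class GitLabImportSketch {
    // The missing check, sketched: accept only http(s) repo URLs, rejecting
    // file:// and friends regardless of whether the rest of the URL validates.
    static boolean schemeAllowed(String url) throws Exception {
        String scheme = new URI(url).getScheme();
        return "http".equals(scheme) || "https".equals(scheme);
    }

    // GitLab hashed storage lays repositories out under
    // @hashed/<aa>/<bb>/<sha>.git, where <sha> is SHA-256 of the project ID.
    static String hashedRepoPath(long projectId) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Long.toString(projectId).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        String h = hex.toString();
        return "@hashed/" + h.substring(0, 2) + "/" + h.substring(2, 4) + "/" + h + ".git";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(schemeAllowed("file:///var/opt/gitlab/git-data/x.git")); // false
        System.out.println(schemeAllowed("https://example.com/repo.git"));          // true
        System.out.println(hashedRepoPath(1));
    }
}
```

Since the project ID is the only input to the hash, knowing (or guessing) a target project's ID is enough to point the file:// URL at its on-disk repository.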
For this report they targeted the GitLab Capture the Flag repo, a special repository containing a flag that can be captured to prove access to data for bypass vulnerabilities that would otherwise score a low CVE. I believe this is the first time the $20,000 bonus for capturing that flag has been claimed.