A 19-year-old bug in XNU's Data Link Interface Layer (DLIL) that leads to an out-of-bounds write on the heap. The root cause is that ifnet_attach() gets the next interface index as a 32-bit integer and downcasts it to a uint16_t when saving it. If you can create enough interfaces to push the index past 0xFFFF, the truncation wraps the saved index back to zero. This is problematic because ifnet_attach() later uses the saved index to access the corresponding ifnet_addrs and ifindex2ifnet entries, and ifnet_addrs is fetched as ifnet_addrs[ifp->if_index - 1]. With a wrapped index, the writes ifnet_attach() performs through these entries of course land in out-of-bounds memory.
Exploiting this requires root in order to create that many interfaces. There are other challenges too: since the backing array grows fairly large, the target object you overwrite also has to be large and allocated in the same KMEM_RANGE. One promising candidate was the newofiles array in fdalloc() used for file descriptor allocation, but all that could be hit were the file flags, and getting there would mean opening about 300k file descriptors, which again requires root to bypass the open-file limit.
The author suspects this might be exploitable in some way, but decided to stop before going any further.
A heap overflow found in-the-wild by Google's Threat Analysis Group (TAG) in Chrome. The bug was in the texture subsystem for WebGL/GLES with textures created from a shared image, which bypasses the texture manager's tracking of max_levels for mipmaps.
Under normal circumstances, the texture's max_levels is computed internally when initializing the level_infos vector, and anything accessing the mipmap first calls TextureManager::ValidForTarget() to ensure the level is in-bounds. Shared-image textures bypass the texture manager, however, and manually set max_levels = 1. Since TextureManager::ValidForTarget() isn't aware of this, methods can end up accessing level_infos out-of-bounds.
An out-of-bounds read in cmark-gfm due to a missing bounds check in validate_protocol.
The validate_protocol function is intended to confirm that the data in the markdown string starts with the expected protocol. You pass in the expected protocol, the data, and an offset (called rewind) at which the comparison starts. It iterates backwards from that offset, comparing each character of the protocol to the corresponding character in data. If the protocol string is longer than the number of characters in the data buffer before the : character that triggered the check, the code simply keeps comparing characters out of bounds.
That gives an out-of-bounds read, though exploitability is fairly unlikely. It does read the heap metadata stored before the buffer, but since the function simply returns false unless each byte happens to match the expected character, it doesn't leak much useful information or do anything that could reasonably be exploited. It's still a weird bug and some non-intuitive code.