Really cool research. One thing I’ve noticed with these kinds of exploits is that they depend heavily on the target running the exact same version of the package. Offsets can change between versions even when both still contain the vulnerable code, so on top of mitigations like ASLR, the attacker can’t guarantee that every system is at the exact same patch level, which prevents large-scale reuse of a single exploit. And even if the same version is deployed across different distributions, the exploit must still be tailored to a specific distribution’s build.
So, for the exploit to be successful, two conditions must be met: the target must be running the exact same version of the vulnerable package, and that package must be a build from the specific distribution the exploit targets.
So even though this particular exploit bypassed mitigations like ASLR, ASLR still isn’t useless: it limits the attacker’s ability to exploit systems en masse.
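The version-and-ASLR sensitivity above is easy to observe directly. This is a small sketch (my own, not from the article) that compares where glibc’s `printf` lands in two fresh processes; it assumes Linux with ASLR enabled (the `kernel.randomize_va_space = 2` default):

```python
# Sketch: observe ASLR by comparing libc's printf address across two
# fresh processes. Assumes Linux/glibc with ASLR enabled (the default).
import ctypes
import subprocess
import sys

SNIPPET = (
    "import ctypes;"
    "libc = ctypes.CDLL(None);"
    "print(ctypes.cast(libc.printf, ctypes.c_void_p).value)"
)

def printf_address() -> int:
    """Spawn a fresh interpreter and report where libc's printf landed."""
    out = subprocess.run([sys.executable, "-c", SNIPPET],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

a, b = printf_address(), printf_address()
print(hex(a), hex(b))
```

With ASLR on, the two addresses almost always differ, yet their low 12 bits match, since randomization is page-granular and the symbol’s offset within the library is fixed by the build. That fixed within-build offset is exactly what changes between package versions.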
How much variation is there between different implementations of ASLR (incl. malloc randomization)?
The article mentions strong heap consistency, where the overflowed buffer and another object land in a predictable arrangement relative to each other, even under what it calls “decent 64-bit ASLR.” Is it the case that no practical ASLR implementation does enough to shuffle such a pair of small allocations in a fresh process? Would OpenBSD fare better here?
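To make the heap-consistency point concrete: here’s a toy model (my own illustration, not any real allocator) of a bump allocator with a randomized base. Randomizing where the heap starts doesn’t change the relative offset between two back-to-back allocations:

```python
# Toy model: heap-base randomization alone doesn't break "heap
# consistency" -- a bump allocator with a random base still places
# back-to-back allocations at a fixed relative offset. Illustrative
# sketch only, not a model of glibc malloc or any real allocator.
import random

def bump_allocator(base: int):
    cursor = base
    def alloc(size: int, align: int = 16) -> int:
        nonlocal cursor
        cursor = (cursor + align - 1) & ~(align - 1)  # round up to alignment
        addr = cursor
        cursor += size
        return addr
    return alloc

deltas = set()
for _ in range(100):
    base = random.getrandbits(47) & ~0xfff  # random page-aligned heap base
    alloc = bump_allocator(base)
    overflowed = alloc(0x30)  # the buffer that gets overflowed
    victim = alloc(0x30)      # the adjacent target object
    deltas.add(victim - overflowed)

print(deltas)  # a single fixed delta, regardless of the randomized base
```

Defeating this kind of consistency takes per-allocation randomization (random gaps or shuffled freelists, as in hardened allocators), not just a randomized heap base; whether OpenBSD’s malloc randomizes enough for a pair of small allocations in a fresh process is exactly the open question above.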
Chris is on a roll against gstreamer! This exploit was ridiculous, mad props. I wonder how reliable it is.