that’s a bit sad. i’ll pour a forty out. thanks to miod for pretty tirelessly keeping a lot of these old things running years and years beyond when anyone else would have. :)
and red hat still hasn’t given anything. sigh. time for more internal kvetching.
Does RH patch OpenSSH? I would have thought they did.
They have sent patches in (44 by my count), but I think the OC’s gripe is about giving money.
Well, aren’t patches also a valid form of contribution? The money would have gone into making patches, wouldn’t it?
“I re-ran Coverity after disabling OpenSSL’s custom freelist and also hacking CRYPTO_malloc() and friends to just directly call the obvious function from the malloc family. This caused Coverity to report 173 new defects: mostly use-after-free and resource leaks. Heartbleed wasn’t in the list, however, so I stand by my guess (above) that perhaps something related to indirection caused this defect to not be ranked highly enough to be reported.”
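In case it isn’t obvious what “hacking CRYPTO_malloc() and friends” would look like, here is a minimal sketch. The signatures follow the OpenSSL 1.0.x era (CRYPTO_free took only a pointer back then); treat it as illustrative, not a drop-in patch:

```c
/*
 * Sketch of the experiment described above: route OpenSSL's wrapper
 * functions straight to the system allocator, so the real allocation
 * lifetimes become visible to the analyzer and to malloc itself.
 * Illustrative only; signatures are from the 1.0.x era.
 */
#include <stdlib.h>

void *CRYPTO_malloc(int num, const char *file, int line)
{
    (void)file; (void)line;   /* call-site breadcrumbs, unused here */
    return num > 0 ? malloc((size_t)num) : NULL;
}

void *CRYPTO_realloc(void *addr, int num, const char *file, int line)
{
    (void)file; (void)line;
    return realloc(addr, (size_t)num);
}

void CRYPTO_free(void *ptr)
{
    free(ptr);
}
```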
tip of the iceberg. wag of the finger.
the most horrifying part of this blog entry: “Unless libssl was compiled with the OPENSSL_NO_BUF_FREELISTS option (it wasn’t), libssl will maintain its own freelist, rendering any possible mitigation strategy performed by malloc useless. Yes, OpenSSL includes its own builtin exploit mitigation mitigation.”
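For anyone who hasn’t read the post, the pattern in question is roughly a LIFO freelist layered over malloc. This toy sketch (mine, not OpenSSL’s actual code) shows why it neuters allocator-level defenses: once a buffer goes on the list, free() is never called on it, so guard pages, junk-on-free, and use-after-free detection in the real allocator never fire, and the next “allocation” hands back old contents verbatim:

```c
/* Toy freelist; NOT OpenSSL's actual code. */
#include <stdlib.h>
#include <string.h>

struct buf { struct buf *next; unsigned char data[16384]; };

static struct buf *freelist;

static struct buf *buf_get(void)
{
    struct buf *b = freelist;
    if (b != NULL) {
        freelist = b->next;     /* recycled buffer: old contents intact */
        return b;
    }
    return malloc(sizeof *b);   /* the real allocator only sees this path */
}

static void buf_put(struct buf *b)
{
    b->next = freelist;         /* no free(): malloc never gets it back */
    freelist = b;
}

int main(void)
{
    struct buf *a = buf_get();
    memcpy(a->data, "secret", 7);
    buf_put(a);
    struct buf *b = buf_get();  /* same memory, "secret" still readable */
    return b->data[0] == 's' ? 0 : 1;
}
```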
it terrifies me that we rely so heavily on openssl.
An application doing its own memory management on top of malloc() does not seem horrifying to me. (Not that I would argue that OpenSSL is non-horrifying in general.)
the question is why it feels it’s necessary. i don’t disagree fundamentally, but if the logic in openssl is that malloc was slow on some ancient version of VMS or something, then they’ve hobbled efforts at improving security with no gain in value.
(not to mention that @tedu found that the alternate codepath is clearly broken and untested. sigh.)
malloc has almost universally been slower than application-specific allocators for decades now on nearly every platform. Usually the difference isn’t big enough to be worth it, but often it is. Some current mallocs, such as my friend Jason’s jemalloc, are so fast that only very rare applications benefit from application-specific allocators, but they still aren’t in wide use. SSL implementations have been struggling to improve efficiency ever since SSL was introduced (to the point that there are multiple brands of hardware accelerators for SSL on the market) because if SSL is slow then people will switch to unencrypted HTTP.

I don’t know of anybody who’s using a modified malloc in production to improve security, although people have been using modified compilers that slow down function call and return in production for ten or fifteen years in order to improve security. @tedu’s writeup seems to show that using a modified malloc would not have prevented Heartbleed.
So, although I’m not very happy with OpenSSL, it seems to me that they made the right tradeoff in this case.
(Disclaimer: although I don’t have a personal connection with any of the members of the OpenSSL team, I did dance tango once with Kurt Roeckx, who is now no longer responsible for history’s biggest security bug, because Heartbleed is now worse.)
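To make the speed claim above concrete: for a single fixed object size, an application-specific allocator can make both allocation and free a single pointer pop or push, with no locking, no size-class lookup, and no per-allocation metadata. A minimal fixed-size pool sketch (all names here are mine, purely illustrative; it assumes objsize is a multiple of the platform’s alignment):

```c
#include <stdlib.h>

#define POOL_OBJS 1024

struct node { struct node *next; };

struct pool {
    unsigned char *arena;   /* one big malloc up front */
    struct node *free;      /* intrusive LIFO free list */
};

static int pool_init(struct pool *p, size_t objsize)
{
    size_t i;
    if (objsize < sizeof(struct node))
        objsize = sizeof(struct node);
    p->arena = malloc(objsize * POOL_OBJS);
    if (p->arena == NULL)
        return -1;
    p->free = NULL;
    for (i = 0; i < POOL_OBJS; i++) {
        struct node *n = (struct node *)(p->arena + i * objsize);
        n->next = p->free;      /* thread every slot onto the free list */
        p->free = n;
    }
    return 0;
}

static void *pool_alloc(struct pool *p)
{
    struct node *n = p->free;   /* O(1): pop the head */
    if (n != NULL)
        p->free = n->next;
    return n;
}

static void pool_free(struct pool *p, void *obj)
{
    struct node *n = obj;       /* O(1): push back on the head */
    n->next = p->free;
    p->free = n;
}

int main(void)
{
    struct pool p;
    if (pool_init(&p, 64) != 0)
        return 1;
    void *a = pool_alloc(&p);
    pool_free(&p, a);
    free(p.arena);
    return 0;
}
```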
Theo de Raadt disagrees with me. OpenBSD uses a modified malloc in production to improve security, and has for years. I should have known that.
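For example, OpenBSD’s malloc(3) can junk memory on free (e.g. the J option via /etc/malloc.conf), so a stale read returns garbage rather than leftover secrets. A deliberately broken sketch of the failure mode that catches (the read after free is undefined behavior, included only to show what the mitigation targets; on an allocator that leaves freed memory intact, it may well still print the “secret”):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *secret = malloc(32);
    if (secret == NULL)
        return 1;
    strcpy(secret, "hunter2");
    free(secret);
    printf("%s\n", secret);   /* use-after-free: junked vs. leaked */
    return 0;
}
```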
neat! would be slightly disappointing if that was the actual cover though. :(
adding xhci support wouldn’t be an awful project.
That is already being worked on by mpi@
Is that full xhci support at USB 3.0 speeds, or only the USB 2.0 backwards-compatible one like in NetBSD?
I’m not sure, I just remember seeing mpi say on ICB that he had something basic working already that could attach devices.
spiffy!