There’s a similar tool called watchexec that I’ve used for a little while. watchexec works on macOS, Windows, and Linux, whereas I think entr only works on Linux (as far as I can tell).

entr works on BSD, macOS, and Linux. watchexec looks cool too, though.

I saw a comment from the watchexec author comparing watchexec with entr: https://news.ycombinator.com/item?id=23701078
It seems you can build from source on various platforms, and for macOS I also see a homebrew formula, but I haven’t tried it yet.
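For anyone weighing the two, basic usage differs as well: entr takes its watch list on stdin, while watchexec scans the tree itself and can filter by extension. Something like this (flags recalled from each tool’s docs rather than retested, so double-check before relying on them):

    ls *.c | entr make
    watchexec -e c -- make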
I find this interesting because this paper is from 1992, but it basically describes what today we call “offline first”. While it might be obvious that offline first is not a recent invention, it’s quite cool to see the use cases described in the paper and how they resonate with today’s actual offline-first marketing.
I think an “offline-first” mentality actually made more sense in 1992 than it does today. With ubiquitous LTE and Wi-Fi on airplanes and trains, I’m not sure “offline-first” design is justified most of the time. “We might occasionally see WAN partitions between DCs” is not at all the same as “offline-first”, as the paper points out.
I found this post to be annoyingly smug. Considering cache and VM behavior in algorithm design is obviously a good thing—but far from being ignored by academic CS, there are entire sub-fields dedicated to cache-oblivious data structures, and more generally to adapting data structures and system designs to the properties of modern hardware. The “conceptual model of a computer” he presents is appropriate for CS101, but any reasonable CS education will go into far more depth about how modern hardware actually works, and what that implies for software design.
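Concretely (my own illustration, not from the post): both loops below do the same O(n²) additions on paper, but on typical hardware the row-order walk is several times faster, because it touches memory sequentially while the column-order walk misses cache on nearly every access. That constant factor is exactly what the CS101 model hides.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 4096   /* 4096 x 4096 ints = 64 MB, far bigger than cache */

    int main(void) {
        int *m = malloc((size_t)N * N * sizeof *m);
        if (!m) return 1;
        for (size_t k = 0; k < (size_t)N * N; k++) m[k] = 1;

        /* Row order: consecutive accesses hit consecutive addresses,
           so cache lines and the prefetcher do their job. */
        clock_t t0 = clock();
        long rows = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                rows += m[(size_t)i * N + j];

        /* Column order: each access strides N ints, so nearly every
           access is a cache miss. Same "algorithm", same big-O. */
        clock_t t1 = clock();
        long cols = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                cols += m[(size_t)i * N + j];
        clock_t t2 = clock();

        printf("row order: %ld in %.2fs, column order: %ld in %.2fs\n",
               rows, (double)(t1 - t0) / CLOCKS_PER_SEC,
               cols, (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(m);
        return 0;
    }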
the most horrifying part of this blog entry: “Unless libssl was compiled with the OPENSSL_NO_BUF_FREELISTS option (it wasn’t), libssl will maintain its own freelist, rendering any possible mitigation strategy performed by malloc useless. Yes, OpenSSL includes its own builtin exploit mitigation mitigation.”
it terrifies me that we rely so heavily on openssl.
An application doing its own memory management on top of malloc() does not seem horrifying to me. (Not that I would argue that OpenSSL is non-horrifying in general.)
the question is why it feels it’s necessary. i don’t disagree fundamentally, but if the logic in openssl is that malloc was slow on some ancient version of VMS or something, then they’ve hobbled efforts at improving security with no gain in value.
(not to mention that @tedu found that the alternate codepath is clearly broken and untested. sigh.)
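To make tedu’s point concrete, here’s a minimal sketch (mine, not OpenSSL’s actual code) of the pattern in question: a fixed-size freelist layered on top of malloc(). Because “freed” buffers are recycled by the application and never passed back to free(), whatever hardening the system malloc performs on free (junk filling, guard pages, unmapping) never runs, and a recycled buffer comes back with most of its old contents still in it.

    #include <stdlib.h>
    #include <string.h>

    /* A single-size freelist in the style tedu describes: "freed"
       buffers are pushed onto a list and handed back verbatim on
       the next allocation, bypassing malloc/free entirely. */
    struct fl_node { struct fl_node *next; };

    static struct fl_node *freelist = NULL;
    #define BUF_SIZE 4096            /* hypothetical fixed buffer size */

    static void *buf_alloc(void) {
        if (freelist) {
            void *p = freelist;
            freelist = freelist->next;
            /* p still holds whatever its previous user left there;
               malloc's mitigations never saw this buffer. */
            return p;
        }
        return malloc(BUF_SIZE);
    }

    static void buf_free(void *p) {
        struct fl_node *n = p;       /* recycle instead of free(p) */
        n->next = freelist;
        freelist = n;
    }

    int main(void) {
        char *a = buf_alloc();
        strcpy(a, "secret key material");
        buf_free(a);                 /* never reaches free() */

        char *b = buf_alloc();       /* same buffer handed back */
        /* b == a, and apart from the first pointer-sized bytes
           (overwritten by the list link) its old contents are
           intact -- the kind of stale data Heartbleed leaked. */
        return b == a ? 0 : 1;
    }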
malloc has almost universally been slower than application-specific allocators for decades now on nearly every platform. Usually the difference isn’t big enough to be worth it, but often it is. Some current mallocs, such as my friend Jason’s jemalloc, are so fast that only very rare applications benefit from application-specific allocators, but they still aren’t in wide use. SSL implementations have been struggling to improve efficiency ever since SSL was introduced (to the point that there are multiple brands of hardware accelerators for SSL on the market) because if SSL is slow then people will switch to unencrypted HTTP.

I don’t know of anybody who’s using a modified malloc in production to improve security, although people have been using modified compilers that slow down function call and return in production for ten or fifteen years in order to improve security. @tedu’s writeup seems to show that using a modified malloc would not have prevented Heartbleed.
So, although I’m not very happy with OpenSSL, it seems to me that they made the right tradeoff in this case.
(Disclaimer: although I don’t have a personal connection with any of the members of the OpenSSL team, I did dance tango once with Kurt Roeckx, who is now no longer responsible for history’s biggest security bug, because Heartbleed is now worse.)
Theo de Raadt disagrees with me. OpenBSD uses a modified malloc in production to improve security, and has for years. I should have known that.