Good for them for putting the work in to understand and solve a problem, rather than just blindly Doing Something Cool that people claim will help.
Honestly, based on the article, it feels like the meme is just there for the title and the opening sentence, not something they were actually planning to do.
On the face of it, it seems silly … I can’t imagine working in any production situation where the solution to performance problems is to rewrite the whole codebase in a different language, rather than … profile the code and optimize it, which is what the article is about.
Optimizing memory usage is an extremely old and familiar problem, especially because Go gives you a memory model like garbage-collected C, so all your intuition from that language carries over. It’s basically like reducing the number of allocations and the memory usage of a C or C++ program. And they’re even using pprof, which Go borrowed from Google’s C++ tools.
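For anyone who hasn’t wired it up before, getting that kind of heap data out of a long-running Go service is usually just a matter of importing the stdlib net/http/pprof package. A minimal sketch (the 6060 side port is only a convention, nothing from the article):

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
    )

    func main() {
        // Expose the profiling endpoints on a side port; the heap profile can
        // then be pulled and inspected with the standard tooling, e.g.:
        //   go tool pprof http://localhost:6060/debug/pprof/heap
        log.Fatal(http.ListenAndServe("localhost:6060", nil))
    }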
It seems like the Rust team is tired of this meme too.
I agree; lately I’ve been feeling like a lot of the “Let’s rewrite it in Rust” push is creating a fog of hype more than anything.
Something closely related I ran into several years back: we were caching a response from a network server (a []byte) in a local LRU cache. The network client knew the size of the response it was going to read, but it was using ioutil.ReadAll with an io.LimitReader to read the data instead of using io.ReadFull with an appropriately sized slice. Since ReadAll doesn’t know how much data it’s going to have to read, it uses append to grow the slice, and append does the classical exponential growth thing so that it doesn’t spend too much time reallocating.

The problem: when you retain a []byte, you retain its underlying array as well, and because of append, all of those underlying arrays were oversized by up to 100%, so the cache was using more memory than it needed to for the data it was holding. Fixing the client let us cache more data in the same amount of real RAM.

Copying the data out of the []byte the client gave us into a right-sized []byte would have worked too, but patching the client was fortunately easy, and less allocation and less copying is more better. :)

I should have said: when you retain any slice, you retain its (entire) underlying array. It’s not specific to byte slices in any way :)
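To make the failure mode concrete, here’s a rough sketch of the three options described above. The 3000-byte size and the bytes.Reader standing in for the network connection are made up for illustration; the point is just the len/cap mismatch you get from ReadAll versus a right-sized read or a copy:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "io/ioutil"
    )

    func main() {
        const size = 3000 // pretend this is the response length the client already knows

        // What the client was doing: ReadAll can't know the final size, so it grows
        // the slice with append, and the retained backing array ends up larger than
        // the data it actually holds.
        src := bytes.NewReader(make([]byte, size)) // stand-in for the network connection
        oversized, err := ioutil.ReadAll(io.LimitReader(src, size))
        if err != nil {
            panic(err)
        }
        fmt.Println(len(oversized), cap(oversized)) // cap is typically well above len

        // The fix: allocate exactly size bytes up front and fill them.
        exact := make([]byte, size)
        if _, err := io.ReadFull(bytes.NewReader(make([]byte, size)), exact); err != nil {
            panic(err)
        }
        fmt.Println(len(exact), cap(exact)) // 3000 3000

        // The workaround mentioned above: copy the oversized result into a
        // right-sized slice before caching, so only len(data) bytes are retained.
        trimmed := make([]byte, len(oversized))
        copy(trimmed, oversized)
        fmt.Println(len(trimmed), cap(trimmed)) // 3000 3000
    }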
Thanks, love this kind of writeup!
The premise is somewhat analogous to not just switching distros when your new laptop’s hardware isn’t properly supported by the one you’ve favoured for the past few years ;)