Always good to get more confirmation that Rust can go toe to toe with C, but without an analysis of why I assume it comes down to some minor algorithmic difference or trickery that the C implementation just lacks, rather than an artifact of the language design.
Not sure what you would like to see (dis)proven here; with enough man-hours spent optimizing, you can find the optimal machine code in either language.
A more interesting question is how much of that time is wasted on side quests like eliminating UB or bounds checks. The differentiator is rarely the theoretical maximum performance of C vs. Rust, but the performance you can achieve with a given amount of work. And that is clearly an artifact of language design.
Or side quests such as convincing the borrow checker that you do, in fact, know what you’re doing?
At least you know when you’re done with that side quest, unlike finding the UB you missed…
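To make the bounds-check side quest concrete, here is a minimal Rust sketch (the function names are illustrative, not from any codebase discussed here): an indexed loop compiles to checked accesses unless the optimizer can prove the index in range, while the iterator form has no index to check by construction.

```rust
// Indexed form: each `xs[i]` and `ys[i]` is a checked access; the
// optimizer can often hoist or eliminate the checks, but that is not
// guaranteed and verifying it is exactly the "side quest".
fn dot_indexed(xs: &[f64], ys: &[f64]) -> f64 {
    let n = xs.len().min(ys.len());
    let mut acc = 0.0;
    for i in 0..n {
        acc += xs[i] * ys[i];
    }
    acc
}

// Iterator form: `zip` stops at the shorter slice, so there is no
// index at all and nothing to prove to the optimizer.
fn dot_zipped(xs: &[f64], ys: &[f64]) -> f64 {
    xs.iter().zip(ys).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = [1.0, 2.0, 3.0];
    let b = [4.0, 5.0, 6.0];
    assert_eq!(dot_indexed(&a, &b), 32.0);
    assert_eq!(dot_zipped(&a, &b), 32.0);
}
```

Both forms are safe; the difference is only in how much work it takes to convince yourself (and the compiler) that the checks cost nothing.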
Once you learn how to work with ownership+borrowing, it’s not a big deal.
There is a lot of headbanging and frustration in the beginning when learning Rust, especially for C users who instinctively try to use Rust references as they would use pointers, but that goes away with practice.
All languages have side quests like this. They differ in the skill/time required to diagnose/fix them, in how often they come up, in how far they take you from the direct path… It’s hard to make a definitive assessment and everybody’s adventure will be different, but I think more and more people see the advantages of Rust.
Yes, but nearly all claims of “faster than C” come down to comparing two different algorithms rather than to a language difference. A disproof would be being able to attribute the difference to better codegen enabled by Rust’s aliasing semantics, but it’s almost never that.
It’s almost never that because there’s almost never a difference in codegen, and when there is one it’s tiny.
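For reference, the aliasing point is this guarantee (a sketch of the mechanism, not a claim about any particular benchmark): a C compiler must assume two pointer parameters may overlap unless the author writes `restrict`, while Rust’s `&mut` makes non-overlap a type-system guarantee that rustc can pass to LLVM as `noalias`.

```rust
// `dst: &mut [i32]` and `src: &[i32]` cannot overlap: a mutable borrow
// is exclusive, so rustc may tag both pointers `noalias` for LLVM.
// The C equivalent needs `restrict`, and nothing verifies the caller
// actually honors it.
fn add_into(dst: &mut [i32], src: &[i32]) {
    for (d, s) in dst.iter_mut().zip(src) {
        // The compiler may keep `*s` in a register: writes through `d`
        // cannot change what `src` points at.
        *d += *s;
    }
}

fn main() {
    let mut buf = [1, 2, 3, 4];
    // Two views into one buffer must go through `split_at_mut`, which
    // proves the halves are disjoint; passing overlapping slices
    // directly simply does not type-check.
    let (lo, hi) = buf.split_at_mut(2);
    add_into(lo, hi);
    assert_eq!(buf, [4, 6, 3, 4]);
}
```

Whether LLVM turns that extra information into measurably better code is a separate question, which is the point being made above.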
But focusing on that is IMHO missing the point of these stories: that using a safer language doesn’t mean you have to give up performance, and that given finite resources you might even get better performance with Rust because the better algorithms take more time / skill / maintenance to get right with C than Rust.
I think the thing being done here is not pointless navel-gazing about which language is faster. It’s just getting rid of the “but C is faster and my application is performance-intensive above all other concerns so I should use C” argument. Choosing a language based on presumed performance alone is already a bad judgement call, but at least that bad judgement call will err on the side of memory safety now.
It doesn’t dispel anything unless the algorithms are actually the same, that’s the point. If you write a bad algorithm in C, and I write a better one in Python and get better performance, that’s not evidence Python is faster than C.
The relative difficulty of making the code faster matters. With enough coaxing, almost any compiled language can be made almost arbitrarily fast. But keeping the code maintainable and not spending forever on it as well? That’s a tougher nut to crack.
Prediction is hard. As proof that one can do better than an ancient C impl without memory unsafety, Nim has had Zippy for almost 5 years. Deflate/inflate implementations have been very slow / far from the state of the art almost since their inception. Their popularity (or the uptake of any given implementation), like the popularity of anything, is not so easy to predict. Personally, I have preferred either pixz or zstd for multiple decades and was very grateful when kernel.org finally started distributing .xz tarballs that could be decompressed in parallel, which happened only very recently. So, we’ll see what happens.
I don’t think it’s the popularity of zlib-rs that matters here.
How does it compare to https://github.com/ebiggers/libdeflate?
Judging by the existing libdeflate vs zlib-ng benchmarks, it’s likely that libdeflate is also faster than zlib-rs.
But the big caveat is that libdeflate doesn’t have a streaming API, which limits the use cases and/or pushes the overhead to other components.
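The streaming caveat is about API shape rather than speed. A toy sketch of the difference, using trivial run-length encoding in place of deflate (everything here is illustrative, not libdeflate’s or zlib’s actual API): a one-shot API needs the whole input and output in memory at once, while a streaming API accepts input in chunks and carries state, such as a run that spans a chunk boundary, across calls.

```rust
// Streaming shape (zlib-style): input arrives in chunks, and encoder
// state (the current run) survives between calls.
#[derive(Default)]
struct RleEncoder {
    run: Option<(u8, u8)>, // (byte value, run length so far)
}

impl RleEncoder {
    fn write(&mut self, chunk: &[u8], out: &mut Vec<u8>) {
        for &b in chunk {
            match self.run {
                // Extend the current run.
                Some((v, n)) if v == b && n < u8::MAX => self.run = Some((v, n + 1)),
                // Flush the finished run as a (length, value) pair.
                Some((v, n)) => {
                    out.push(n);
                    out.push(v);
                    self.run = Some((b, 1));
                }
                None => self.run = Some((b, 1)),
            }
        }
    }

    fn finish(self, out: &mut Vec<u8>) {
        if let Some((v, n)) = self.run {
            out.push(n);
            out.push(v);
        }
    }
}

// One-shot shape (libdeflate-style): whole input in, whole output out.
// Trivial to build on top of the streaming form -- but not the other
// way around, which is why the missing streaming API is the caveat.
fn rle_encode_oneshot(input: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut enc = RleEncoder::default();
    enc.write(input, &mut out);
    enc.finish(&mut out);
    out
}

fn main() {
    // The same bytes fed in one call or in arbitrary chunks must
    // produce identical output; note the 'b' run spans the boundary.
    let mut streamed = Vec::new();
    let mut enc = RleEncoder::default();
    enc.write(b"aaab", &mut streamed);
    enc.write(b"bccc", &mut streamed);
    enc.finish(&mut streamed);
    assert_eq!(streamed, rle_encode_oneshot(b"aaabbccc"));
    assert_eq!(streamed, vec![3, b'a', 2, b'b', 3, b'c']);
}
```

Without the streaming form, a caller decompressing a multi-gigabyte file must either buffer everything or reinvent the chunking layer itself, which is the “pushing the overhead to other components” above.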