An important point that’s getting downplayed here is that this isn’t just a Firefox feature. It’s a library that you can pull into any Cargo-based project and that provides header files and bindings to C. We’ve already seen how this library-ification of “Firefox” features can help other projects when librsvg adopted Mozilla’s CSS implementation, and we’re seeing nearly ubiquitous* use of Servo-owned libraries like url.

If you like using apps that interoperate with the web without being implemented as browser apps, then this should be really good news in general. It gives you access to web standards without having to write your app in JavaScript and without having to implement it all yourself.

* Across the ecosystem of Rust applications, of course. Firefox is about the only C++ application I know of that directly depends on rust-url.
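To make that concrete: from the Rust side, pulling in the same URL parser Firefox uses is an ordinary Cargo dependency. A minimal sketch (the url crate is real; the example itself is illustrative, not taken from Mozilla’s docs):

```rust
// Cargo.toml:
// [dependencies]
// url = "2"

use url::Url;

fn main() -> Result<(), url::ParseError> {
    // Parse a URL with the same library Servo and Firefox rely on.
    let u = Url::parse("https://example.com/search?q=rust#results")?;

    assert_eq!(u.scheme(), "https");
    assert_eq!(u.host_str(), Some("example.com"));
    assert_eq!(u.path(), "/search");

    // Relative resolution follows the WHATWG URL standard.
    let next = u.join("/about")?;
    println!("{}", next); // prints: https://example.com/about

    Ok(())
}
```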
[Comment from banned user removed]
This is a pretty uncharitable reading. Indeed, we work together with all package maintainers that actually get in touch with us and want help. We have very good experiences with the Debian folks, for example.
LLVM is a major block in this, but there’s only so much we can do. Work on new backends is underway, which may ease the pain. When Rust started, GCC was also not in a state where you would want to write a frontend for it the way you could for LLVM. I would love to see a rustc backed by GCC or cranelift, and for rustc to reach a state that makes writing codegen backends easier (this is an explicit goal of the project).
“Bad citizen” implies that we don’t appreciate those problems, but we’ve all got limited hands and improvements are gradual. Indeed, Rust has frequently been the driver behind new LLVM backends and has driven LLVM to support targets outside of the Apple/Google/mobile spectrum that LLVM is traditionally aimed at. It’s not like people who can write a good-quality LLVM backend are easy to find and to motivate to do that in their free time. A lot of backends need vendor support to be properly implemented. We actively speak to those vendors, but hey, the week has a limited number of hours and it’s not like vendor negotiation is something you want to do in your free time either.
librsvg did weigh these pros and cons and decided that the modularity and the ability to use third-party packages are worth making the jump. These moves are important, because no one will put effort behind porting a programming language without some form of pain. Projects like this become the motivation.
I never said or implied that Rust’s developers or users have anything but good intentions. “Bad citizen” doesn’t mean that you don’t appreciate the problems; it means that the problems exist.
Like it or not, Rust doesn’t play well with system package management. Firefox, for example, requires the latest stable Rust to compile, which means that systems either need to upgrade their Rust regularly or not update Firefox regularly. Neither is a good option. Upgrading Rust regularly means having to test and verify every Rust program in a distro’s repositories every six weeks, which becomes a bigger and bigger effort as more and more packages are written in Rust. What happens if one of them no longer works with newer stable Rusts? It’s not like Rust is committed to 100% backwards compatibility. And not upgrading Firefox means missing out on critical security fixes.
Rust needs a proper distinction between its package manager and its build system. Conflating them both into a Cargo-shaped amorphous blob is harmful.
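For readers unfamiliar with the current tooling, the pieces that exist today for pulling the “download packages” step apart from the “build” step look roughly like this; it is the kind of setup distro packagers lean on with cargo vendor. A sketch only: the crate name and version numbers are made up, though the config shape matches what cargo vendor produces.

```toml
# Cargo.toml (upstream crate): declare the oldest toolchain it claims to build
# with, so a packager with a pinned rustc can tell up front whether it works.
[package]
name = "example-crate"   # hypothetical crate
version = "0.1.0"
edition = "2018"
rust-version = "1.56"    # minimum supported Rust version

# .cargo/config.toml (in the distro build tree): resolve all dependencies from
# a pre-fetched ./vendor directory so the build itself never touches the network.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```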
Just don’t depend on LLVM in the first place. Most compiled languages have their own backends. It’s not like Rust has saved any real effort in the long term, as they’re having to reimplement a whole lot of optimisation at the MIR level anyway to avoid generating so much LLVM IR.
That’s just arrogant imo.
Because rust would definitely support more CPU architectures if they had to build out the entire backend themselves. Just like DMD and the Go reference implementation, both of which lack support for architectures like m68k (the one that caused so much drama at Debian to begin with) and embedded stuff like avr and pic.
I’d be more interested in a compile-to-C version, or a GCC version. Those might actually solve the problem.
@milesrout could you point to some examples that back up your comments on librsvg being held back to the non-Rust version on those platforms, and perhaps some commentary / posts from package maintainers on the subject? Not agreeing or disagreeing here, I just want to get some insight from package maintainers before I form an opinion on the subject.

On the topic of compile times, I agree that LLVM is a blessing and a curse. Clang is my preferred C/C++ compiler, but I have noticed that compiling with -O2 or -O3 on projects even as small as ~10kloc takes substantial time compared to a compiler such as TCC. Sure, the generated machine code is much more performant, but I do not think the run-time performance is always worth the build-time cost (for my use cases at least). I haven’t written enough Rust to know how this translates over to rustc, but I imagine that a lot of the same slowness in optimizing C++ template instantiations would appear in Rust generics.

FYI (I’m familiar with the internals there): what you want for quick compiles is to not run -O2 or -O3.
Clang (and LLVM) emphasise speed very much, but they do include some very expensive analysis and transformation code, and some even more expensive sanity checks (I’ve seen one week-long compile). If you want quick compiles, don’t run the passes that do the slow work; TCC simply doesn’t run such slow extra passes. If you want fast output (or various other things) rather than quick compiles, then do run the appropriate extra passes.
If you want both, you’re out of luck, because doing over 30 kinds of analysis isn’t quick.
Hey @arnt, thanks for the reply.
I understand that -O2 and -O3 trade off compilation speed for additional optimization passes. Perhaps TCC was actually a bad example because it doesn’t have options for higher levels of optimization.

My issue is more that the actual time taken by those additional passes is extraordinarily high, period, and that if one wants to make meaningful changes to performance-sensitive code, the rebuild process leads to a less-than-stellar user experience. The LLVM team has done amazing work that has led to absolutely astounding performance in modern C/C++ code. There are much smarter people than I who have evaluated the trade-offs involved in the higher optimization levels, and I trust their judgement. It is just that, for me, the huge bump in compilation time from -O0 or -O1 to -O2, -O3, or -Ofast is painful, and I wish there was an easier path to getting middle-of-the-road performance for release builds with good compilation times.
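On the Rust side of that wish, at least, Cargo lets you dial the optimization level per build profile, so a middle-of-the-road release build is one setting away. A sketch with an illustrative value, not a recommendation:

```toml
# Cargo.toml: tone down the release profile for faster rebuilds.
# opt-level = 1 keeps the cheap optimizations while skipping the most
# expensive LLVM passes; 2 or 3 restore full optimization.
[profile.release]
opt-level = 1
```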
You have performance-sensitive code in .h files, so very many files need to be recompiled? BTDT and I feel your pain.

In my own compiler (not C++) I’m taking some care to minimise the problem by working on a more fine-grained level, to the degree that is possible, which is nonzero but…
One of the major LLVM teams uses a shared build farm: A file is generally only recompiled by one person. The rest of the team will just get the compiled output, because the shared cache maps the input file to the correct output. This makes a lot of sense to me — large programs are typically maintained by large teams, so the solution matches the problem quite well.
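For anyone who wants something like that without a dedicated build farm, shared compilation caches cover a lot of the same ground. For Rust, pointing Cargo at a caching wrapper such as Mozilla’s sccache looks roughly like this (a sketch, assuming sccache is installed and on the PATH):

```toml
# .cargo/config.toml: route every rustc invocation through a caching wrapper,
# so unchanged crates come back from a local or shared cache instead of being
# recompiled from scratch.
[build]
rustc-wrapper = "sccache"
```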