This is honestly the only thing that’s been holding me back from making anything in Rust. Now that it’s going into GCC there’s probably going to be a spec and hopefully slower and more stable development. I don’t know what’s going to come after Rust, but I can’t find much of a reason not to jump ship from C++ anymore.
I doubt a new GCC frontend will be the reason a spec emerges. I would expect a spec to result from the needs of the safety and certification industry (and there already are efforts in that direction: https://ferrous-systems.com/blog/ferrocene-language-specification/ ) instead.
Thanks for highlighting that. We’re well on track to hit the committed release date (we’re in final polish, mainly making sure that the writing can be contributed to).
As per usual, slower and more stable development can be had by using the version of Rust in your OS instead of whatever bleeding-edge version upstream is shipping…
Unless one of your dependencies starts using new features as soon as possible.
Which is the exact same problem even when using GCC Rust, so it’s not really a relevant argument.
Stick with an old version of the dependency?
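Cargo supports that directly, for what it’s worth. A minimal sketch (the package name and the pinned serde version are just illustrative): an exact “=” requirement freezes a dependency, and the rust-version field declares the minimum supported Rust version, so Cargo refuses to build with an older toolchain.

    [package]
    name = "my-crate"        # illustrative name
    version = "0.1.0"
    edition = "2021"
    rust-version = "1.60"    # MSRV: cargo refuses to build with an older toolchain

    [dependencies]
    serde = "=1.0.136"       # '=' pins this exact version rather than a semver range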
Let’s be honest: Rust uses an evergreen policy, the ecosystem and tooling follow it, and fighting it is needless pain.
I still recommend updating the compiler regularly. However, you don’t have to read the release notes. Just ignore whatever they say and continue writing code the way you used to; Rust keeps backwards compatibility.
Also, I’d like to highlight that release cadence has very little to do with the speed of language evolution or its stability. Rust features still take years to develop, and they’re simply shipped in whichever release comes next once they’re ready. The cadence says nothing about the number and scale of changes being developed.
It’s like complaining that a pizza cut into 16 slices has too many calories, and saying you’d prefer it cut into 4 slices instead.
The time it takes to stabilize a feature doesn’t really matter, though, if there are many, many features in the pipeline at all times.
Yup, that’s what I’m saying. The number of features in the pipeline is unrelated to release frequency. Rust could have a new stable release every day, and it wouldn’t give it more or fewer features.
Do that, and now you’re responsible for doing security back-ports of every dependency. That’s potentially a lot more expensive than tracking newer releases.
So then don’t do that and track the newer releases. Life is a series of tradeoffs, pick some.
It just seems like a weird sense of entitlement at work here: “I don’t want to use the latest version of the compiler, and I don’t want to use older versions of dependencies because I don’t want to do any work to keep those dependencies secure. Instead I want the entire world to adopt my pace, regardless of what they’d prefer.”
The problem with that view is that it devalues the whole ecosystem. You have two choices:
Pay a cost to keep updating your code because it breaks with newer compilers.
Pay a cost to back-port security fixes because the new versions of your dependencies have moved to an incompatible version of the language.
If these are the only choices then you have to pick one, but there’s always an implicit third choice:
Pick an ecosystem that values long-term stability.
To give a couple of examples from projects that I’ve worked on:
FreeBSD maintains very strong binary compatibility guarantees for C code. Kernel modules are expected to work with newer kernels within the same major revision, and folks have to add padding to structures if they are going to want to add fields later on. Userspace libraries in the base system all use symbol versioning, so functions can be deprecated, replaced with compat versions, and then hidden for linking by new programs. The C and C++ standards have both put a lot of effort into backwards compatibility. C++11 did have some syntactic breaks, but they were fairly easy to fix mechanically (the main one was the introduction of user-defined string literals, which meant that you needed to insert spaces between string literals and macros in old code). Generally, I can compile 10-20-year-old code with the latest libraries and expect it to work. I can still compile C89 code with a C11 compiler. C23 will break C89 code that relies on some K&R features that were deprecated in 1989.
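The struct-padding idea carries over to any language that can expose a C-compatible layout. A tiny hypothetical sketch in Rust (the type and field names are made up): reserve space up front so later fields don’t change the size and offsets that existing consumers were built against.

    // Hypothetical public ABI struct; layout is fixed by #[repr(C)].
    #[repr(C)]
    pub struct DeviceInfo {
        pub id: u32,
        pub flags: u32,
        // Reserved space so future fields can be added without changing the
        // struct size and field offsets older consumers were compiled against.
        _reserved: [u64; 4],
    }

    fn main() {
        // The size stays stable as long as new fields only consume reserved space.
        println!("size = {}", std::mem::size_of::<DeviceInfo>());
    }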
Moving away from systems code and towards applications: GNUstep uses Objective-C, which uses late binding by default and (for the last 15 years or so) even extends this to instance variables (fields) in objects, so you don’t even have an ABI break if a library adds a field to a class that you subclass. Apple has been a bit more aggressive about deprecating things in their OpenStep implementation (Cocoa), but there are quite a few projects still around that started in 1988 as NeXTSTEP apps and have gradually evolved into modern macOS / iOS apps, with a multi-year window to fix the use of features that were removed or redesigned in newer versions of Cocoa. You can still compile a program with Xcode today that will run linked against a version of the Cocoa frameworks from an OS release several years old.
The entitlement that you mention cuts both ways. If an ecosystem is saying ‘whatever you do, it’s going to be expensive, please come and contribute to the value of this ecosystem by releasing software in it!’ then my reaction will be ‘no thanks, I’ll keep contributing to places that value long-term stability because I want to spend my time adding new features, not playing catch up’.
LLVM has the same rapid-code-churn view of the world as Rust, and it costs the ecosystem a lot. There are a huge number of interesting features that were implemented on forks and couldn’t be upstreamed because the codebase had churned so much underneath them that updating was too much work for the authors.
Corroding codebases! This was my reason too for not switching from C++. Only last week I was thinking of dlang’s -betterC for my little “system programming” projects. It is now hard to ignore Rust. Perhaps after one last attempt at learning ATS.
Is this the same project as this? https://rust-gcc.github.io/
According to this website, the borrow checker isn’t implemented yet, so I’m not sure this frontend will influence anything without the borrow checker.
From the original email announcing the intention to upstream, by Philip Herron (emphasis mine):
[…] my current project plan brings us to November 2022 where we (unexpected events permitting) should be able to support valid Rust code targeting Rustc version ~1.40 and reuse libcore, liballoc and libstd. This date does not account for the borrow checker feature and the proc macro crate, which we have a plan to implement, but this will be a further six-month project.
It’s important to note that GCC Rust will be initially marked as “beta”.
https://github.com/Rust-GCC/gccrs/wiki/Frequently-Asked-Questions#mitigation-for-borrow-checking
They have a plan around it, though; it’s described at the FAQ link above.
(Polonius, for the uninitiated, is the next-generation borrow checker for rustc that’s currently available as a library.)
As I understand it (correct me if I’m wrong), Polonius is still both unfinished and very slow.
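For anyone wondering what is actually lost without that pass: the borrow checker is what rejects aliasing and lifetime errors at compile time. A small sketch of code rustc refuses (error E0502) but that a frontend without borrow checking would accept as-is:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];   // immutable borrow of `v`
        v.push(4);           // rustc: error[E0502] — cannot borrow `v` as mutable
                             // while it is also borrowed as immutable; the push
                             // may reallocate and leave `first` dangling
        println!("{}", first);
    }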
As far as I’m aware, it will influence one particular thing: platform support. A significant number of embedded toolchains are still GCC-based rather than LLVM-based, and a number of niche architectures have a better-supported GCC backend (or a GCC backend where LLVM may not have one at all). That said, for esoteric architectures Rust itself may not be suitable due to guarantees the language requires (for example, the requirement that signed integer types in release builds either panic or wrap around on overflow more or less requires a two’s-complement implementation for performant code).
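To make that concrete, a short sketch of Rust’s specified overflow behaviour: debug builds panic on overflow, release builds wrap in two’s complement by default (the overflow-checks profile setting can re-enable the panics), and explicit wrapping/checked operations are always available.

    fn main() {
        let x: i32 = i32::MAX;

        // Always well-defined, regardless of build profile:
        assert_eq!(x.wrapping_add(1), i32::MIN); // two's-complement wrap
        assert_eq!(x.checked_add(1), None);      // overflow reported instead of UB

        // A plain `x + 1` would panic in debug builds (overflow checks on)
        // and wrap in release builds unless `overflow-checks = true` is set.
    }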
This is no reason to build a new Rust frontend. GCC backend work is already underway and further along IIRC.
Clearly there is not consensus on that part, otherwise the work would not be receiving direct funding.
This is good stuff. Together with Rust for Linux, hopefully it will make the toolchain situation saner.
In what respect is the current Rust toolchain not sane?
See the comment above about Rust’s evergreen policy. Obviously, for the Linux kernel to take in Rust code, that shouldn’t be the case.
Note that Firefox already uses Rust, Firefox ESR (Extended Support Release) exists, and Firefox ESR pins (does not update) its Rust toolchain, in effect using an old Rust toolchain for its entire support period. It is not like this is uncharted territory. I expect Rust for Linux to be unproblematic even if it happens today, just as it is unproblematic for Firefox.
Firefox is not a great example. Last time I checked they used unreleased Rust features, which made them depend not on just a recent-enough Rust version, but tied them to specific nightly Rust builds.
Currently Rust support in Linux also uses some nightly features. Hopefully these features will get stabilized soon, so that Linux will be able to use stable versions and be more flexible about Rust version requirements.
So it’s a funny situation where commenters often say Rust is releasing things too fast, but for the two major Rust projects Rust isn’t releasing features fast enough.
It is a funny situation when a language as hyped as Rust can’t return a simple error when memory allocation fails. Linux requested this; that’s why it is using nightly.
Your information is out of date, and the issue you mention has been blown out of proportion. Early versions of Rust for Linux cut corners by using container types from the alloc library designed for userland (where overcommit gets in the way of OOM handling). These containers have since been replaced with kernel-compatible ones.
Rust for Linux is using nightly Rust to get access to many new core language features, such as generic associated types and associated type defaults, custom fat pointer metadata and control over dyn dispatch, coercions for unsized types, and several features for compile-time const eval.
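For what it’s worth, simple fallible allocation no longer requires nightly in userland code: Vec::try_reserve has been stable since Rust 1.57 and reports allocation failure as a Result instead of aborting. A minimal sketch (the helper function is made up for illustration):

    use std::collections::TryReserveError;

    // Try to pre-allocate space for `n` more bytes, reporting failure
    // to the caller instead of aborting the process.
    fn try_push_many(v: &mut Vec<u8>, n: usize) -> Result<(), TryReserveError> {
        v.try_reserve(n)?;                        // Err on allocation failure
        v.extend(std::iter::repeat(0).take(n));   // cannot reallocate now
        Ok(())
    }

    fn main() {
        let mut v = Vec::new();
        match try_push_many(&mut v, 1024) {
            Ok(()) => println!("allocated, len = {}", v.len()),
            Err(e) => eprintln!("allocation failed: {}", e),
        }
    }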