I just discovered ~const Trait and I’m not sure I really like the direction the language is going in now… I recognize this is needed, but the more we add stuff to Rust, the more I start to feel that it is getting closer and closer to C++.
There’s a tension between the things most developers want and what library authors want. On the one hand, Rust is “done” and has been for a while: you can write code to accomplish anything you want. On the other, there are blatantly missing things: accomplishing some niche tasks requires convoluted code that could melt away with more language features.
The trick is to make the features for the latter group unnecessary for the former in their day-to-day life. A crate author can use const bounds, but the consumer of that crate just uses it, and it either works without them even thinking about it, or they get a(n ideally) clear error telling them why they can’t in that context.
It is a hard conversation to have because every new feature in isolation pretty much always looks reasonable, but the problem is perceived only in aggregate. To try to combat this, RFCs are encouraged to imagine other requested features during design, so that we can try to have a coherent language. Maybe const bounds have some warts, which are pretty much born out of backwards-compatibility constraints. I still want them, though.
On the one hand, Rust is “done” and has been for a while
I don’t think this is true; the language doesn’t have async trait methods; the const issue here is real; strings should be usable at compile-time (it’s so useful in Zig); more generics should be allowed; I would like to have rank-2 types (i.e. HRTB for types, as in T: for<T: Trait> …).
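For context on that last wish: Rust today only has higher-ranked bounds over lifetimes; a for<T: Trait> bound over types does not exist. A minimal sketch of what does work (names here are made up for illustration):

```rust
// Higher-ranked trait bounds exist today, but only over lifetimes:
// this F must work for *every* lifetime 'a, not one chosen by the caller.
fn shortest<F>(pick: F) -> usize
where
    F: for<'a> Fn(&'a str) -> usize,
{
    pick("hi").min(pick("hello"))
}

fn main() {
    let n = shortest(|s| s.len());
    // The wished-for analogue over types, T: for<U: Trait> ..., is not
    // expressible today.
    assert_eq!(n, 2);
}
```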
A crate author can use const bounds, but to the consumer of that crate they just use it and it either works without them even thinking about it or get a(n ideally) clear error telling them why they can’t in that context.
I agree, but if you interact with such code in a company / a project you contribute to, the problem is still there.
Is it needed though?

Many other RFCs I’ve read have a section reasoning about “Status Quo”, i.e. what would happen if we don’t implement this feature. This RFC seems to lack that kind of reasoning.
That section is indeed probably needed as you mentioned.
I think advancing the state of const in the language is indeed required, as it allows us to discharge work from runtime to compile time. However, the example with Default and the free function is probably a bit skewed?
const fn default<T: Default>() -> T {
    T::default() // error: T::default() is not a const fn, so this does not compile
}
This doesn’t compile because Default::default() is not marked const. Hence the problem lies in the trait definition: an implementation might want to make the method const. What I’m not sure I agree with is why we should make the trait const instead of declaring an additional const method for it.
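On stable Rust today, the closest thing to that extra const entry point is an associated const; a sketch under that assumption (ConstDefault and default_of are made-up names, not an existing API):

```rust
// Hypothetical stand-in trait: an associated const plays the role of a
// "const default()" without touching the existing Default trait.
trait ConstDefault {
    const DEFAULT: Self;
}

impl ConstDefault for u32 {
    const DEFAULT: Self = 0;
}

// Unlike the free function above, this compiles: reading an associated
// const is allowed in const contexts, even through a generic parameter.
const fn default_of<T: ConstDefault>() -> T {
    T::DEFAULT
}

fn main() {
    const X: u32 = default_of(); // evaluated entirely at compile time
    assert_eq!(X, 0);
}
```

The downside, of course, is that every type needs a second impl, which is exactly the duplication const traits aim to remove.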
Or maybe we should make it so that runtime functions contain const functions: then, when you want to call Default::default() in a generic context, the type system needs to check that the call is actually implemented in terms of a const fn. Every function is, in theory, of both forms; expecting a fn and getting a const fn instead should be allowed by the compiler.

Note that making Default const is almost trivial: https://github.com/rust-lang/rust/pull/134628

Thanks, I understand the problem now. I guess I’m not clear on how much of Rust we want to evaluate at compile time?

I think the more we can do at compile-time, the better, especially for things like const generics (e.g. fn foo<const N: usize>()).
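On the “expecting a fn and getting a const fn” point: at the value level this already holds today, since a const fn is usable as an ordinary function and even coerces to a plain fn pointer. A quick check:

```rust
// A const fn is also usable as a plain runtime function...
const fn answer() -> u32 {
    42
}

// ...so it coerces to an ordinary fn pointer; only the trait-bound
// story (const Trait / ~const Trait) is still being designed.
fn call(f: fn() -> u32) -> u32 {
    f()
}

fn main() {
    const A: u32 = answer(); // compile-time use
    let b = call(answer);    // runtime use through a fn pointer
    assert_eq!(A, b);
}
```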
I’ve been making this case for a while. It is a similar approach to that of Erlang, and it has its risks (like people complaining that some libraries have access to features that others don’t, or foundational libraries taking on additional maintenance costs to account for every Rust version), but I believe the positives outweigh the negatives.
This is an interesting idea, but it does break the Rust stability guarantee, which says that updating the stable Rust compiler without updating any crates should not break compilation. I know this has been broken before, but preview crates would require you to update your crates when updating the compiler (assuming anything has changed with the preview feature).
The way preview crates require updating would also break with cargo -Z minimal-versions update.
Yeah, that seems like the biggest issue with the idea IMO. I like the idea of leaning on crates.io for stats, but having the implementation live outside the compiler, and thus subject to Cargo.lock, seems too problematic.
Maybe you can get the best of both by making the external crate reexport the macro from std, so what’s locked doesn’t matter: users will always use v1 until they update this crate. Basically the external crate could be something like:
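A toy illustration of that shim shape, with a real std macro standing in for the hypothetical preview export (the preview crate would contain nothing but the re-export, so the version pinned in Cargo.lock carries no logic of its own):

```rust
// Stand-in for the preview crate's entire contents: a bare re-export.
// std::vec! here plays the role of the hypothetical macro shipped in std.
pub use std::vec as preview_vec;

fn main() {
    // Downstream users call the re-exported name; the behavior comes
    // entirely from the std that ships with the compiler in use.
    let v = preview_vec![1, 2, 3];
    assert_eq!(v.len(), 3);
}
```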
I think that’s basically what this part of the post is suggesting?
But I figure we still handle it by actually having the preview functionality exposed by crates in the sysroot that ship along with the compiler. These crates would not be directly usable except by our blessed crates.io crates, but they would basically just be shims that expose the underlying stuff.
That’s not what I understand: I think the “actual” implementation (referred to just above your quote) is the lang feature, but the crate would still contain the macro implementation.
The reason for my interpretation is the “release 2.0” paragraph:
No problem, we release a 2.0 version of the crate and we also rewrite 1.0 to take in the tokens and invoke 2.0 using the semver trick.
This is only required if the crate contains the macro implementation, which is equivalent to saying the macro is subject to the lockfile.
I think Niko is just throwing a bunch of random ideas of how it might work and seeing what might stick. I mean, the part I quoted was under this question:
But would this actually work? What’s in that crate and what if it is not matched with the right version of the compiler?
And shipping sysroot crates with the compiler which actually use the compiler feature, while the crates on crates.io just wrap those sysroot crates, would in fact solve that problem, so I think that is what he means in that section.
I don’t think I want this. To me it seems like another instability factor. I want my crates to be on stable, have #![forbid(unsafe_code)] and a conservative MSRV policy. This seems like Yet Another Thing where some dependency I pull in might start using half-stable, semi-nightly features. I guess it could be OK if it were combined with some lint letting me forbid its use in any of my dependencies.
As an application developer: why not give me access to #[feature(const_item)] (for a subset of features) on stable? It’s pretty clear that the tradeoff is more work when you update, but this isn’t that bad? Seems like putting the maintenance burden on the rust project is a bit of a waste. Same for non-published crates.
As a (public) crate developer: Cool! But it’s hard to wrap my head around exposing preview features to my users, I think? What if I use preview v1 and a different dep uses preview v2?
What if I use preview v1 and a different dep uses preview v2?
It would be perfectly fine. They’d just be macros that translate a proposed syntax into something the compiler understands, and you could have an infinite number of different versions at the same time without issue.
Go has golang.org/x/exp for experimenting with standard library additions, but those don’t get access to extra compiler features AFAIK. And they are all (un)versioned together: you pin the module to a commit hash. So updating the module for one package, or due to version-conflict resolution, can cause breakage in another. In practice I haven’t seen that happen, though, so they’re likely not experimenting at the rate preview crates would.