Forbidding viral annotations in combination with forbidding heavy annotations means they plan to do essentially nothing that doesn’t exist today. Even the lifetime safety profile with GSL attributes they proposed a few years ago doesn’t meet these criteria. At best they want to standardize existing common practice.
Their definition of heavy annotations is especially crazy to me. I recently added [[clang::lifetimebound]] to a codebase at a much higher frequency than 1/1,000 lines. [[gsl::Owner]] and [[gsl::Pointer]] might also be more frequent than that, and the pointer-zapping proposal could be too. They also voted to put this stuff in C++26, so.
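For anyone who hasn’t used it, here is a minimal sketch of what that annotation looks like in practice (Widget is a made-up type for illustration):

```cpp
#include <string>

// [[clang::lifetimebound]], placed after the member function's cv-qualifiers,
// ties the returned reference to the lifetime of *this: with Clang, binding
// the result of a temporary's name() to a long-lived reference triggers a
// dangling-reference warning at the call site. Other compilers ignore the
// attribute, so the annotation is harmless to portability.
struct Widget {
    std::string name_;
    const std::string& name() const [[clang::lifetimebound]] { return name_; }
};

// const std::string& bad = Widget{"tmp"}.name();  // Clang warns: dangling
```

Note the density point: every accessor that hands out a reference wants one of these, which is how the 1/1,000-lines budget gets blown quickly.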
Even the lifetime safety profile with GSL attributes they proposed a few years ago doesn’t meet these criteria
Well, it’s an example of something that was too heavy to actually be usable in practice in most codebases. If I look on GitHub code search there are fewer than 2k hits for gsl::owner, and most seem to be pages and pages of forks of the VS Code docs. That’s a resounding failure considering that gsl::owner has been there since the very first commit of GSL in 2015 (https://github.com/tiagomacarios/GSL/commit/a9dcbe04ff330ef8297191d19951d4a313b2115a), a mere three months after the Rust 1.0 release, and there are 5.18 million C++ repositories on GitHub (https://api.github.com/search/repositories?q=language:cpp), most of them new projects started after GSL’s introduction.
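For context, the rejected annotation is almost invisibly lightweight. A minimal self-contained stand-in for the GSL alias, with a hypothetical make_buffer function, looks like this:

```cpp
// In the real GSL, gsl::owner is just an alias — template<class T> using
// owner = T; — with zero runtime effect. It only marks a raw pointer as
// owning, so static analyzers (e.g. clang-tidy's
// cppcoreguidelines-owning-memory check) can flag leaks and double-deletes.
// Minimal stand-in for the alias:
namespace gsl { template <class T> using owner = T; }

// The annotation documents that the caller takes responsibility for delete[].
gsl::owner<int*> make_buffer(int n) { return new int[n](); }
```

Even at this near-zero cost of adoption, it went essentially unused, which is what makes it a telling data point.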
So any solution that has this level of intrusiveness in C++ just won’t solve the problem as it simply will not get used.
This is the cognitive dissonance for me. It seems like anything that will solve the problem simply will not get used, but people act like they’re solving the problem anyway.
I guess the most interesting part is the potential conflict between the desire for a memory-safe C++ subset and the following design principles. They seem to rule out annotations for safe functions and lifetime annotations, both found in Rust:
3.3 Adoptability: Do not add a feature that requires viral annotation
Example, “viral downward”: We should not add a feature of the form “I can’t use it on this function/class without first using it on all the functions/classes it uses.” That would require bottom-up adoption, and has never been successful at scale in any language. For example, we should not require a safe function annotation that has the semantics that a safe function can only call other safe functions.
Example, “viral upward”: We should not add a feature of the form “I can’t use it on this function/class without requiring all the functions/classes that use it to add it too.” For example, Java checked exception annotations require listing the types that a function can throw (see also 3.5), which creates a usability barrier for every caller of that function to include those lists in its checked exception type list too; in practice the feature is not widely used, and programmers effectively opt out and disable it by writing throws Exception on callers.
3.4 Adoptability: Do not add a feature that requires heavy annotation
“Heavy” means something like “more than 1 annotation per 1,000 lines of code.” Even when they provide significant advantages, such annotation-based systems have never been successfully adopted at scale, except sometimes intra-company when forced by top-down executive mandate (e.g., when Microsoft internally required SAL annotations be used; but external customers haven’t adopted SAL at scale).
Yes, those were the only two principles that gave me pause. I understand the practical argument against adding features that (evidence shows) people won’t use … but it’s hard to see a path towards full safety that doesn’t involve these.
Rust, as you point out, is an interesting contrast. Developers enthusiastically adopted a new language that required these sorts of annotations, because they wanted that combination of systems programming and safety. Now, psychologically, does that adoption require a new language, or can it happen with an existing one?
I can think of possible counterexamples from the past: const pointers and rvalue references. Const is pretty viral, but very useful for safety. I don’t remember when it was added to C, but I remember adding it to code and discovering how it made me have to add it elsewhere, and deciding that was a Good Thing because it made me think more clearly about the design. Rvalue refs are (mostly) just a performance benefit, but people enthusiastically adopted them even though it required typing more ampersands and mysterious “move()” calls.
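A minimal sketch of the const virality described above (Doc and length are hypothetical names): marking one function const forces the same promise onto everything it calls.

```cpp
#include <cstddef>
#include <string>

// Once one function promises not to mutate its argument, everything it
// calls must make the same promise: const spreads through the call graph.
struct Doc {
    std::string text;
    // must itself be marked const, or length() below fails to compile
    std::size_t size() const { return text.size(); }
};

std::size_t length(const Doc& d) {  // takes a const reference...
    return d.size();                // ...so it may only call const members
}
```

This is exactly the “I can’t use it here without first using it on everything below” shape that principle 3.3 forbids, yet const did get adopted.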
ANSI C introduced const. The other big one (of course) was function prototypes, which are not viral but are certainly heavy by Sutter’s metric. And prototypes were, I think, pretty popular: by the time I got online in the mid-1990s, people were only coding in pre-standard C under duress, and backwards-compatibility hacks like _P() macros were disappearing fast. I re-read Philip Hazel’s memoir, which complained about Smail’s pre-standard C: “The lack of function prototypes had directly caused at least one serious bug.”
My computer graphics teacher in college was a fan of SAL, and he strongly encouraged students to use it, but I don’t think any of us actually did outside of the header file he gave us to get an assignment started. I don’t think I’ve seen it in use anywhere else since then. I’ve seen more compatibility layers for SAL on Linux than I’ve seen programs using SAL.
I’m annoyed because there’s a lot of stuff that C++ actually does well compared to Rust, where Rust development on the equivalent features has been stuck for years, but the committee seems determined to make itself obsolete.
Personally, I’m annoyed by the non-composability of Rust abstractions. You can’t supply const generics with the result of const functions, you can’t pass functions or lambdas as const generics, you can’t express functors with a homogeneous interface to functions, and you can’t program the expansion of proc macros using types or constant evaluation. As a result, a lot of real crates that exist to solve these problems have very limited usefulness compared to similar features in C++, and patterns that are very useful for implementing zero-overhead abstractions (like std::conditional_t) aren’t available in Rust. Rust is also a very non-variadic language, and the macro workarounds I’ve seen really aren’t good enough in my opinion, because they don’t compose well with other variadics or traits. Const functions are also very underpowered in Rust compared to other low-level languages besides C++, including D, Nim, and Zig. Rust doesn’t even have type introspection at the level of C++, which itself is arguably worse than those three languages in that regard. These aspects of C++ have also been improving at a decent pace so far, while Rust’s have barely moved since 1.0 in my opinion.
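To make the std::conditional_t point concrete, here is the kind of zero-overhead pattern meant above: choosing a storage type from a compile-time constant (smallest_uint is a hypothetical name for illustration).

```cpp
#include <cstddef>
#include <cstdint>
#include <type_traits>

// Pick the narrowest unsigned type that can hold MaxValue. The choice is
// made entirely at compile time and costs nothing at runtime.
template <std::size_t MaxValue>
using smallest_uint =
    std::conditional_t<MaxValue <= UINT8_MAX,  std::uint8_t,
    std::conditional_t<MaxValue <= UINT16_MAX, std::uint16_t,
    std::conditional_t<MaxValue <= UINT32_MAX, std::uint32_t,
                                               std::uint64_t>>>;

static_assert(sizeof(smallest_uint<200>) == 1, "fits in a byte");
static_assert(sizeof(smallest_uint<70000>) == 4, "needs 32 bits");
```

In Rust the closest equivalents go through traits and associated types, which cannot branch on an arbitrary constant expression this directly.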
The orphan rule often makes trait-based solutions much less appealing than the generic- or reflection-based solutions that could exist (but don’t), and I think in practice the result is that Rust developers choose between programming at a lower level than they otherwise could and accepting non-zero-overhead abstractions that could be better. I often point at controlled_option because it’s a great case of something that is so easy in C++ that even I’ve done it, but it’s impossible in Rust without traits; even without the orphan rule, Rust would need strong type aliases for it to be equally powerful. Dynamically chained logic and arithmetic crates, and automatic differentiation, are other good examples (Clang also has a language-integrated autodiff plugin). You also can’t really express a nice custom integer hierarchy in Rust, because it absolutely forbids implicit conversions on all user-defined types. I also think C++ has far better SIMD libraries than Rust, plus OpenMP, which is a really nice technology.
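A sketch of the implicit-conversion point: in C++ a user-defined wrapper can opt in to implicit widening, so its values flow into code expecting a plain integer (Meters and twice are hypothetical names; Rust user-defined types have no equivalent of this opt-in).

```cpp
#include <cstdint>

// A strong unit type that still interoperates with plain-integer APIs,
// because it declares an implicit conversion operator back to int64_t.
struct Meters {
    std::int64_t v;
    constexpr Meters(std::int64_t x) : v(x) {}
    constexpr operator std::int64_t() const { return v; }  // implicit widening out
};

constexpr std::int64_t twice(std::int64_t x) { return 2 * x; }
// twice(Meters{21}) compiles via the implicit conversion.
```

Whether implicit conversions are a good idea is a separate argument; the point here is only that C++ lets the type author choose, and Rust does not.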
Lesser notes: GCC and Clang have great, under-utilized plugin systems; nothing equivalent exists in Rust (it used to, but was removed). I also personally think static analysis is better in C++, though it’s not really amazing in either language. The same goes for REPLs for C++ and Rust. There are other little nitpicks, like pointer and vtable authentication, a new Clang feature I’m excited about, but these are my main complaints about Rust. I’m not saying it’s a bad language or that people shouldn’t be using it, though.
Not your parent, but specialization for sure, also stuff like variadic generics. C++26 is getting reflection, we’ll see if Rust ever ends up doing that. Still a lot of constexpr stuff that is possible in C++ and not in Rust.
Look at the bright side: This is a very clear message to regulators and users alike.
What sort of things are you thinking about? I’m guessing specialisation is on the list.
Specialization, const generics that actually work as well as C++ int template parameters, placement new, variadics, if constexpr, there’s a lot!
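One of the listed features in action, as a quick illustration (describe is a hypothetical helper): `if constexpr` discards the untaken branch at compile time, so each branch only has to compile for the types that actually reach it.

```cpp
#include <string>
#include <type_traits>

// The std::to_string branch is only instantiated for integral T; for other
// types it is discarded entirely rather than producing a compile error.
template <class T>
std::string describe(const T& x) {
    if constexpr (std::is_integral_v<T>)
        return "int:" + std::to_string(x);
    else
        return "other";
}
```

Rust's nearest equivalents (trait bounds, specialization on nightly, macro dispatch) can't prune a branch inside one generic function body this way.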
Feels like the Downfall meme with all the infighting.