A modern C++ CI system will also catch a lot of these, so using Go as the baseline feels a bit mean:
Resource leaks: RAII for all resources; warn on explicit mutex lock/unlock rather than RAII wrappers like std::unique_lock / std::lock_guard (the former allows lock ownership to be transferred, the latter is lexically scoped). clang-tidy and friends can warn about many of these.
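A minimal sketch of the two wrappers (function and variable names are just illustrative):

#include <mutex>

std::mutex m;
int counter = 0;

void bump() {
    std::lock_guard<std::mutex> g(m);   // locks on construction, unlocks when g leaves scope
    ++counter;
}                                       // no way to forget the unlock

std::unique_lock<std::mutex> start_batch() {
    std::unique_lock<std::mutex> lk(m); // same RAII idea, but the lock is movable
    counter = 0;
    return lk;                          // ownership of the lock transfers to the caller
}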
Unreleased mutexes are a special case of RAII
Missing switch cases are compiler warnings.
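For example, with -Wall and an illustrative enum:

enum class Colour { red, green, blue };

int code_for(Colour c) {
    switch (c) {                  // with -Wall: warning that 'blue' is not handled
    case Colour::red:   return 1;
    case Colour::green: return 2;
    }
    return 0;                     // adding a 'default:' would silence the warning
}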
Option types. Rust is definitely better than C++ here. You can use std::optional and, I think, most implementations special case pointers so that they compile down to using nullptr as the not-present representation, but require explicit checks for access (though you’re dependent on something like the clang static analyser to notice things where you’re using the direct accessor without checking for validity first, rather than the value_or accessor). This is very much ‘you won’t see this in C++ code written with a good style guide’ vs ‘you can’t express this at all in Rust’.
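A sketch of the two access styles (lookup here is a made-up function):

#include <optional>
#include <string>

std::optional<std::string> lookup(int key) {      // hypothetical: empty when the key is missing
    if (key == 42) return "answer";
    return std::nullopt;
}

std::string describe(int key) {
    auto v = lookup(key);
    std::string safe  = v.value_or("<missing>");  // fine: explicit fallback
    std::string risky = *v;                       // compiles, UB when v is empty;
                                                  // you need a static analyser to catch it
    return safe + risky;
}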
Uninitialised local variables will warn but uninitialised struct fields are particularly painful in C++ and there are lots of ways of writing them that look correct but aren’t.
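For example, something like this (illustrative struct) compiles cleanly and typically draws no warning:

struct Options {
    int  timeout_ms;            // no default member initialiser
    bool verbose = false;
};

Options make_options() {
    Options o;                  // looks reasonable, but o.timeout_ms is indeterminate
    o.verbose = true;
    return o;                   // reading o.timeout_ms later is UB
}
// Options o{}; would value-initialise timeout_ms to 0, and the two spellings are easy to confuse.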
Unhandled explicit errors are not too much of a problem ([[nodiscard]] requires you to do something with the return, LLVM has some error templates that abort if you don’t check the error even on non-error results, though catching this at compile time is nicer than failing in the first test that uses the functionality). Unchecked exceptions are a disaster and are a big part of the reason why I prefer -fno-exceptions for C++ code. Win for Rust here.
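A sketch of the [[nodiscard]] half of that (write_all is a made-up stub):

#include <cstddef>
#include <system_error>

[[nodiscard]] std::error_code write_all(const void*, std::size_t) {  // hypothetical stub
    return {};
}

void caller(const void* buf, std::size_t len) {
    write_all(buf, len);              // warning: discarding a [[nodiscard]] result
    auto ec = write_all(buf, len);    // no warning, but nothing forces us to inspect ec
    if (ec) { /* handle the error */ }
}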
Data races is a fun one because Rust’s Sync trait can be implemented in safe Rust only for immutable objects. Any shared mutable state requires unsafe, including standard library types such as Arc. This, in turn, means that you’re reliant on this unsafe code being correct when composed with all other unsafe code in the same program. Still a nicer place to be than C++ though.
I’m not sure I understand the Hidden Streams argument so I can’t comment on it.
Rust definitely has some wins relative to modern C++. When we looked at this, our conclusion was:
Rust and modern C++ with static analysis required for CI prevent very similar levels of bugs.
C++ is often better in places where you’re doing intrinsically unsafe things (e.g. a memory allocator) because the Rust analysis tooling is much better for the safe subset of the language than the unsafe subset, whereas all C++ analysis tools are targeted at unsafe things.
Preventing developers from committing code that doesn’t compile to a repo is orders of magnitude easier than preventing them from deciding that they know better than a static analyser and marking real bugs as analyser false positives.
The last one of these is really critical. C++ code can avoid these bugs, Rust code must avoid them unless you use unsafe. We assumed that avoiding unsafe was something code review would easily handle, though since I’ve learned that Facebook has millions of lines of unsafe Rust code I’m now far less confident in that claim.
This is what makes -Wall -Werror an attractive nuisance of sorts. Unfortunately, the projects I’ve worked on that would benefit most from those also had enough spurious warnings triggered by headers for dependencies that it was never practical to leave them enabled.
You can use std::optional and, I think, most implementations special case pointers so that they compile down to using nullptr as the not-present representation.
You mean references, not pointers, right? If this were done for pointers, then the “present NULL” and “absent” states would be indistinguishable.
I meant pointers, but you’ve made me realise that I’m probably wrong. std::optional is not defined for T&.
If this were done for pointers, then the “present NULL” and “absent” states would be indistinguishable.
I had always assumed that std::optional<T*> x{nullptr} would give a not-present value but a quick test suggests that this is not the case. I’ve learned something today!
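The quick test is presumably something like this:

#include <cassert>
#include <optional>

int main() {
    std::optional<int*> x{nullptr};
    assert(x.has_value());        // engaged: a present value that happens to be nullptr
    assert(*x == nullptr);

    std::optional<int*> y;        // std::nullopt
    assert(!y.has_value());       // the genuinely absent state is distinct,
                                  // so nullptr can't double as the not-present representation
}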
Is the null pointer optimization for optional pointers even legal in C++? It seems like, since there’s no notion of non-nullable pointers, you can’t make the optimization safely, because you should be able to distinguish between “no value” and the nullptr. Also, the thing with Rust is that when you’re implementing data structures, which is something that Facebook will do a lot throughout their codebase, you often do need unsafe. The point is that you can wrap these unsafe blocks in safer abstractions and then the calling code needs less review. That’s not to say unsound code never pops up, but having unsafe operations constrained to specific places is still really helpful.
panic!() exists in Rust, but that’s not how recoverable errors are handled.
As an occasional user of long-running programs written in Rust, I’m glad that I’m (probably) protected against certain types of bugs, but I wish panics weren’t so common.
A controlled crash is better than an uncontrolled one, but that’s of little comfort when my work is lost.
And it’s often used for recoverable errors, in my experience. For example, allocation failure is rarely handled, and the assumption seems to be that everyone does overcommit.
Allocation failure doesn’t panic, it aborts the process. There are plans to make it panic, but this is not yet stable. The motivation, as far as I recall, is not overcommit, but rather bad experiences in Firefox with handling allocation failures, which, in C++, often escalated allocation failure to straight-up UB. At the time of Rust 1.0, it wasn’t clear whether a panicking alloc would be easy enough to handle in unsafe code, so the conservative option of just aborting was chosen. To reiterate, I think this was the story, but I am not 100% sure.
I think that, depending on the kind of application, a panic upon failing to allocate (or lock a mutex, access stdin…) is valid. At least for applications that don’t allocate much (not a video renderer with gigs of usage), which I’d presume are the Rust programs we’re talking about.
That said, I certainly do prefer a controlled crash for many things (especially on server software), if only so that it goes through my own logging framework and gives additional context instead of throwing a panic into the stdout void. But I’d also expect applications handling user data to use auto-backup files so that a mid-flight crash (OS hangup) doesn’t lose everything just because I didn’t press save for the last hour. (Which I do, out of fear/experience.) And I do use expect on mutexes, thread joins and channels.
It’s so easy to forget to close a file or a connection
That’s a design choice of the http.Client. I wrote an HTTP library that makes it impossible to forget to close the connection. It’s just a matter of accepting a callback. There’s no need for any special language features here.
I’ve never seen this error in production. Is it really a common mistake, or is it an easy thing for the compiler to enforce? There are Go linters for it, for example.
Invalid pointer dereference
This happens, but .unwrap() also panics. I think the main thing here is to force the programmer to think about it.
Uninitialized variables
This is a weird one to complain about Go for. They’re all initialized to zero.
Data races
There’s the Go race detector to detect this in tests, but yes, it’s nice to catch it sooner. Is it worth the cost in development time though?
Hidden Streams
I don’t understand this one. If you use a type wrong, it does the wrong thing. The io types are one of Go’s best features.
I’ve never seen this error in production. Is it really a common mistake, or is it an easy thing for the compiler to enforce? There are Go linters for it, for example.
It definitely is, or there wouldn’t be linters for it. Even a catch-all default case has caused errors in code for me. Not crashes, but logic errors or UX errors which can only be found through manual testing.
but .unwrap() also panics
That’s the entire point. I wish it were spelt !! as in Kotlin, since that stands out more.
They’re all initialized to zero
This is nonsensical, and dangerous too. Just like a broad default case, it causes hard-to-detect logic errors.
If you add a new field to a type, do you want to break all downstream libraries or not? The argument here seems to be, yes, break everyone dependent on your package because you want a new field. That’s a hard pill to swallow, IMO. So, if you don’t do that, what then? Go’s answer is, there’s a simple rule that all structs everywhere have the same zero value defaults, so if you add a new field, make it work with that default. Sometimes that ends up being awkward because you really want “BeSecure” to be true by default, so you name it “BeInsecure” so that it’s false by default, but for the most part it works out. If you really want to break your downstream, rename an existing field to WhateverV2 or only accept a V2Options struct. There are tons of ways of breaking code if that’s the goal.
If you are adding a new field to a struct you’re vending to others, it is your responsibility to either communicate that it’s a breaking change, or vend a new struct, or add logic to handle a missing value. But you can’t conflate missing values with »zero« values. There is a difference between nil and an empty string, or nil and 0. For a particular type (dates, times, telephone numbers, addresses, music notes), a »zero« value will be pure nonsense or an error. This is something a sum type like Optional or Result can express, and it is an important distinction.
it was never practical to leave them enabled
Tip: Use -isystem to include dependencies. Tells the compiler that it’s not your code.
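For example (paths here are illustrative): g++ -Wall -Werror -I src -isystem third_party/include src/main.cpp. Headers found via -isystem are treated as system headers, whose warnings GCC and Clang suppress by default, so -Werror only bites on your own code.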
Allocation failure doesn’t panic, it aborts the process
I think there are also APIs being added to allow handling failure, but it remains to be seen if they will become widely used.
I just think it would be nice if things didn’t crash so much, in addition to not having certain types of vulnerabilities. :)
There’s no need for any special language features here
Ditto.
This is something a sum type like Optional or Result can express, and it is an important distinction
Go has generics now, so nothing stops you from making a field Optional[time.Time] or whatever.
How does that look without sum types?
Optional doesn’t need a sum type. You have a value and a bool for validity.
type Optional[T any] struct { value T; valid bool }
Does Result need a sum type?
No, because Result is T, error. An arbitrary sum type of T1, T2, T3… would require sum types, which Go does not currently have.