I find the complaints about Go sort of tedious. What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors? For some reason, the author finds the first unacceptable but the second laudable, but practically speaking, why would I care? I write Go programs by stubbing stuff to the point that I can write a test, and then the tests automatically invoke both the compiler and go vet. Whether an error is caught by the one or the other is of theoretical interest only.
Also, the premise of the article is that the compiler rejecting programs is good, but then the author complains that the compiler rejects programs that confuse uint64 with int.
In general, the article is good and informative, but the anti-Go commentary is pretty tedious. The author is actually fairly kind to JavaScript (which is good!), but doesn’t have the same sense of “these design decisions make sense for a particular niche” when it comes to Go.
What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?
A big part of our recommendation of Rust over modern C++ for security boiled down to one simple thing: it is incredibly easy to persuade developers to not commit (or, failing that, to quickly revert) code that does not compile. It is much harder to persuade them not to commit code that static analysis tooling tells them is wrong. It’s easy for a programmer to say ‘this is a false positive, I’m just going to silence the warning’; it’s very difficult to patch the compiler to accept code that doesn’t type check.
What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?
One is optional, the other one is in your face. It’s similar to the C situation. You have asan, ubsan, valgrind, fuzzers, libcheck, pvs and many other things which raise the quality of C code significantly when used on every compilation or even commit. Yet, if I chose a C project at random, I’d bet none of those are used. We’d be lucky if there are any tests at all.
Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.
(According to the docs only a subset of the vet suite is used when running “go test”, not all of them - “high-confidence subset”)
When go vet automatically runs on go test, it’s hard to call it optional. I don’t even know how to turn it off without digging into the documentation, and I’ve been doing Go for 12+ years now. Technically gofmt is optional too, yet it’s as pervasive as it can be in the Go ecosystem. Tooling ergonomics and conventions matter, as does first-party (go vet) vs third-party tooling (valgrind).
That means people who don’t have tests need to run it explicitly. I know we should have tests - but many projects don’t, and in practice those projects just miss out on the warnings.
Even in projects where I don’t have tests, I still run go test ./... when I want to check if the code compiles. If I used go build I would have an executable that I would need to throw away. Being lazy, I do go test instead.
Separating the vet checks from the compilation procedure exempts those checks from Go’s compatibility promise, so they could evolve over time without breaking compilation of existing code. New vet checks have been introduced in almost every Go release.
Compiler warnings are handy when you’re compiling a program on your own computer. But when you’re developing a more complex project, the compilation is more likely to happen in a remote CI environment, and making sure that all the warnings are bubbled up is tedious and in practice usually overlooked. It is thus much simpler to just have separate workflows for compilation and (optional) checks. With compiler warnings you can certainly have a workflow that does -Werror; but once you treat CI as being as important as local development, the separate-workflow design is the simpler one - especially considering that most checks don’t need to perform a full compilation and are much faster that way.
Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.
I feel that the Go team cares more about enabling organizational processes than about encouraging individual habits. The norm for well-run Go projects is definitely to have vet checks (and likely more optional linting, like staticcheck) as part of CI, so that’s perhaps good enough (for the Go team).
All of this is quite consistent with Go’s design goal of facilitating maintenance of large codebases.
And for a language with as… let’s politely call it opinionated a stance as Go, it feels a bit odd to take the approach of “oh yeah, tons of unsafe things you shouldn’t do, oh well, up to you to figure out how to catch them and if you don’t we’ll just say it was your fault for running your project badly”.
Yeah, “similar” is doing some heavy lifting there. The scale is more like: default - included - separate - missing. But I stand by my position - Rust is more to the left than Go, and that’s a better place to be. The less friction, the more likely people will notice/fix issues.
I’ll be honest, I get this complaint about it being an extra command to run, but I haven’t ever run go vet explicitly because I use gopls. Maybe I’m in a small subset going the LSP route, but as far as I can tell gopls by default has good overlap with go vet.
But I tend to use LSPs whenever they’re available for the language I’m using. I’ve been pretty impressed with rust-analyzer too.
On the thing about maps not being goroutine safe, it would be weird for the spec to specify that maps are unsafe. Everything is unsafe except for channels, mutexes, and atomics. It’s the TL;DR at the top of the memory model: https://go.dev/ref/mem
Agreed. Whenever people complain about the Rust community being toxic, this author is who I think they’re referring to. These posts are flame bait and do a disservice to the Rust community. They’re like the tabloid news of programming, focusing on the titillating bits that inflame division.
I don’t know if I would use the word “toxic” which is very loaded, but just to complain a little more :-) this passage:
go log.Println(http.ListenAndServe("localhost:6060", nil))
…
Jeeze, I keep making so many mistakes with such a simple language, I must really be dense or something.
Let’s see… ah! We have to wrap it all in a closure, otherwise it waits for http.ListenAndServe to return, so it can then spawn log.Println on its own goroutine.
go func() {
    log.Println(http.ListenAndServe("localhost:6060", nil))
}()
There are approximately 10,000 things in Rust that are subtler than this. Yes, it’s an easy mistake to make as a newcomer to Go. No, it doesn’t reflect even the slightest shortcoming in the language. It’s a very simple design: the go statement takes a function and its arguments. The arguments are evaluated in the current goroutine. Once evaluated, a new goroutine is created with the evaluated parameters passed into the function. Yes, that is slightly subtler than just evaluating the whole line in a new goroutine, but if you think about it for one second, you realize that evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.
Like, I get it, it sucks that you made this mistake when you were working in a language you don’t normally use, but there’s no need for sarcasm or negativity. This is in fact a very “simple” design, and you just made a mistake because even simple things actually need to be learned before you can do them correctly.
I agree that the article’s tone isn’t helpful. (Also, many of the things that the author finds questionable in Go can also be found in many other languages, so why pick on Go specifically?)
But could you elaborate on this?
evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.
IMO this is less surprising than what Go does. The beautiful thing about “the evaluation of the whole expression is deferred” is precisely that you don’t need to remember a more complicated arbitrary rule for deciding which subexpressions are deferred (all of them are!), and you don’t need ugly tricks like wrapping the whole expression in a closure which is then applied to the empty argument list.
Go’s design makes sense in context, though. Go’s authors are culturally C programmers. In idiomatic C code, you don’t nest function calls within a single expression. Instead, you store the results of function calls into temporary variables and only then pass those variables to the next function call. Go’s design doesn’t cause problems if you don’t nest function calls.
At least they mention go vet, so even people like me who didn’t know about it can arrive at similar conclusions. And they do mention that the author is somewhat biased.
But I think they should just calmly state, without ceremony like “And yet there are no compiler warnings”, that this is the compiler output and this is the output of go vet.
This also seems unnecessary:
Why we need to move it into a separate package to make that happen, or why the visibility of symbols is tied to the casing of their identifiers… your guess is as good as mine.
Subjectively, this reads as unnecessarily dismissive. There are more instances similar to this, so I get why you are annoyed. It makes their often valid criticism weaker.
I think it comes as a reaction to people readily agreeing that golang is so simple, while in their (biased but true) experience it is full of little traps.
Somewhat related: what I also dislike is that they use loops for creating the tasks in golang, discuss a resulting problem, and then don’t use loops in rust - probably to keep the code simple.
All in all, it is a good article though and mostly not ranty. I think we are setting the bar for fairness pretty high. I mean we are talking about a language fan…
Agree. The frustrating thing here is that there are cases where Rust does something not obvious, the response is “If we look at the docs, we find the rationale: …” but when Go does something that is not obvious, “your guess is as good as mine.” Doesn’t feel like a very generous take.
The information is useful but the tone is unhelpful. The difference in what’s checked/checkable and what’s not is an important difference between these platforms – as is the level of integration of the correctness guarantees with the language definition. Although a static analysis tool for JavaScript could, theoretically, find all the bugs that rustc does, this is not really how things play out. The article demonstrates bugs which go vet cannot find but which are precluded by Rust’s language definition – that is real and substantive information.
There is more to Go than just some design decisions that make sense for a particular niche. It has a peculiar, iconoclastic design. There are Go evangelists who, much more strenuously than this author and with much less foundation, criticize JavaScript, Python, Rust, &c, as not really good for anything. The author is too keen to poke fun at the Go design and philosophy; but the examples stand on their own.
It is definitely not correct — the read of pc.value needs to be guarded by the mutex, same as the write.
This is concurrency 101 stuff. And it’s totally fair to say that the subtlety here is difficult and worth solving at a language level! But if you’re going to offer a critique, I think you need to have a better understanding than what’s demonstrated here.
If you want your system to be sequentially consistent, then you need to ensure reads and writes are issued in the same order as if the program were run sequentially. In this case, if you don’t guard the read with a mutex, then your reads and writes could be reordered; a read may occur before a write even if it needed to occur after. For some applications this is fine (then your application does not need sequential consistency), but some form of locking must occur here to keep the operation sequentially consistent. An alternative (not a preferred one, just one) here would be to use a reader-writer lock. When the writer has this lock, readers are excluded from the critical section; multiple readers can share a lock but writers are blocked until readers release the lock.
Because the memory model of the language requires it. No operation is safe unless explicitly documented to be safe (e.g. sync/atomic). And there is no such thing as a benign data race ;)
Him describing using a trivial Go API, which includes a code snippet at the top of the docs he linked:
Oh! OH! We’re supposed to spawn the server in its own goroutine haha, what a silly mistake. I hope no one else ever does that silly silly mistake. It’s probably just me.
Him describing using async rust:
We can also do that with async, say, with the tokio crate:
ah look at these idiots, using sharp tools without protections! Hey, idiots! Wanna see how we do it here ?!
I do like Rust, but these kinds of rants really perpetuate the image of the smug Rust developer. I know the rest of the community is friendlier, but ouch, that’s disappointing and somewhat harmful.
Also, wow so many unwrap-s in that beautiful and idiomatic Rust code… :p
Regarding the deadlock at the end: besides miri, which may not be suitable for application code, are there other tools - linters or runtime checks - to detect and debug these problems?
edit apparently there’s at least an experimental deadlock detector in the parking_lot library. Question still stands for other usecases.
There’s some irony in the first example, where the post opens talking about mistakes that the language can simply make impossible by leaving features out, and then it talks about different problems caused by early returns. Turns out if your language disallows early returns, none of that nonsense is even possible.
Turns out if your language disallows early returns, none of that nonsense is even possible
I don’t think any languages I’ve used regularly have made that design choice, although it is one I’ve had enforced through static analysis on code bases I’ve worked on. Are you thinking of any offhand that do that? That’s not enough alone to make me switch to a new language, just curious to learn more.
Sure; almost every lisp works this way, as do Erlang and Elixir. I believe it’s true of OCaml too but I’m not positive. Probably Haskell and Forth, though you might get into an argument on what it even means for a return to be early in those languages.
Lisp has block/return-from (not to mention tagbody/go), and scheme has call/cc. Forth permits first-class access to the return stack. Ocaml and haskell both have exceptions.
Common Lisp isn’t “most lisps”, and in context call/cc is not equivalent to early returns as pertains to the problem of unreachable code described in the context of the post.
This is a funny first example to give, since cargo clippy actually does catch this error when implementing the standard Add trait, which is what someone would normally do for a custom type (instead of just writing a bare function called add).
use std::ops::Add;

struct U(u32);

impl Add for U {
    type Output = U;

    fn add(self, other: Self) -> U {
        U(self.0 - other.0)
    }
}

$ cargo clippy
warning: suspicious use of `-` in `Add` impl
Subjecting warnings to compatibility guarantees is something that C is coming to regret (prior discussion).
The difference is one language brings the auditing into the tooling. In C, it’s all strapped on from outside.
In practice, about 99% of uses of the go keyword are in the form go func() {}(). Maybe we should optimize for the more common case?

I did a search of my code repo, and it was ⅔ go func() {}(), so you’re right that it’s the common case, but it’s not the 99% case.
The author has years of Go experience. He doesn’t want to be generous; he has an axe to grind.
So where’s the relevant docs for why
or
This is simply not true. I’m not sure why the author claims it is.
This is Go fundamental knowledge.
Yes, I’m talking about the rationale.
https://go.dev/tour/basics/3
rationale, n.
a set of reasons or a logical basis for a course of action or belief
Why func and not fn? Why are declarations var identifier type and not var type identifier? It’s just a design decision, I think.

https://go.dev/ref/spec#Packages
Why do you need to guard the read if your value is just an int? There’s no way you could get an int in an inconsistent/broken state
The issue was the lack of a warning or error. Are you suggesting that using tokio in some way hides errors or warnings?
OP wrote the following code and complained that other_stuff wasn’t called.

AFAIK, the halting problem hasn’t been solved.
The TL;DR goes like this:
Appreciate it. I haven’t used any of these beyond maybe a little fooling around with Haskell at one point, so it’s definitely a blind spot for me.
It’s not? That’s certainly what it was meant to be (hence the name), and I think it was fairly successful.
Pascal doesn’t allow returning early. I’m not sure about the follow-on languages like Modula or Oberon.
Modula-2 allows early returns AFAIK.