This article is full of misinformation. I posted details on HN: https://news.ycombinator.com/item?id=26834128.
This really shouldn’t be needed, and even someone without any exposure to Go can see this is just bunk with a minimal application of critical thinking. It’s sad to see it so highly upvoted on HN.
When I was in high school, one of my classmates ended up with a 17A doorbell in some calculation. I think he used the wrong formula or swapped some numbers; a simple mistake we all make. The teacher, quite rightly, berated him for not actually looking at the result of his calculation and judging whether it was roughly in the right ballpark. 17A is a ludicrous amount of current for a doorbell, and anyone can see that’s just spectacularly wrong; the highest-rated domestic fuses we have are 16A.
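That ballpark judgement is just power-law arithmetic. As a sketch (the wattage and voltage below are assumed, typical doorbell figures, not numbers from the story):

```go
package main

import "fmt"

func main() {
	// Assumed ballpark figures for a typical doorbell chime;
	// not taken from the story above.
	const powerWatts = 4.0  // chime power draw
	const supplyVolts = 8.0 // low-voltage transformer output

	// I = P / V gives the rough current a sane answer should land near.
	current := powerWatts / supplyVolts
	fmt.Printf("expected doorbell current: %.2f A\n", current) // ~0.5 A, nowhere near 17A
}
```

Any answer two orders of magnitude above that should trip the smell test.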
If this story had ended up with 0.7%, sure, I could believe that. 7%? Very unlikely, and I’d be skeptical, but still possible, I suppose. 70%? Yeah nah, that’s just as silly as a 17A doorbell. The author should have seen this, and so should anyone reading it, with or without exposure to Go. This is just critical thinking 101.
Besides, does the author think the Go authors are stupid blubbering idiots who somehow missed this elephant-sized piece of low-hanging fruit? Binary sizes have been a point of attention for years, and somehow missing 70% of wasted space in “dark bytes” would be staggeringly incompetent. If Go were written by a single author then I suppose it would have been possible (though still unlikely), but an entire team missing this for years?
Everything about this story is just stupid. I actually read it twice because surely someone can’t make such a ludicrous claim with such confidence, on the cockroachdb website no less? I must be misunderstanding it? But yup, it’s really right there. In bold even.
I think this is really interesting from a project management and public perception point of view. This is slightly different from your high school classmate, because they might not have been aware of the ridiculousness of their result. Of course, this situation could be the same, but I think it is more interesting if we assume the author did see this number, thought it was ridiculous, and still wrote the article anyway.
Someone doesn’t write a post like this without feeling some sort of distrust of the tool they are using. For some reason, once that trust is lost, people start making outlandish claims without giving any benefit of the doubt. This feels similar to the Python drama which ousted the BDFL, and to Rust’s actix-web drama which ousted the founding developer. Once trust in whoever is making the decisions is lost, logic and reason seem to just go out the window. Unfortunately this can snowball, with people acting very nasty for no real reason.
I don’t have much knowledge of the Go community or its drama, and in some sense this is at least put much more nicely than some of Rust’s actix-web drama (which really threw good intent out the window), but I’d be curious to know what happened to lose the trust here. The fix might be as simple as being upfront about the steps being taken to reduce binary size; even if they aren’t very impactful, that transparency might win back trust in this area.
It’s my impression that the Python and actix-web conflicts were quite different: with Python, Guido just quit because he got tired of all the bickering, and actix-web was more or less similar (AFAIK neither was “ousted”; both quit on their own?). I only followed those things at a distance, but that’s the impression I had anyway.
But I think you may be correct about the lack of trust – especially when taking the author’s comments on the HN story into account – though it’s hard to say for sure, as I don’t know the author.
Perhaps I am over-generalizing, but I think they are all the same thing. Rust’s actix-web drama essentially boiled down to some people having a mental model of Rust that involves no unsafe code (which differed from the primary developer’s mental model). At some point this went from “let’s minimize unsafe” to “any unsafe is horrible and makes the project and developer a failure”, regardless of whether the unsafe code was actually valid. Unfortunately it devolved to the point where the main developer left.
The Go situation seems very similar. Some people have a mental model in which any binary bloat is unacceptable, while the core devs see the situation differently (obviously balancing many different requirements). This article looks like that disagreement boiling over, to the point where any unaccounted-for bits in a binary are completely unacceptable, leading to outlandish claims like “70% of the binary is wasted space”. Hopefully no Go core developers take this personally enough to leave, but it seems like a very similar situation, where differing mental models and a lack of trust lead to logic and the benefit of the doubt being thrown out the window.
It is hard to say for sure what is going on, and in many ways I’m just being an armchair psychologist with no degree, but I think it is interesting how common this trend is: at some point, projects doing a balancing act get lashed out at, with perceived imbalances misconstrued as malicious intent.
I don’t think you’re correctly characterizing the actix situation. I think the mental model was “no unnecessary unsafe”. There were some spots where the use of unsafe was correct but unnecessary, and others where it was incorrect and dangerous. There was poor behavior on both sides of that situation: the maintainer consistently minimized the dangerous uses and closed issues, while a bunch of people on the periphery acted like a mob of children and just kept piling on. I personally think someone should have forked it and moved on with their lives instead of trying to convince the maintainer to the point of harassment.
on the cockroachdb website no less
Cockroachdb is on my list to play with on a rainy afternoon, but this article did knock it down the list quite a few notches.
We use it as our main database at work and it’s pretty solid. The docs for it are pretty good as well. But I definitely agree, this is a pretty disappointing article.
Hmm none of the flag options are “this article contains blatantly false or misleading information”.
I haven’t read rsc’s critique of the article yet, but while reading the tables I’m thinking, why are they comparing different CockroachDB versions? Surely, if the point was to find bloat over time due to changes in Go, one would want to look at the same program across Go versions?
I took the time to do my best to compile an older and a newer version of the code with many different Go versions and here’s what I got:
Go version   older release (bytes)   v20.2.0 (bytes)
1.8          58,099,688              n/a
1.9          57,897,616              314,191,032
1.10         57,722,520              313,669,616
1.11         48,961,712              233,170,304
1.12         52,440,168              236,192,600
1.13         50,844,048              214,373,144
1.14         50,527,320              212,699,656
1.15         47,910,360              201,391,416
1.16         47,317,624              205,018,136
I couldn’t compile v20.2.0 with go1.8 because of its heavy use of type aliases (only supported since Go 1.9), which would have taken a very long time to remove. I already had to make some other smaller changes and backport standard library code to get it to compile; type aliases crossed the line for me.
The overall binary size has actually decreased, by roughly 19% for the older release and 35% for v20.2.0, which means that all of this “dark code” is just their inability to account for their own binary, and all of the growth is due to their own code base.
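Those percentages can be recomputed from the endpoints of the table above (byte counts copied verbatim); a throwaway sketch:

```go
package main

import "fmt"

// percentSmaller reports how much smaller `to` is than `from`, in percent.
func percentSmaller(from, to float64) float64 {
	return (from - to) / from * 100
}

func main() {
	// Binary sizes in bytes, copied from the table above.
	oldGo18, oldGo116 := 58_099_688.0, 47_317_624.0   // older release
	newGo19, newGo116 := 314_191_032.0, 205_018_136.0 // v20.2.0

	fmt.Printf("older release, go1.8 -> go1.16: %.1f%% smaller\n", percentSmaller(oldGo18, oldGo116))
	fmt.Printf("v20.2.0, go1.9 -> go1.16: %.1f%% smaller\n", percentSmaller(newGo19, newGo116))
}
```

This prints roughly 18.6% and 34.7%: both builds got smaller across Go releases, not larger.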
I have no idea how much of the resulting binary they account for, but I’d be surprised if it were negligible.
Most likely the current version doesn’t build with old Go. Probably there exists an older version that builds across all Go versions, but then the numbers wouldn’t be as “impressive” :-P
Yes, this is the usual reason. When I benchmarked how Rust compilation speed changed over the years, it was tricky to find a library that compiled all the way back with Rust 1.0. I ended up using an old version of a library for parsing URLs.