Not just in this specific case, and not just in Go, there is a great tendency among programmers to overuse special or new language features. More than once I have seen people rewrite everything that can be done with a feature (or a trick they learned about), regardless of whether it should be done. JavaScript sees this with nearly every new iteration of the language, which often results in very inconsistent code bases, unless you rewrite everything with framework X anyway.
But it’s also not just JavaScript/ECMAScript or Go. I think the reason for this is that one wants to use up every bit of that excitement about a new feature or trick, and ends up with code that is slow and hard to understand and reason about. I think most people have done that at least once in their programming career. I don’t mean just language features themselves; it can also be excessive use of object-oriented features, sometimes in conjunction with some trick or pattern you learned. Doing this can be good for really developing an understanding, but for the good of your future self, make sure it actually makes sense. This article is a good example, but the point applies to much more.
Instead of falling back to a single goroutine, the author could have used GOMAXPROCS=$(nproc), so that there is one processor core for every routine. The article correctly noted that the measurements were wrong because many goroutines were preempted and not scheduled back in to finalize the timing measurement after the I/O operations were done, probably because no core was free.
Does Go guarantee that if you have N procs and N cores, it will distribute one proc to each core with perfect affinity? You’d never end up preempting a proc in favor of another on the same core due to scheduling vagaries at the runtime and OS level?
The Go runtime explicitly does not provide any goroutine/OS thread affinity guarantees unless you call runtime.LockOSThread.