I’ve been working on a follow-up to a discussion I had about how GNU’s implementation of yes achieves higher throughput than any other implementation through its use of a buffer holding two pages of "yes\n". (Reddit post on r/unix, lobsters)
I’m now interested in benchmarking the speed of a virtual terminal; hopefully it’ll be ready by the end of this week!
The only book I’ve read this year is Masters of Doom; I haven’t been reading much. I think someone here on lobsters recommended it, and I would very strongly recommend it: it goes behind the scenes of Doom’s development, id Software’s internal structure, and how that structure influenced the games id developed.
It’s about 350 pages; I think I finished it in about a week.
Finally getting around to finishing Masters of Doom. Excellent story about Doom, id Software, some of the engine technology, and the culture surrounding all of it.
What do you think of it? It’s been on my reading list for years.
I loved it. Very engaging story and well-written. I don’t think anyone interested in the subject matter would be disappointed with the book.
This is so cool! I really like the structure of this post: recognizing something one implementation has done well (and, by implication, where the others have fallen short) and then explaining why.
Optimizing to the extreme for fun is kind of interesting, but doing it at the expense of clarity with nothing really to gain seems like a loss.
I really don’t like GNU’s implementation, NetBSD and COHERENT seem to have the most readable yes out of all the yesses I looked over (BusyBox had the worst).
It may be possible to apply this to other utilities like dd and cat, which I plan to look into soon (unless someone else beats me!).
Who on Earth thinks that BusyBox thing is a good idea? I’d hate to see anything even remotely complicated from whoever wrote that.
It’s super compact both in code size and resource consumption (one stack variable!!), and it’s still relatively easy to understand. I’d say it’s doing its job marvellously.
Haven’t had time to look at the code, but Alpine Linux uses it by default.
And it’s targeted mostly at embedded Linux, so I’m guessing ultra-optimization is more important to them than readability in this case.
Yeah, that isn’t cool. I thought they were just trying to avoid reusing a variable; then I realised they were reusing a variable and/or moving on to argv :(
with nothing really to gain
One poster on Hacker News suggested this: https://news.ycombinator.com/item?id=14543640
Classic HN. Always reject the mundane explanation that the program is fast because somebody wanted it to go fast in favor of a narrative involving an epic struggle against corporate overlords.
Check the thread again; GNU explicitly asks people to do this: https://www.gnu.org/prep/standards/standards.html#Reading-Non_002dFree-Code
So why did they wait so long to make this change?
I’m rejecting your characterization of that HN comment, because this is a common method for GNU programs. I am not rejecting your assessment of why it changed though.
This wasn’t done “with nothing really to gain” (although the gain might be subjective). It was performed as a reaction to a filed bug:
Interesting, I wonder what the backstory to that is. The example is specific enough (involving a pipeline of yes, echo, a shell range expansion, head, and md5sum) that it looks like an unexpected slowdown someone actually ran into in practice, rather than just a bored person benchmarking yes.
If “yes” was written once, decades ago, and someone spent all of one week validating this change, I’m OK with getting a 10x performance increase on every *nix system in existence, ongoing.
I love it when pipelines/shell scripts can scale vertically for a long time before having to rewrite in some native language.