we talk about programming like it is about writing code, but the code ends up being less important than the architecture, and the architecture ends up being less important than social issues.
This is a trenchant remark.
and still it’s forbidden to speak about “non-technical” topics on Lobste.rs
Yep. Perhaps for the same reasons why “just buy the Daily Mail” is not a sensible answer to “I want to practice writing poems”.
While engineers’ takes on social issues are always… less than ideal, the fact that it’s not even possible to include arguments from more qualified people or discuss high-quality content reinforces the inability of engineers to understand and discuss these issues and normalizes the ideology that “technology is not political”.
Or, you know, going to your local lake and pestering that one lone guy with his fishing rod about the global disaster of mega-trawlers decimating the oceans’ fish population is not going to win you a new ally.
There is a time and place for everything.
reinforces the inability of engineers to understand and discuss these issues
I find it highly questionable that you go from “people here don’t want to focus on discussing politics in this place” (fact) to “engineers are mentally disadvantaged in understanding political issues” (insult).
If you hold people here in such low esteem, why not build your own debate place with the focus and the audience you prefer?
I did it. I’m one of the maintainers of gambe.ro, lobsters’ sister website with extra politics (and in Italian).
I think that in this case the author was referring to the social aspects of open source work, expanded on in the later parts of the article.
I’ve never worked on a project complicated enough to need something like Ninja, but I appreciated the humble, down-to-earth tone here. This part especially struck me, about being a maintainer:
A different source of sadness were the friendly and intelligent people who made reasonable-seeming contributions that conflicted with my design goals, where I wanted to repay their effort with a thorough explanation about why I was turning them down, and doing that was itself exhausting.
I really liked the article too, and learned a bunch of things from it, but I would quibble with this one part:
People repeatedly threatened to fork the project when I didn’t agree to their demands, never once considering the possibility that I had more context on the design space than they did.
Forking is a feature, not a bug. Forks are experiments, and can be merged, and that has happened many times (XEmacs, etc.)
So if I were the maintainer, I would encourage all the dissenters to fork (as well as rewrite, which he mentioned several people did). Ninja is used by Chrome so it’s not like its core purpose will be diluted.
Of course acting like it’s a “threat” to fork is silly … the forkers will quickly find out that this just means they have a bunch more work to do that the maintainers previously did for them.
So in other words, if you don’t want users to treat the project as a product with support, then don’t look down upon forks? That is pretty much the way I think about Oil. It’s a fairly opinionated project, and I don’t discourage forks (but so far there’s been no reason to).
I would use the forks as a sign that people actually care enough to put their own skin in the game… and figure out a way to merge the changes back compatibly after a period of research/development (or not, if it is deemed out of scope).
Anyway, I think Ninja is a great project, and I hope to switch Oil to it in the near future for developer builds (distro builds can just use a non-incremental shell script). A couple years ago, I wrote a whole bunch of GNU make from scratch and saw all its problems first hand. Oil has many different configurations, many steps that could be executed in parallel, and many different styles of tools it invokes (caused by DSLs / codegen). So GNU make falls down pretty hard for this set of problems.
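To make that concrete: a single code-generation step with several outputs is painful to express in GNU Make (before 4.3’s grouped targets, as far as I remember, you needed pattern-rule or stamp-file tricks to keep -j from running the generator twice), while Ninja handles it natively. A hypothetical sketch, with the script and file names made up for illustration:

rule asdl-gen
  command = ./asdl_gen.py $in --out-prefix $gen_prefix
  description = ASDL $in

# One action, two declared outputs -- Ninja tracks both.
build _gen/syntax.h _gen/syntax.cc: asdl-gen frontend/syntax.asdl
  gen_prefix = _gen/syntax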
I think it really depends on the fork itself. Experimental forks are great, but often when someone “threatens” it, they also intend to try to pull part of the community off with them and may not have any intention of ever giving back anything. One example of this was ffmpeg vs libav, where the latter was a “hostile fork” that caused all sorts of general trouble (remember when running ffmpeg on Debian said the command was deprecated?), and even though it eventually died off in popularity, it didn’t happen soon enough to avoid all sorts of nasty drama.
If you don’t plan to support the project, the prospect of others pulling part of the community off should feel like a relief.
I agree with you that it is possible for forking to be a threat, but it’s usually just the way it’s said: do this or else. (And usually it’s an empty threat. Someone clueless enough to consider it a threat is usually not actually planning to follow through.)
But a polite heads-up that one intends to fork a project should always be cause for relief, since it works out some unresolved tension in the community. Taking part of the community away is the whole point of a fork. If the new fork didn’t intend to support part of the community, they wouldn’t be talking about it. And if people didn’t try it out, it wouldn’t be an experiment.
Yeah that is one fork I remember being surprised by since I’m a Debian/Ubuntu user… But I would still say the occasional fork is evidence of the system working as intended. There was a slight inconvenience to me, but that doesn’t outweigh improving the overall health of the ecosystem through a little competition.
Some people may do it in bad faith, but that doesn’t change the principle. It’s hard to see any instance where forking made things worse permanently, whereas I can see many cases where the INABILITY to fork (e.g. closed source software) made things worse forever.
e.g. when I used windows I used to use http://www.oldversion.com/
e.g. Earlier versions of Winamp were much better. Same with a lot of Microsoft products. If it were open source then there would be no need for “oldversion.com” (and there is none AFAIK). The offending new features can be modularized / optimized. There are some open source releases that are bungled, but outsiders are often able to fix them with patches or complaints.
[OP here] Thanks for this comment, it is very insightful.
Upon reflecting after writing the post I came to the same conclusion as you, that I should have encouraged forks more as a way to offload responsibility. I think at the time I was more excited about “fame” or whatever for my project, and now that I’m on the other side of it I realize that it wasn’t worth it. I have a similar thing with my work at Google – a younger me wanted to share it all with the world, but these days I am relieved when only people within Google can ask me about it.
Unfortunately forks are not as free as we’d like. Imagine someone makes a fork that adds some command-line flag they want, and then some random package starts depending on that; now users are confused about which fork to use, contributors are split, Debian has to decide whether to package both forks, and so on.
I think I wouldn’t mind forks if they were about meaningful changes, like the person who wanted to make Ninja memory resident to make it scale to their problem space. Especially when you’re making an app that works on multiple platforms, I’ve sometimes wondered if the best way to maintain it is via mutually communicating forks (e.g. a new release on Linux means that the Windows fork can adapt which changes are relevant to it). It’s the forks about trivialities that are frustrating, and in particular because in the context of a trivial change the word “fork” is brought up not in the way you intend, but rather just as a rhetorical weapon.
Yeah it’s a tough issue for sure. I think different names are important, so a publicly distributed fork of ninja shouldn’t be called ninja. That way it can support different flags, or even a different input language.
That seems to be respected by most forkers: libav != ffmpeg, and emacs != xemacs, etc. The distro issue is also tricky, as Debian switched to libav and then back to ffmpeg. IMO they should try to make packages “append only”, but I understand that curation also adds some value.
But I’d say the people who actually would follow through on a fork are rather the least of the problem. Of course it’s not easy to tell who those people are to begin with. Those are the people who have the technical expertise to help the project too!
Another good example of a fork is Neovim. In this video one of the primary contributors to Neovim shows some pretty good evidence that it has met user demand, and also motivated vim author Bram Moolenaar to add features that he resisted for many years!
https://vimconf.org/2019/slides/justin.pdf
https://www.youtube.com/watch?v=Bt-vmPC_-Ho
It may not have been pleasant for Bram to have his work criticized, but I think it’s healthy criticism when someone puts in the work, rather than low-effort, uneducated flaming.
(I’m still a Vim user, and haven’t tried Neovim, but I appreciate the experimentation. And honestly I learned a whole bunch of things about Vim internals from that talk, despite having used it for 15 years. It’s good to have new eyes on old code.)
I should also mention that I think the Elm project could save themselves a lot of hassle and drama by respecting several open source norms:
If you don’t want people to treat your project like a product, don’t engage in “marketing”!
There’s too much marketing on their home page, in a way that appears to elevate the authors/maintainers above the users: https://elm-lang.org/
It looks like a product. In contrast, peers don’t market to each other. Instead they tell people what they did in plain language. Sometimes that involves teaching and I’ve found that people respect that. But there is a lot of marketing that’s not teaching.
Similarly, put the limitations up front and center. WE BREAK CODE THAT WORKED. That is perfectly within your rights, as long as you clearly state that you are violating that norm. I wrote a couple of years ago that the FAQ at http://faq.elm-community.org/ needs an entry about this, since it appears to literally be the #1 question, but it’s not mentioned anywhere.
Don’t be hostile to forking. This is mentioned here: https://lukeplant.me.uk/blog/posts/why-im-leaving-elm/#forkability
The author of that post may have been unreasonable in other respects, but I do agree about forking.
So while I think the talk you linked is thoughtful (I watched it a while ago), I think the project is suffering from some self-inflicted pain…
I think the Elm project is trying to forge a new form of financing code production, midway between outright corporatization and open source “share cropping”.
Leaving aside their internal politics, I think something like that is worth exploring and maybe required for the long term health of the space.
Likewise!
Although this is older now and some sections may be out of date, the author also has an essay on ninja in The Performance of Open Source Applications which is well worth reading.
The tone is similarly down-to-earth and there’s a bit more in-depth technical content on how it was optimised. It made an impression on me during a recent re-read as one of the better essays in an overall excellent book.
This is true but sad as well; I’ve seen this so many times on all kinds of different free software projects.
But today I see that free software is not really about sharing between equals anymore; people instead think of themselves as customers and treat authors as if they can go complain to the manager.
Lots of gems in here; this quote stood out to me.
Some pieces of Ninja took struggle to get to and then are obvious in retrospect. I think this is true of much of math, that once you have distilled the ideas to their essence they seem obvious. The power comes from having the right way of thinking about the problem.
I never understood the advantages of ninja with respect to make. It seems to boil down to things like the makefiles not using tab characters with semantic value, the -j option being the default, or the syntax being simpler and slightly better. But apart from that, what are the essential improvements that would justify a change from make to ninja? If ninja is only slightly better than GNU make, I tend to prefer GNU make, which I already know, which is ubiquitous, and which avoids a new build dependency.
The article discusses how it’s really a low-level execution engine for build systems like CMake, Meson, and the Chrome build system (formerly gyp, now GN).
So it’s much simpler than Make, faster than Make, and overlapping with the “bottom half” of Make. This sentence is a good summary of the problems with Make:
Ninja’s closest relative is Make, which attempts to encompass all of this programmer-facing functionality (with globbing, variable expansions, substringing, functions, etc.) that resulted in a programming language that was too weak to express all the needed features (witness autotools) but still strong enough to let people write slow Makefiles. This is vaguely Greenspun’s tenth rule, which I strongly attempted to avoid in Ninja.
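For a sense of how little Ninja itself does, here is a minimal hand-written build.ninja (in practice a generator such as CMake or GN would emit this; the file names are illustrative):

cflags = -O2 -Wall

rule cc
  command = gcc $cflags -MMD -MF $out.d -c $in -o $out
  depfile = $out.d
  description = CC $out

rule link
  command = gcc $in -o $out

build hello.o: cc hello.c
build hello: link hello.o

There is no globbing, no substitution functions, no implicit rules: the generator computes all of that up front, and Ninja just executes the resulting graph as fast as it can.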
FWIW as he also mentions in the article, Ninja is for big build problems, not necessarily small ones. The Android platform build system used to be written in 250K lines of GNU Make, using the “GNU Make Standard Library” (a third-party library), which as far as I remember used a Lisp-like encoding of Peano numbers for arithmetic …
# ###########################################################################
# ARITHMETIC LIBRARY
# ###########################################################################
# Integers are represented by lists with the equivalent number of x's.
# For example the number 4 is x x x x.
# ----------------------------------------------------------------------------
# Function: int_decode
# Arguments: 1: A number of x's representation
# Returns: Returns the integer for human consumption that is represented
# by the string of x's
# ----------------------------------------------------------------------------
int_decode = $(__gmsl_tr1)$(if $1,$(if $(call seq,$(word 1,$1),x),$(words $1),$1),0)
# ----------------------------------------------------------------------------
# Function: int_encode
# Arguments: 1: A number in human-readable integer form
# Returns: Returns the integer encoded as a string of x's
# ----------------------------------------------------------------------------
__int_encode = $(if $1,$(if $(call seq,$(words $(wordlist 1,$1,$2)),$1),$(wordlist 1,$1,$2),$(call __int_encode,$1,$(if $2,$2 $2,x))))
__strip_leading_zero = $(if $1,$(if $(call seq,$(patsubst 0%,%,$1),$1),$1,$(call __strip_leading_zero,$(patsubst 0%,%,$1))),0)
int_encode = $(__gmsl_tr1)$(call __int_encode,$(call __strip_leading_zero,$1))
Source, please? I can’t wait to see what other awful things it does.
https://sourceforge.net/projects/gmsl/files/GNU%20Make%20Standard%20Library/v1.1.9/
Yup exactly, although the representation looks flat, it uses recursion to turn 4 into x x x x! The __int_encode function is recursive. It’s what you would do in Lisp if you didn’t have integers. You would make integers out of cons cells, and traverse them recursively.
So it’s more like literally Greenspun’s tenth rule, rather than “vaguely”!!!
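For the curious, the encoding is easy to poke at directly. A hypothetical tiny test makefile (it assumes the gmsl file is somewhere make can include it):

include gmsl

$(info $(call int_encode,4))                       # prints: x x x x
$(info $(call int_decode,$(call int_encode,4)))    # prints: 4

all: ;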
Yes, so I guess its main advantage is that it is really scalable. This is not a problem that I have ever experienced, my largest project having two hundred files that compiled in a few seconds, with the time spent by make itself being negligible. On the other hand, for such a small project you get to enjoy the ad-hoc GNU make features, like the implicit compilation .c -> .o, the usage of the CFLAGS and LDFLAGS variables, and so on. You can often write a makefile in three or four lines that compiles your project; I guess with ninja you would have to be much more verbose and explicit.
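For reference, that three-or-four-line style leans entirely on make’s built-in rules; a hypothetical single-file example:

# GNU make's built-in %: %.c rule does the compile and link in one step.
CFLAGS = -O2 -Wall
LDLIBS = -lm
hello: hello.c

With more than one source file you need one explicit link recipe, but the .c -> .o compilation still comes for free from the implicit rules.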
He mentions that the readme explicitly discourages people with small projects from using it.
I suspect it’s more that ninja could help you avoid having to add the whole disaster that is autotools to a make-based build rather than replacing make itself.
Sure; autotools is a complete disaster and a really sad thing (and the same can be said about cmake). For small projects with few, non-configurable dependencies, it is actually feasible to write a makefile that will seamlessly compile the same code on linux and macos. And, if you don’t care about windows users being able to compile it themselves, you can even cross-compile a windows binary from linux.
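One common way to do that is to keep the makefile generic and override the compiler on the command line (a hypothetical sketch; it assumes the mingw-w64 cross toolchain is installed, and the recipe line must start with a tab):

CC     = cc
EXE    =
CFLAGS = -O2 -Wall

hello$(EXE): hello.c
	$(CC) $(CFLAGS) -o $@ $<

# native build:                    make
# windows cross-build from linux:  make CC=x86_64-w64-mingw32-gcc EXE=.exe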
You don’t (or better, shouldn’t!) write Ninja build descriptions by hand. The whole idea is that something like CMake generates what Ninja actually parses. I’ve written maybe 3 ninja backends by now.
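For concreteness, the usual round trip with CMake looks roughly like this (assuming a project that already has a CMakeLists.txt):

cmake -G Ninja -S . -B build    # emit build/build.ninja instead of Makefiles
ninja -C build                  # let ninja execute the generated graph

The generated build.ninja is the machine-written file Ninja actually parses; you rarely read it, let alone edit it.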
we use ninja in pytype, where we need to create a dependency tree of a project, and then process the files leaves-upwards with each node depending on the output of processing its children as inputs. this was originally done within pytype by traversing the tree a node at a time; when we wanted to parallelise it we decided to instead generate a ninja file and have it invoke a process on each file, figuring out what could be done in parallel.
we could doubtless have done the same thing in make with a bit of time and trouble, but ninja’s design decisions of separating the action graph out cleanly and of having the build files be easy to machine generate made the process painless.
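As an illustration of what such a machine-generated file can look like (a made-up fragment, not pytype’s actual output), each node lists the processed outputs of its children as inputs, and ninja derives the parallelism from the graph:

rule analyze
  command = ./process_one.py $in -o $out
  description = ANALYZE $out

# leaves first; foo waits for the outputs of the modules it imports
build out/utils.pyi: analyze src/utils.py
build out/db.pyi: analyze src/db.py
build out/foo.pyi: analyze src/foo.py | out/utils.pyi out/db.pyi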
It’s faster. I am (often) on Windows, where the difference can feel substantial. The Meson site has some performance comparisons and mentions: “On desktop machines Ninja based build systems are 10-20% faster than Make based ones”.
I use it in all my recent projects because it can parse cl.exe /showincludes.
But generally, like andyc already said, it’s just a really good implementation of make.
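That /showIncludes support works through Ninja’s header-dependency feature: a rule can declare that its compiler prints MSVC-style include notes, and Ninja parses them itself instead of reading a depfile. Roughly (flags abbreviated, not a complete rule):

rule cc
  command = cl /nologo /showIncludes /c $in /Fo$out
  deps = msvc
  description = CL $out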