1. 6
  1.  

  2. 9

    I believe the premise “free software is written by solo devs” is wrong. Free software is written by solo devs and driveby contributors. As driveby contributors do not have any knowledge of all the invariants the software has to maintain, it makes sense to use a programming language with a very strong ability to specify and check invariants.

    In conclusion, Ada+Spark, Rust and Coq are the ideal programming languages to write free software. But hold on, isn’t it a bit weird that I came to this conclusion and that these languages also happen to be among the ones I like/admire the most, just like the author likes Raku and came to the conclusion that Raku was the ideal programming language for writing Free Software? Maybe there’s no actual ideal programming language for writing free software and we’re just trying to rationalize our tastes? ;)

    1. 2

      (Disclaimer: video author)

      But hold on, isn’t it a bit weird that I came to this conclusion and that these languages also happen to be among the ones I like/admire the most, just like the author likes Raku and came to the conclusion that Raku was the ideal programming language for writing Free Software?

      That’s an important criticism – motivated reasoning is certainly a real problem, and one that’s difficult to guard against.

      If it helps any, I can tell you that I started writing Raku because I viewed it as a really good fit for solo projects; that is, I identified those advantages from the outside, before I was invested in the language. (After a period where I was mostly writing Rust, I wanted to add a dynamic language with a focus on developer productivity, especially for solo work; other top contenders included Racket, Common Lisp, Clojure, Dyalog APL, and Elixir.)

      I believe the premise “free software is written by solo devs” is wrong. Free software is written by solo devs and driveby contributors.

      I mean, I don’t disagree – I’m a frequent driveby contributor and, as I mentioned in the talk, I submitted a pull request in the process of writing the presentation. I submitted that PR because Impress.js is written in JavaScript, which I already know; if I’d been generating my presentation using Pandoc, I probably would not have sent in any PRs, since I don’t know Haskell. So it’s clearly true that a project can get more driveby contributions if it’s written in a well known language.

      My point, however, is that this isn’t worth optimizing for. Again, look at Impress.js. It represents something of a best-case scenario for driveby contributors: it’s written in the most widely known programming language, it doesn’t use a framework or other toolset that some programmers won’t know, and it’s extremely well-documented. It even has a plugin system, to make writing third-party code easier. But even with all those advantages, the two maintainers are responsible for > 50% of all commits. If we exclude commits that add three or fewer lines (mostly commits that don’t depend on the programming language, such as fixing typos in the docs or adding a link to another example project), then the maintainers are responsible for > 70% of the commits. I haven’t done the math to weight commits by size, but I imagine that would put the maintainers even further ahead. And, again, this is a best-case scenario for driveby commits.
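
      For anyone curious, numbers like these can be pulled straight from the repo history. Here’s a rough sketch in Raku (my own script, not something from the talk; the three-line threshold and per-author counting are simplifications), run from inside a clone of the project:

      ```raku
      # Count commits per author from `git log --numstat`, skipping commits
      # that add three or fewer lines. A sketch, not a rigorous analysis:
      # merge commits and renames are not handled specially.
      my %commits;
      my ($author, $added) = ('', 0);
      sub flush-commit { %commits{$author}++ if $author && $added > 3 }

      for qx{git log --numstat --format='@@%an'}.lines {
          if .starts-with('@@') { flush-commit; $author = .substr(2); $added = 0 }
          elsif /^ (\d+) \t/    { $added += +$0 }
      }
      flush-commit;
      .say for %commits.sort(-*.value);   # authors, most commits first
      ```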

      My claim is that open source projects would be better off prioritizing the 70 (80? 90?) percent of the code that comes from the project’s main developers over the 30/20/10 percent from driveby contributors.

      1. 1

        I think the conclusion was wrong too. The bus factor was informative though.

        In conclusion, Ada+Spark, Rust and Coq are the ideal programming languages to write free software.

        Why not? If the software proves valuable, people will learn Ada to maintain it. If it proves useful only to you, you are still in a better place for having written it in Ada. Game programmers work with assembly to add extensions to their favorite games released in the 90s.

        Unlike corporate drones you can optimize free software for personal happiness.

        I think the best way to increase free software usage would be to provide an extension model like WordPress’s, rather than a programming language that encourages dialects. Plugins can also be written in any language if you use REST APIs.

      2. 2

        I can’t imagine what argument would convince me that the license of your artefact should in any way inform the choice of implementation language, but I’d love to hear anybody try.

        1. 6

          The basic idea is:

          • Most FOSS projects are created by one or a small number of people and not large teams
          • “Corporate” languages (Golang, Rust, TypeScript) are backed by large organizations
          • Large organizations tend to have high turnover and large code bases that can’t fit into any one person’s head, so they optimize for the following major priorities:
            • discouragement of “dialects” and encouragement of a consistent or standard way of expression
            • explicit and verbose code
            • “quick to pick up” languages whose focus is less on mastery than consistency of expression
          • One- or few-person teams tend not to have high turnover and work on projects without major funding, which favors the priorities of:
            • languages that have high expressiveness and reward mastery
            • communities that are “friendly”
            • languages that are “fun”

          To me, this is kind of talking about corporate vs. “DIY” (or “artisanal/amateur/journeyperson”) development, and it so happens that most FOSS projects are written by a sole developer or a small number of people. As such, FOSS projects will presumably favor languages that allow ‘shortcuts’, high levels of expressiveness, perhaps a DSL. Sole developers also won’t be as willing to put up with a toxic community.

          In a corporate context, consistency is much more important, as there might be high turnover of developers and a large code base that no one person (or small group) knows in full. Consistency and standardization are favored because companies want programmers to be as fungible as possible.

          You can see this in Golang, especially considering one of its explicit intended goals was to serve Google’s needs. The fact that it can be used outside of that context is great, but the goal of the language was in service to Google, just as Rust was in service to Mozilla and its browser development effort. The same could also be said of Java, as it was marketed as a “business-friendly” language for basically the reasons listed above.

          The speaker goes on to talk about Raku, which I guess is what Perl6 turned into (?), as being one of the fun languages with a friendly community.

          So I think it’s a little reversed. It’s more that most free software is written by a single person or a small number of people, and this workflow has a selection bias toward a particular type of language, or a language that favors a particular set of features while discouraging others.

          1. 2

            Yikes, I wouldn’t touch perl with a barge pole :)

            I understand the idea behind small teams better being able to handle a codebase filled with dynamic magic and various other “spooky action at a distance”, but the problem isn’t just how much cognitive load you’re wasting getting up to speed, it’s the defects you build because so much is hidden and things you thought would be orthogonal end up touching at the edges.

            1. 4

              I hear what you’re saying, and I think there’s a lot of validity to it, but there are a lot of subtleties and shades of gray that you’re glossing over with that argument.

              So here’s a weak attempt at a counter-argument: all that dynamic magic or other trickery that might end up messing things up for beginner/intermediate programmers, or even programmers that just aren’t familiar with the trickery/context/optimizations, is not such a big deal for more experienced programmers, especially ones that would invest enough time to be the primary maintainer.

              It’s not that one method is inherently better than the other, it’s that the context of how the code is created selects for a particular style of development, depending on what resources you’re optimizing for. Large company with high turnover and paid employees: “boring” languages that don’t leave a lot of room for rock star programmers. Individual or small team: choose a language that gives space for an individual programmer’s knowledge to shine.

              I saw a talk (online) by Jonathan Blow on “How to program independent games” and, to me at least, I see a lot of similarities. The tactics used as a single developer vs. a developer in a team environment are different and sometimes go against orthodoxy.

              There’s no silver bullet and one method is not going to be the clear winner in all contexts, on an individual or corporate level, but, to me at least, the idea that development strategies change depending on the project context (corporate vs. individual) is enlightening and helps explain some of the friction I’ve encountered at some of the jobs I’ve worked at.

              1. 1

                Right, but if you’re the only person working on an OSS project, you’re not doing it full time, and you’re also (probably) working on a wide variety of other code 9–5 to pay the rent. That basically means that whenever you do get some time for it, you’re coming back to your OSS code without having it all fresh in working memory.

              2. 4

                Raku really isn’t Perl. I don’t know enough Perl to have an opinion on it one way or the other (though its sub-millisecond startup time – about an order of magnitude faster than Python and about the same as Bash – gives it a pretty clear niche in some sysadmin settings). But they are definitely different languages.

                The analogy I’d pick is that their relationship is a lot like the relationship between Go and C. Go (Raku) was designed by people who were deeply familiar with C (Perl) and think that it got a lot right on a philosophical level. At the same time, the designers also wanted to fix certain (in their view) flaws with the other language and to create a language aimed at a somewhat different use case. The result is a language that shares the “spirit” of C (Perl), but that makes very different tradeoffs and thus is attractive to a different set of users and is not at all a replacement for the more established language.

                1. 1

                  TBH I know nothing about Raku. I vaguely remember the next version of Perl being right around the corner for years, but by then I’d had enough of line noise masquerading as source code (which all Perl written to be “clever” definitely was at the time) so I wasn’t paying attention.

                2. 4

                  There’s an argument that you might not introduce those defects if you’re a single person working on a project because the important bits stay in your head and you remember what all the implicit invariants are.

                  I personally buy a weak form of that argument: things developed by individual developers do tend to have webs of plans and invariants in those individuals’ heads. AFAIK there’s some reasonable empirical research indicating that having software be modified by people other than the original authors comes with a higher rate of defects. (Lots of caveats on that: e.g. perhaps higher quality software that has its design and invariants documented well does not suffer from this.)

                  I’m told that hardware / FPGA designers tend to be much more territorial than software people about having other people touch their code, because of the difficulty of re-understanding code after someone else has edited it: hardware contains a greater density of tricky invariants than software.

                  1. 3

                    (disclaimer: video author)

                    [The larger problem is] the defects you build because so much is hidden and things you thought would be orthogonal end up touching at the edges.

                    I 100% agree with this; spooky action at a distance is bad, and increases cognitive load no matter what. However, I think a language can be both dynamic/high context and prevent hidden interaction between supposedly orthogonal points.

                    Here’s one example: Raku lets you define custom operators. This can make reading code harder for a newcomer (you might not know what the symbols even mean), but is very expressive for someone spending more time in the codebase. Crucially, however, these new operator definitions are lexically scoped, so there’s no chance that someone will use them in a different module or run into issues of broken promises of orthogonality. And, generalizing a bit, Raku takes lexical scope very seriously, which helps prevent many of the sorts of hidden issues you’re discussing.
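
                    To make that concrete, here’s a tiny self-contained sketch (my example, not one from the talk):

                    ```raku
                    # A custom postfix operator for factorial. The declaration is
                    # lexically scoped: it exists only where it's declared or imported,
                    # so it can't leak into unrelated modules.
                    sub postfix:<!>(Int $n) { [*] 1..$n }
                    say 5!;    # 120
                    ```

                    A newcomer does have to look up what the ! means, but the definition is always findable in the enclosing lexical scope rather than hiding somewhere global.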

                    (Some of this will also depend on your use case/performance requirements. Consider upper-casing a string. In Raku, this is done like 'foo'.uc, which has the signature (available via introspection) of Str:D --> Str:D (that is, it takes a single definite string and returns a new definite string). For my use case, this doesn’t have any spooky action. But the Zig docs talk about this as an example of hidden control flow in a way that could have performance impacts for the type of code Zig is targeting.)
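
                    For the curious, here’s roughly what that introspection looks like (my sketch; the exact signature gist can vary between Rakudo releases):

                    ```raku
                    say 'foo'.uc;                       # FOO
                    say Str.^lookup('uc').signature;    # inspect the method's signature
                    ```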

                    1. 3

                      Yikes, I wouldn’t touch perl with a barge pole :)

                      I think that’s unnecessarily mean…

                      1. 1

                        Yikes, I wouldn’t touch perl with a barge pole :)

                        Arguably Perl isn’t Raku.

                    2. 1

                      Can you imagine Free Software being produced with MUMPS? Some languages are only viable in certain corporate environments.