Threads for gavinhoward

  1. 14

    What does evil mean in this case? To me, it means a EULA that:

    1. gives more advantage to the software creator/seller than the customer, when it should be equal, and/or
    2. absolves the creator/seller of responsibility.

    With that in mind, I think Mr. Mitchell is right in general.

    In practice, how many EULAs are not evil? I think Mr. Mitchell has probably seen a few, probably written a few. I think that may be because his work is mostly between businesses.

    But as an individual software consumer, I have never seen such a EULA. I’ve only seen EULAs that are “evil” by my definition above, at least in the software that I use.

    That is why I use Open Source software as much as possible: that extreme is better than the other, which is being locked down by EULAs that advantage the creator over me or absolve the creator of responsibility.

    This is in addition to a few of the other problems mentioned by comments on the orange site.

    So in theory, practice and theory are no different; in practice, they are.

    But it would be a good world if Mr. Mitchell were right in practice, as long as customers always had access to source code and the ability to modify it, and the creator/seller accepted responsibility for it.

    1. 7

      It’s also a rare case of arguing for a strawman. ;)

      The customer can fix bugs and make improvements. Better yet, while they pay, they can foist this off on the seller. What the customer could usually only hope for in the best case with open source—a timely response from a responsible person—they now get by right, under contract.

      The problem is that for all practical purposes such an EULA doesn’t exist. No software developer can be expected to do any work without an incentive to do that. If there’s a problem the vendor isn’t personally having, and leaving it unfixed does not lead to significant user-base shrinkage, they will not fix it, and who can blame them?

      EULAs where the vendor takes a financial hit for failing to fix bugs, add features, or solve your problems kinda do exist, but they are out of reach even for average business users. There are also many ways to game those incentives by sticking to their letter without keeping their spirit, e.g. giving a reply within the SLA timeframe but not supplying any actually useful information in it. And even the most well-intentioned support providers should be wary of guaranteeing to solve the problem, because anyone can take you to court and claim you didn’t solve their problem. That’s not counting cases when a problem really can’t be solved.

      If people can share modified versions, there may be hope that someone with the same problem as you, but better programming skills, will make a patch. In a “use, but don’t distribute” model there’s no way for anyone to share a fix, so even if someone writes one, you will not benefit from it.

      P.S. For the record, I do make money from open-source software with a “pay for precompiled binaries and support” model, and we do have a EULA covering the binaries with proprietary and trademarked artwork embedded in them (exactly like Red Hat and Mozilla do). But we don’t give any guarantees of solving users’ problems, because that would be false advertising. We don’t game the SLA response-time guarantees that we give, because we’ve all had multiple proprietary vendors play that trick on us, and we hate it.

      1. 2

        The problem is that for all practical purposes such an EULA doesn’t exist. No software developer can be expected to do any work without an incentive to do that.

        Licenses like that get signed all the time.

        The vendor’s support commitment requires them to respond timely to requests. Where those requests identify bugs, the result is a fix, which the vendor’s maintenance commitment requires it to provide. If the vendor falls down on its support commitment, the customer gets credits against its fees, and possibly an out from the contract. The vendor’s incentivized to avoid that.

        Blank-check commitments for new features aren’t so common, for obvious reasons. I have seen many deals where particular roadmap points got included in the terms. But even without a hard commitment to develop new features, support request → roadmap item → minor release stories are very common. Features existing customers ask for can be features potential customers want.

        1. 3

          Maybe if you pay hundreds of thousands of dollars… I haven’t seen anything like that from proprietary vendors in the thousands or tens of thousands of dollars range.

          I have seen bugs in very expensive software go unfixed for months and years just because they weren’t affecting the majority of users, and leaving them unfixed wasn’t a real threat to the vendor. I did see many prompt fixes, mind you, but my point is that in the vast majority of cases there’s no guarantee that your bug will be fixed, no matter how critical it is to you.

          1. 3

            I have definitely seen good support SLAs in that range of ARR. And happy customers who never had to send a ticket!

            I hear you on languishing bugs in pricey software. It happens. Frustratingly, sometimes it’s just efficient.

            I have also seen side deals where customers ponied up extra to get a bug squashed or a feature added, even one arguably covered by their existing support-maintenance deal. The remedies under that deal weren’t enough to move the vendor, and the loss from the customer walking came in under the cost of the fix. Another day at the office.

            I think we’d probably be better off in a world with more source-available deals like the one I outlined in the blog post: fix it yourself if you want to, just send the patch back to the vendor. But often enough, the vendor’s still the one who can get the work done at the lowest cost, especially when the customer isn’t a “software company”. The vendor has the people who know the code.

      2. 4

        In practice, how many EULAs are not evil? I think Mr. Mitchell has probably seen a few, probably written a few. I think that may be because his work is mostly between businesses.

        But as an individual software consumer, I have never seen such a EULA. …

        I really appreciate this comment.

        I was 50-50 on the second part of the blog post, after the <hr>. The bit where I speculate on why this black-white view persists. I knew my speculation was informed, but speculation’s always a jump. I worried it would distract or detract more than it added. I’m glad I went with it now.

        The HN discussion already has two or three threads where I see someone with software deals experience butting heads with someone who only sees take-it-or-leave-it terms. Here you’ve hit the same insight with introspection, without all the noise and squabble. It’s nice to be here on lobste.rs ;-D

        The relevant bit of my post was here:

        [The black-and-white view of licensing] reinforced a stark, limited view of the industry—characteristic of young and commercially sheltered coders dealing entirely with dominant apps from megacorps, on one hand, and open source from hackers, on the other—as all relevant reality. There wasn’t any burgeoning gray area of independent software sales in this picture…. No experience of negotiation—engineering terms to circumstances—to speak of. No rallying call to develop those chops.

        When a vendor has you over a barrel—market power, exclusive deal—the terms they deign to give are more than likely bad news. Same with the software, for that matter. Microsoft doesn’t need to give us source code to keep Word in pole position. If a hardware vendor controls their tech, the dev kit can be lousy.

        As solo devs cruising the web for code, we have no leverage. The cost of just discussing terms with us would eat all our value as potential customers. So terms are take-it-or-leave-it.

        The reason I’m on about this to solo devs and indie firms is that I think we could get a lot more out of ourselves and each other if we accepted that deals engineering is an art we can get good at, one that could unlock a ton of latent potential. Some of this is business basics: many firms now offer multiple take-it-or-leave-it deals, at different “tiers”, with a features grid. As programmers, we have the skills to do that and so much more. And we have the skills to automate it—software configuration, legal terms, payment, the whole shebang.

      1. 13

        I’m the author of the article that Mr. Mitchell is responding to.

        tl;dr: I think Mr. Mitchell has good points, but I also feel like he did not address the parts of my post that actually discouraged me.

        I didn’t expect to have anyone respond with a blog post like this. That said, Mr. Mitchell has some good points.

        First, he is right that my original article is too absolutist. I will own that mistake. Please remember, however, that the post was written in the middle of a depressive episode, when I was feeling more discouraged than I had been in a long time. I wasn’t thinking completely straight.

        Second, he makes a good point about trying hard to serve people, but what he missed is that I had done that. In the types of software I want to write, I could see very well that closed source was not going to fly with my intended audience. @alexandria said (in this thread):

        People will only pay for closed source software if they can’t already acquire a ‘good enough’ alternative for free.

        As far as build systems and version control systems go (the top two projects I am working on), there are plenty of ‘good enough’ alternatives that are Open Source, so developers only want Open Source, for the most part. Of course, there are companies that will use closed source, if it is better, but individual developers, by and large, don’t, especially because they often don’t need the features that make the closed source ones better.

        The reason I am targeting individual developers is that they are the most likely to be “early adopters” of new technology. I was hoping that, after getting enough of them as users, they would convince their employers to start using my software.

        Third, and this follows from the previous point, Mr. Mitchell has himself said that, over time, software moves from closed source to Open Source, then to public domain. I had read that post of his before posting mine, and I understood what he was saying. I guess what I did not say well is that, for the software I was planning to write, the transition from closed source to Open Source has already happened, which means that going closed source would not work.

        Fourth, I don’t feel like Mr. Mitchell addressed the points from my original post that actually made me discouraged: the appropriation of Open Source by companies that fail to give back or outright steal (by violating FOSS licenses). Those things are the reason I was discouraged because, even if I do get individual programmers to use my software, and they then get their employers to use it, what will prevent those employers from just ripping me off and violating the license I put on my code?

        The fact that developers will use closed source in some cases, as Mr. Mitchell says, does nothing to address these problems.

        With that said, I don’t regret putting that post out and submitting it here and to the orange site, and that’s because my “call to action” at the bottom, asking people to contact me if they knew things that could encourage me, worked. A lot of people emailed me with encouragement, and that eventually brought me out of my funk.

        One person in particular, whom I won’t name in case he does not want to be named, wrote to me about an article he wrote a long time ago about how it’s possible to make a living selling software that is closed source only temporarily, before being open sourced after a certain amount of time. He helped me see why that works, and he also helped me figure out a method for doing so without violating my ethics.

        (My ethics include always providing source code to my users, like an architect should provide blueprints to the owner of the building they designed. What this person helped me realize is that even if it’s closed source, I can provide source code to my customers with a license that prevents them from redistributing it. Yes, this is an argument for copyright still applying to software.)

        That, along with Mr. Mitchell’s assertion that all software eventually moves toward the public domain, helped me form a plan.

        First, I’m going to get the software ready, of course. But when it’s ready, I’m going to release it under the terms of the two most radioactive licenses possible: the SSPL and the AGPL, and users will have to comply with the terms of both. This shouldn’t matter for individual developers.

        However, it will matter to companies, so next, I will do my best to make sure my users know that they can ask me to relicense more permissively once they are asking their employers to use my software. When they ask, I will.

        I will also develop closed source add-ons that companies can use, and these add-ons will be open sourced after a certain period.

        In essence, my software will follow the transition from closed source to copyleft to permissive licensing that Mr. Mitchell described; it’s just that instead of more permissively-licensed competitors rising up and out-competing me, I’ll relicense my stuff to prevent the need for competitors to do so.

        That raises the question of why I wouldn’t start permissively licensed in the first place, and the answer is to make my software radioactive enough that at least some companies won’t touch it at the beginning. It’s a numbers game: I don’t have to prevent all companies from ripping me off, just enough of them. And once my software grows more important (if that ever happens), I suspect that companies would be less likely to rip me off, even if the software were permissively licensed.

        It’s funny, but Mr. Mitchell did help me, in a way, with his post about the lifecycle of software licenses.

        1. 8

          Forgive me for catching up with you here! I’d made a note to send you an e-mail, properly, which I try to do whenever I blog a response to someone else’s post. But Saturday caught up with me. I got the post out, and didn’t look at my to-dos again until this morning.

          Note to self: Publish the post, then send the e-mail. Don’t put it off!

          I hear you on discouragement and depression. Man is that real, and I’m inspired by how honest and open you are about it. I wish I’d thought more about where you may have been mentally as I wrote, and done more to emphasize a helpful rather than corrective tone. If I’d caught you still in it, and come across too harsh—easy to read that way when you’re down, don’t I know it!—I could have done you wrong. I’m happy to read that my post found you standing firmer on your feet. I just got lucky there. Also that others stepped up with so much encouragement. A little faith in community restored.

          As for company and user misbehavior: oh yeah, that’s real. And I’m really fucking tired of it. And I’m probably doing a disservice by taking it as a given whenever I write. By focusing just on the jump from frustration to resignation, without honoring the frustration to begin with, my post falls short of a complete picture. Your notes there are very well taken.

          On licensing, I’d encourage you to consider a license choice that more clearly expresses your intentions. A mad-science hybrid of AGPL and SSPL will definitely freak people out. But if what you really want to say is “businesses need to talk to me about a deal”, you might find that better expressed through a noncommercial license like PolyForm Noncommercial or Prosperity, which also allows free trials. More experimentally, you might find Big Time interesting.

          Whichever way you go, good luck!

          PS: No need for “Mr. Mitchell”, unless you prefer it that way. Kyle, or kemitchell, does me fine. And kyle@kemitchell.com anytime.

          1. 2

            First, I’m going to get the software ready, of course. But when it’s ready, I’m going to release it under the terms of the two most radioactive licenses possible: the SSPL and the AGPL, and users will have to comply with the terms of both. This shouldn’t matter for individual developers.

            However, it will matter to companies, so next, I will do my best to make sure my users know that they can ask me to relicense more permissively once they are asking their employers to use my software. When they ask, I will.

            That depends a lot on the company. I’d have to check our policy but I believe it means that we could use it, we could maintain our internal fork, but we’d need to jump through approval hoops to contribute anything back. Licenses like the AGPL are problematic if we want to incorporate them into a product or ship them to customers (or, in general, do anything that involves the code leaving the company) but they are fine for use.

            The critical thing for a company (which I’d assume @kemitchell knows, since he is a corporate lawyer and this is literally his day job) is minimising risk. The license is one aspect of this. Dual licensing doesn’t really help here because it lets you choose between risks (the risk the supplier will go away for the proprietary license versus the risks associated with a less permissive license). If your dual license allows people to pay for a more permissive license (e.g. MIT) then you now have a risk that someone will distribute the version that they receive.

            For a single developer, the largest risk that a company is likely to worry about is the bus factor. If you get hit by a bus, what happens to the software? That’s a massive risk for anything that’s going to be a core part of the workflow. There’s a big difference between buying a proprietary product from a big company and a proprietary product from some guy, especially if it’s a product with a lot of users and that is bringing in a lot of revenue.

            Open vs closed is one of the less important concerns within an overall risk discussion for most companies.

            1. 2

              That depends a lot on the company. I’d have to check our policy but I believe it means that we could use it, we could maintain our internal fork, but we’d need to jump through approval hoops to contribute anything back. Licenses like the AGPL are problematic if we want to incorporate them into a product or ship them to customers (or, in general, do anything that involves the code leaving the company) but they are fine for use.

              That actually sounds perfect, to be honest, including not giving back code. I’m interested in companies contributing back in general, but for my own purposes, I’d rather not incorporate code copyrighted by Microsoft into my repo.

              That said, I don’t really like the SSPL and will probably remove the requirement for it soonish after the code is published.

              The critical thing for a company (which I’d assume @kemitchell knows, since he is a corporate lawyer and this is literally his day job) is minimising risk. The license is one aspect of this.

              I think I understand the position companies have on risk, and I want to do my best to make risk minimization the real product I am selling.

              Dual licensing doesn’t really help here because it lets you choose between risks (the risk the supplier will go away for the proprietary license versus the risks associated with a less permissive license). If your dual license allows people to pay for a more permissive license (e.g. MIT) then you now have a risk that someone will distribute the version that they receive.

              Those risks are partially why I’m not going to dual license the core.

              For a single developer, the largest risk that a company is likely to worry about is the bus factor. If you get hit by a bus, what happens to the software? That’s a massive risk for anything that’s going to be a core part of the work flow.

              Yes, I agree, and it is a weakness of what I would like to do. But I do have some techniques for reducing the impact of the risk from the bus factor.

              First, I document my code heavily. You can see this with my bc especially. The development manual in bc is the largest file, by far, in that repo. But I didn’t stop there. I commented every piece of code heavily so that someone else could go in, follow what I was doing, and be able to understand it. This reduces the impact of the bus factor by making it so users can have the backup plan of fixing bugs themselves if I get hit by a bus, and that backup plan has a chance of working.

              Second, I create extensive test suites. Once again, bc is the example. The test suite is so extensive that I feel comfortable making massive experimental changes and just running the test suite (usually under Valgrind or ASan) to see if there was any regression. Should I get hit by a bus, the test suite then becomes a tool for anyone else wanting to make changes to do so without fear, just like me, which I believe reduces the impact of the bus factor.

              Third, companies can pay for the privilege of making the time factor a non-issue, and by “time factor,” I mean the possibility that I don’t have enough time or motivation to respond to their bug reports in a timely manner. But that’s the risk that they themselves have to mitigate; I can’t help with that.

              There’s a big difference between buying a proprietary product from a big company and a proprietary product from some guy, especially if it’s a product with a lot of users and that is bringing in a lot of revenue.

              I agree. In fact, it’s why I am doing all of the stuff I mentioned above. Doing those sorts of things brings a one-man project closer to a product from a big company. I think it’s why a project like Curl, which basically has a bus factor near 1, is so successful and widely used.

              Sorry, I said too much, but tl;dr: you are right about risk, I know you are right, and I’m doing my best to mitigate it.

              1. 3

                I’d also add that risk and perceived risk are both important. It sounds as if the risk is low but I’m not sure what to suggest for reducing the perceived risk. A company has to do a lot of analysis of your code to understand how difficult it would be for someone to take over but that’s probably more work than most companies would do. You might be able to do some equivalent of underwriting: have some other company like Red Hat or Canonical provide maintenance contracts that they subcontract to you.

                With curl, the reason the risk is low is that, to a first approximation, everybody depends on curl. This means that, if Daniel were hit by a bus, everyone would be equally screwed. If you are not the biggest company depending on curl, then you can depend on someone else leading the effort to pick up the maintenance costs.

          1. 18

            This article incorrectly states that Zig has “colored” async functions. In reality, Zig async functions do not suffer from function coloring.

            Yes, you can write virtually any software in Zig, but should you? My experience in maintaining high-level code in Rust and C99 says NO.

            Maybe gain some experience with Zig in order to draw this conclusion about Zig?

            1. 5

              Not sure if he changed the text, but the article mentions the async color problem in a way that could be read as applying generally. The article doesn’t link that to Zig explicitly, though, or did I miss it?

              It would be fair to mention how Zig solved it, as he does for Go.

              1. 9

                This response illustrates the number one reason I am not a fan of Zig: its proponents, like the proponents of Rust, are not entirely honest about it.

                In reality, Zig async functions do not suffer from function coloring.

                This is a lie. In fact, that article, while a great piece of persuasive writing, is also mostly a lie.

                It tells the truth in one question in the FAQ:

                Q: SO I DON’T EVEN HAVE TO THINK ABOUT NORMAL FUNCTIONS VS COROUTINES IN MY LIBRARY?

                No, occasionally you will have to. As an example, if you’re allowing your users to pass to your library function pointers at runtime, you will need to make sure to use the right calling convention based on whether the function is async or not. You normally don’t have to think about it because the compiler is able to do the work for you at compile-time, but that can’t happen for runtime-known values.

                In other words, Zig still suffers from the function coloring problem at runtime. If you do async in a static way, the compiler will be able to cheese the function coloring problem away. In essence, the compiler hides the function coloring problem from you when it can.

                But when you do it at runtime and the compiler can’t cheese it, you still have the function coloring problem.
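
                    The runtime case is easy to see by analogy in Python (a sketch of the general problem, not of Zig’s implementation): a caller handed a function reference at runtime has to branch on its color, which is exactly the check the compiler can no longer do for you.

                    ```python
                    import asyncio
                    import inspect

                    def plain(x):
                        return x + 1

                    async def colored(x):
                        await asyncio.sleep(0)
                        return x + 1

                    def call_any(fn, x):
                        # A caller that receives a function reference at runtime cannot
                        # treat both kinds uniformly: calling the async one returns a
                        # coroutine object rather than a result, so the function's
                        # "color" must be checked before the call.
                        if inspect.iscoroutinefunction(fn):
                            return asyncio.run(fn(x))
                        return fn(x)

                    print(call_any(plain, 1))    # 2
                    print(call_any(colored, 1))  # 2
                    ```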

                I think it is a good achievement to make the compiler able to hide it most of the time, but please be honest about it.

                1. 17

                  Calling this dishonest and a lie is an incredibly uncharitable interpretation of what is written. Even if you’re right that it’s technically incorrect, at worst it’s a simplification made in good faith to be able to talk about the problem, no more of a lie than teaching Newtonian mechanics as the laws of physics in middle school is a lie because of special relativity, or teaching special relativity in high school is a lie because of general relativity.


                  Also, I’m not familiar with Zig, but from your description I think you’re wrong to claim that functions are colored. Your refutation of that argument is that function pointers are colored, but functions are a distinct entity from function pointers, and one used much more frequently in most programming languages that have both concepts. Potentially I’m misunderstanding something here, though; there is definitely room for subtlety.

                  1. 11

                    Calling this dishonest and a lie is an incredibly uncharitable interpretation of what is written.

                    No, it’s not. The reason is that they know the truth, and yet they still claim that Zig functions are not colored. It is dishonest to do so.

                    It would be completely honest to claim that “the Zig compiler can make it appear that Zig functions are not colored.” That is entirely honest, and I doubt it would lose them any fans.

                    But to claim that Zig functions are not colored is a straight-up lie.

                    Even if you’re right that it’s technically incorrect,

                    I quoted Kristoff directly saying that Zig functions are colored. How could I be wrong?

                    at worst it’s a simplification made in good faith to be able to talk about the problem, no more of a lie than teaching Newtonian mechanics as the laws of physics in middle school is a lie because of special relativity, or teaching special relativity in high school is a lie because of general relativity.

                    There are simplifications that work, and there are simplifications that don’t.

                    Case in point: relativity. Are you ever, in your life, going to encounter a situation where relativity matters? Unless you’re working with rockets or GPS, probably not.

                    But how likely is it that you’re going to run into a situation where Zig’s compiler fails to hide the coloring of functions? Quite likely.

                    Here’s why: while Kristoff did warn library authors about the function coloring at runtime, I doubt many of them pay attention because of the repetition of “Zig functions are not colored” that you hear all of the time from Andrew and the rest. It’s so prevalent that even non-contributors who don’t understand the truth jump into comments here on lobste.rs and on the orange site to defend Zig whenever someone writes a post about async.

                    So by repeating the lie so much, Zig programmers are taught implicitly to ignore the truthful warning in Kristoff’s post.

                    Thus, libraries get written. They are written ignoring the function coloring problem because the library authors have been implicitly told to do so. Some of those libraries take function pointers for good reasons. Those libraries are buggy.

                    Then those libraries get used. The library users do not pay attention to the function coloring problem because they, too, have been implicitly told to ignore it.

                    And that’s how you get bugs.

                    It doesn’t even need to be libraries. In my bc, I use function pointers internally to select the correct operation. It’s in C, but if it had been in Zig, and I had used async, I would probably have been burned by it if I did not know that Zig functions are colored.

                    Also, I’m not familiar with Zig, but from your description I think you’re wrong to claim that functions are colored. Your refutation of that argument is that function pointers are colored, but functions are a distinct entity from function pointers, and one used much more frequently in most programming languages that have both concepts. Potentially I’m misunderstanding something here, though; there is definitely room for subtlety.

                    You are absolutely misunderstanding.

                    How can function pointers be colored? They are merely pointers to functions. They are data. Data is not colored; code is colored. Thus, function pointers (data that just points to functions) can’t be colored but functions (containers for code) can be.

                    If data could be colored, you would not be able to print the value of the pointer without jumping through hoops, but I bet if you did the Zig equivalent of printf("%p\n", function_pointer); it would work just fine.
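
                    That claim can be illustrated in Python, with function objects standing in for C function pointers (an analogy, not Zig or C code): the reference itself is ordinary data either way.

                    ```python
                    import asyncio

                    def plain():
                        return 1

                    async def colored():
                        return 1

                    # Both references are plain values: they can be stored in a list,
                    # compared, and printed, the rough equivalent of doing
                    # printf("%p\n", function_pointer) in C. Any "color" belongs to
                    # the function being pointed at, not to the pointer.
                    for fn in (plain, colored):
                        print(hex(id(fn)), fn.__name__)
                    ```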

                    So if there is coloring in Zig, and Kristoff’s post does admit there is, then it has to be functions that are colored, not function pointers.

                    In Kristoff’s post, there is this comment in some of the example code:

                    // Note how the function definition doesn't require any static
                    // `async` marking. The compiler can deduce when a function is
                    // async based on its usage of `await`.
                    

                    He says “when a function is async…” An async/non-async dichotomy means there is function coloring.

                    What the compiler does is automagically detect async functions (as Kristoff says) and insert the correct code to call them according to their color. That doesn’t mean the color is gone; it means that the compiler is hiding it from you.

                    For a language whose designer eschews operator overloading because it hides function calls, it feels disingenuous to me to hide how functions are being called.

                    All of this means that Zig functions are still colored. It’s just that, at compile time, it can hide that from you. At runtime, however, it can’t.

                    And that is why Zig functions are colored.

                    1. 7

                      I have a hard time following all the animosity in your replies. Maybe I’m just not used to having fans on the internet :^)

In my article, and whenever discussing function coloring, I, and I guess most people, define “function coloring” as the problem of having to mark functions as async and having to prepend their invocation with await, when you want to get their result. The famous article by Bob Nystrom, “What Color is Your Function?” also focuses entirely on the problem of syntactical interoperability between normal, non-async code and async, and how the second infects codebases by forcing every other function to be tagged async, which in turn forces awaits to be sprinkled around.

In my article I opened by mentioning aio-libs, which is a very clear-cut example of this problem: those people are forced to reinvent the wheel (i.e. reimplement existing packages) because the original codebases simply cannot be reasonably used in the context of an async application.

                      This is the problem that Zig solves. One library codebase that, with proper care, can run in both contexts and take advantage of parallelism when available. No async-std, no aio-libs, etc. This works because Zig changes the meaning and usage of async and await compared to all other programming languages (that use async/await).

                      You seem to be focused on the fact that by doing async you will introduce continuations in your program. Yes, you will. Nobody said you won’t. What you define as “cheesing” (lmao) is a practical tool that can save a lot of wasted effort. I guess you could say that levers and gears cheesed the need for more physical human labor, from that perspective.

Sure, syntax and the resulting computational model aren’t completely detached: if you do have continuations in your code, then you will need to think about how your application is going to behave. Duh, but the point is libraries. Go download OkRedis. Write an async application with it, then write a blocking application with it. You will be able to do both, while importing the same exact declarations from my library, and while also enjoying speedups in the async version, if you allowed for concurrent operations to happen in your code.

                      But how likely is it that you’re going to run into a situation where Zig’s compiler fails to hide the coloring of functions? Quite likely.
                      Thus, libraries get written. They are written ignoring the function coloring problem because the library authors have been implicitly told to do so. Some of those libraries take function pointers for good reasons. Those libraries are buggy.

                      No. Aside from the fact that you normally just pass function identifiers around, instead of pointers, function pointers have a type and that type also tells you (and the compiler) what the right calling convention is. On top of that, library authors are most absolutely not asked to ignore asyncness. In OkRedis I have a few spots where I explicitly change the behavior of the Redis client based on whether we’re in async mode or not.

                      The point, to stress it one last time, is that you don’t need to have two different library codebases that require duplicated effort, and that in the single codebase needed, you’re going to only have to make a few changes to account for asyncness. In fact, in OkRedis I only have one place where I needed to account for that: in the Client struct. Every other piece of code in the entire library behaves correctly without needing any change. Pretty neat, if you ask me.

                      1. 2

                        I have a hard time following all the animosity in your replies. Maybe I’m just not used to having fans on the internet :^)

The “animosity” (I was more defending myself vigorously) comes from Andrew swearing at me and accusing me, though he might have had a reason to.

In his post, he claimed I said he was maliciously lying, but I only said that he was lying. I separate unintentional lies from intentional lies, and I believe all of you are lying unintentionally. Once I realized he thought I meant it maliciously, I made sure to tell him so and to tell him what I would like to see.

In my article, and whenever discussing function coloring, I, and I guess most people, define “function coloring” as the problem of having to mark functions as async and having to prepend their invocation with await, when you want to get their result. The famous article by Bob Nystrom, “What Color is Your Function?” also focuses entirely on the problem of syntactical interoperability between normal, non-async code and async,

                        In Bob Nystrom’s post, this is how he defined function coloring:

                        The way you call a function depends on its color.

                        That’s it.

                        Most people associate color with async and await because that’s how JavaScript, the language from his post, does it. But that’s not how he defined it.

                        After playing with Zig’s function pointers, I can say with confidence that his definition, “The way you call a function depends on its color,” does apply to Zig.

                        and how the second infects codebases by forcing every other function to be tagged async, which in turn forces awaits to be sprinkled around.

                        This is what Zig does better. It limits the blast radius of async/await. But it’s still there. See the examples from my latest reply to Andrew. I had to mark a call site with @asyncCall, including making a frame. But then, I couldn’t call the blue() function because it still wasn’t async. So if I were to make it work, I would have to make blue() async. And I could do that while still making the program crash half the time.

(Side note: I don’t know how to write out the type of an async function. Changing blue() to async is not working with the [2]@TypeOf(blue) trick that I am using. It’s still giving me the same compile error.)

In my article I opened by mentioning aio-libs, which is a very clear-cut example of this problem: those people are forced to reinvent the wheel (i.e. reimplement existing packages) because the original codebases simply cannot be reasonably used in the context of an async application.

                        This is the problem that Zig solves. One library codebase that, with proper care, can run in both contexts and take advantage of parallelism when available. No async-std, no aio-libs, etc. This works because Zig changes the meaning and usage of async and await compared to all other programming languages (that use async/await).

                        This is not what you are telling people, however. You are telling them that Zig does not have function colors. Those two are orthogonal.

                        And I also doubt that Zig actually solves that problem. I do not know Zig, and it took me all of 30 minutes to 1) find a compiler bug and 2) find an example where you cannot run code in both contexts.

                        You seem to be focused on the fact that by doing async you will introduce continuations in your program. Yes, you will. Nobody said you won’t. What you define as “cheesing” (***) is a practical tool that can save a lot of wasted effort. I guess you could say that levers and gears cheesed the need for more physical human labor, from that perspective.

                        I have no idea what swear word you used there (I have a filter that literally turns swear words into three asterisks like you see there), but this is why I am not happy with Andrew. Now, I am not happy with you.

                        I used “cheesing” because while it is certainly a time saver, it’s still cheating. Yes, levers and gears cheese the application of force. That’s not a bad thing. Computers are supposed to be mental levers or “bicycles for the mind.” Cheesing is a good thing.

                        And yes, I am focused on introducing continuations into the program because there is a better way to introduce continuations and still get concurrency.

                        In fact, I am going to write a blog post about that better way. It’s called structured concurrency, and it introduces continuations by using closures to push data down the stack.

Sure, syntax and the resulting computational model aren’t completely detached: if you do have continuations in your code, then you will need to think about how your application is going to behave. Duh, but the point is libraries. Go download OkRedis. Write an async application with it, then write a blocking application with it. You will be able to do both, while importing the same exact declarations from my library, and while also enjoying speedups in the async version, if you allowed for concurrent operations to happen in your code.

                        Where’s the catch? There’s always a catch. Please tell me the catch.

                        In fact, this whole thing is about me asking you, Andrew, and the others to be honest about what catches there are in Zig’s async story.

                        Likewise, I’m going to have to be honest about what catches there are to structured concurrency, and you can hold me to that when the blog post comes out.

                        No. Aside from the fact that you normally just pass function identifiers around, instead of pointers, function pointers have a type and that type also tells you (and the compiler) what the right calling convention is.

                        That is just an admission that functions are colored, if they have different types.

                        On top of that, library authors are most absolutely not asked to ignore asyncness. In OkRedis I have a few spots where I explicitly change the behavior of the Redis client based on whether we’re in async mode or not.

                        They are not explicitly asked. I said “implicitly” for a reason. “It’s not what programming languages do, it’s what they [and their communities] shepherd you to.” By telling everyone that Zig does not have function colors, you are training them to not think about it, even the library authors. As such, you then have to find those library authors, tell them to think about it, and explain why. It would save you and Andrew time if you just were upfront about what Zig does and does not do. And you would have, on average, better libraries.

                        The point, to stress it one last time, is that you don’t need to have two different library codebases that require duplicated effort, and that in the single codebase needed, you’re going to only have to make a few changes to account for asyncness. In fact, in OkRedis I only have one place where I needed to account for that: in the Client struct. Every other piece of code in the entire library behaves correctly without needing any change. Pretty neat, if you ask me.

                        That is neat. I agree. I just want Zig users to understand that, not be blissfully unaware of it.

                        1. 1

                          The “animosity” (I was more defending myself vigorously) comes from Andrew swearing at me and accusing me, which he might have had a reason.

                          You called me a liar in the first comment you wrote.

                          Where’s the catch? There’s always a catch. Please tell me the catch.

                          Since I’m such a liar, why don’t you write some code and show me, and everyone else, where the catch is.

                          1. 1

                            Since I’m such a liar, why don’t you write some code and show me, and everyone else, where the catch is.

Well, I don’t need to write code, but I can use your own words. You said that “Every suspend needs to be matched by a corresponding resume” or there is undefined behavior. When asked if that could be a compiler warning, you said, “That’s unfortunately impossible, as far as I know.”

                            That’s the catch.

                            1. 2

                              Why would you even use suspend and resume in a normal application? Those are low level primitives. I didn’t use either in any part of my blog post, and in fact you won’t find them inside OkRedis either. Unless you’re writing an event loop and wiring it to epoll or io_uring, you only need async and await.

                              This is not a philosophical debate: talk is cheap, as they say, so show me the code. I showed you mine, it’s OkRedis.

                              1. 1

                                Why would you even use suspend and resume in a normal application? Those are low level primitives.

                                Then why are they the first primitives you introduce to new users in the Zig documentation? They should have been last, with a clear warning about their caveats, if you even have them in the main documentation at all.

                                This is not a philosophical debate: talk is cheap, as they say, so show me the code. I showed you mine, it’s OkRedis.

                                I’m not going to download OkRedis or write code with it. I only learned enough Zig to make my examples to Andrew compile, and I have begun to not like Zig at all. It’s confusing and a mess, in my opinion.

                                But if you think that the examples I gave Andrew are not good enough, I don’t know what to tell you. I guess we’ll see if they are good enough for the people that read my blog post on it.

                                But I do have another question: people around Zig have said that its async story does not require an event loop, but none have explained why. Can you explain why?

                                1. 3

                                  Then why are they the first primitives you introduce to new users in the Zig documentation? They should have been last, with a clear warning about their caveats, if you even have them in the main documentation at all.

                                  They’re the basic building block used to manipulate async frames (Zig’s continuations). First you complained that my blog post didn’t talk about how async frames work, and that I meant to deceive people by not talking about it, then you read the language reference and say it should not even mention the language features that implement async frames.

With your attitude in this entire discussion, you put yourself in a position where you have an incentive to not understand things, even well established computer science concepts such as continuations. If we talk at a high level, it’s a lie; if we get into the details, it’s confusing (and at this point we know what you mean to say: designed to be confusing). I can’t help you once you go there.

                                  I’m looking forward to reading your blog post, although in all frankness you should consider doing some introspection before diving into it.

                                  1. 1

                                    They’re the basic building block used to manipulate async frames (Zig’s continuations). First you complained that my blog post didn’t talk about how async frames work, and that I meant to deceive people by not talking about it, then you read the language reference and say it should not even mention the language features that implement async frames.

                                    That’s the language reference? I thought it was the getting started documentation. Those details are not good to put in documentation for getting started, but I agree that they are good for a language reference. I would still put them last, though.

                                    With your attitude in this entire discussion, you put yourself in a position where you have an incentive to not understand things, even well established computer science concepts such as continuations.

                                    That’s a little ad hominem. I can understand continuations and not understand how they are used in Zig because the language reference is confusing. And yes, it is confusing.

If we talk at a high level, it’s a lie; if we get into the details, it’s confusing

                                    It turns out that the problem is in your documentation and in your blog post. You can talk about it at a high level as long as your language about it is accurate. You can talk about the low level details once the high level subtleties are clarified.

                                    (and at this point we know what you mean to say: designed to be confusing). I can’t help you once you go there.

                                    I do not believe Zig was designed to be confusing, but after using it, I can safely say that the language design was not well done to prevent such confusion.

As an example, and as far as I understand at the moment, the way Zig “gets around” the function colors problem is to reuse the async and await keywords slightly differently from other languages and to use suspend to actually make a function async. So in typical code, async and await do not have the function coloring problem. Which is great and all, but the subtleties of using them are usually lost on programmers coming from other languages.

                                    When I first heard about Zig, by the way, I was excited about it. This was back in 2018, I think, during the part of its evolution where it had comptime but not much more complexity above C. I thought comptime was great (that opinion has changed, but that’s a different story), and that the language looked promising.

                                    Fast forward to today: Zig is immensely more complex than it was back then, and I don’t see what that complexity has bought.

                                    That’s not a problem in and of itself, but complexity does make things harder, which means the documentation should be clearer and more precise. And the marketing should be the same.

                                    My beef with Zig boils down to those things not happening.

                                    Well, okay, I do have another beef with Zig: it sets the wrong tone. Programming languages, once used, set the tone for the industry, and I think Zig sets the wrong tone. So does Rust for that matter. But I can talk about that more in my blog post.

                                    I’m looking forward to reading your blog post, although in all frankness you should consider doing some introspection before diving into it.

                                    I have done introspection. I’ve learned where the function coloring problem actually is in Zig, and I’ve adopted new language to not come off in the wrong way. And I’ll do that in my blog post.

                2. 3

For me, the coloring problem describes both the static and runtime semantics. Does Zig handle the case where a function called with async enters some random syscall, or grabs a mutex that blocks for a long time and isn’t explicitly handled by whatever the runtime system is? Or does that end up blocking the execution of other async tasks?

The reason the runtime semantics matter to me when it comes to concurrency is that if you can block threads, you implicitly always have a bounded semaphore (your thread pool) that you have to think about at all times, or your theoretically correct concurrency algorithm can actually deadlock. That detail is unfortunately leaked.

                  1. 6

If you grab a standard library mutex in evented I/O mode, it interacts with the event loop, suspending the async function rather than blocking in e.g. a futex() syscall. The same code works in both contexts:

                    mutex.lock();
                    defer mutex.unlock();
                    

                    There are no function colors here; it will do the correct thing in evented I/O mode and blocking I/O mode. The person who authored the zig package using a mutex does not have to be aware of the intent of the application code.

This is what the Zig language supports. Let me check the status of this feature in the standard library… looks like it’s implemented for file system reads/writes but it’s still todo for mutexes, sleep, and other kinds of I/O. This is all still quite experimental. If you’re looking for a reason to not use Zig, it’s that: not being stable yet. But you can’t say that Zig has the same async function coloring problem as other languages, since it’s doing something radically different.

                    1. 4

                      Thanks for the explanation and standard library status information.

                      I think the ability to make a function async at call time rather than at definition time is the best idea in Go’s concurrency design, and so, bringing something like that to a language with a much smaller runtime and no garbage collector is exciting. I look forward to seeing how this, and all of the other interesting ideas in Zig, comes together.

                      (p.s. thanks so much for zig cc)

                1. 4

                  @pmeunier, I have questions. I’m interested in patch theory and patch-based version control.

                  However, I just can’t understand many things about it.

                  Here’s what I understand:

                  • The idea of using patches instead of snapshots.
                  • Patch commutation and how that helps with better merging.
                  • Most of being able to figure out if a patch depends on another patch.
                  • Pijul’s pristine can store conflicts.

                  What I don’t understand:

                  • Darcs merge algorithm.
                  • Why it goes exponential and when.
                  • Why storing conflicts in the pristine avoids the problem.
                  • Is Pijul’s pristine just a cache that can also store conflicts?

                  If you can explain those things to someone who struggles to read academic material, that would be great, although I do know that your work has been stolen before, so if you don’t want to explain, I understand.

                  1. 3

I don’t know exactly how Darcs works, but it doesn’t have a data structure independent of the patches: the patches are applied to a plain-text version of the repository. In the absence of conflicts, every patch applied needs to be checked for commutation against all the patches since the last tag, so applying n patches cannot be faster than O(n²).

When there are conflicts, I’m not sure anyone knew why it took exponential time, but it’s apparently fixed now (still quadratic, whereas Pijul is logarithmic).

                    Pijul’s pristine is not “just a cache”, it’s a CRDT. You can think of it as a cache if you want.

                  1. 12

                    I have to say good for DHH for managing to make something out of Open Source. But I agree with @kline: just because DHH reaped rewards, while not intending to, is no argument for others to follow in his footsteps.

I make the argument why in The Social Contract of Open Source, but essentially, Open Source is draining two scarce resources: maintainers’ time and energy. If companies do not want those scarce resources to dry up, they should help replenish them with another resource: money. And since time is money, giving money to maintainers may actually give them more time.

                    Companies need to learn that they need to take care of FOSS in order for FOSS to take care of them.

                    1. 12

                      I don’t necessarily disagree with anything you said, but it’s weirdly slanted toward companies. You can just make stuff for yourself, and like minded people, and if companies contact you, simply ignore them.

To me the problem is that once you take money from companies, you are consciously or unconsciously beholden to their interests. I’d rather have that contract be explicit, in the form of a paycheck.

                      1. 2

                        I guess I could have made that clearer, that I think FOSS maintainers have the right to enforce the social contract in any way they want. That does include ignoring companies if that is what is desired. And that is the desire in many cases. I think I slanted it toward companies because they get upset when FOSS maintainers do what the maintainers want instead of what the companies want.

                        I think that however FOSS maintainers enforce the social contract is fine, including taking a paycheck. For me, personally, I suck at getting hired, so getting companies to pay me as a contractor would probably be easier.

                      2. 3

Money doesn’t get me any more time, though. Unless maybe it’s enough money for me to quit any other jobs I have, and I happen to want to quit those jobs.

                        1. 1

                          Hard agree. I’d say there are two issues:

                          1. Most OSS projects do not make it easy or obvious how to support them financially.
                          2. Most OSS projects do not require financial support in their license.

                          2 kinda solves 1. Put, in plain English, how and when you expect the company to pay.

                          Using the MIT or BSD license while also asking businesses to pay is a recipe for disappointment.

                        1. 4

                          I just want to address one line in the “Dependency Risk and Funding” post:

                          Daniel Stenberg of curl doesn’t wield that power (and probably also doesn’t want to).

                          Knowing what I know about Daniel, he probably does not want that power, as the author says.

                          However, he absolutely has that power.

                          Daniel could hide deliberate vulnerabilities in Curl that would allow him to take control of every machine running it. He could also hide code that would destroy those machines. In fact, he could hide code in Curl to delete itself and whatever code is using it, as well as mirrors of it, thus effectively wiping Curl off of the face of the Earth, even more so than what Marak did.

                          Just because people mirror Daniel’s code does not mean he doesn’t have the power to do serious damage.

                          1. 4

                            However, he absolutely has that power.

                            I think you missed the point. Let me explain.

                            I have commit and release power over a very popular library (Harfbuzz) that goes into millions of machines too. I also have distro packager signing keys for more than one Linux distro including Arch Linux. The issue here is not commit or even release power, the issue is visibility. I know full well that every freaking character I commit gets scrutinized by several folks with even more programming chops than myself. Even if I turned evil and wanted to hijack something I would be called up short and tarred and feathered so fast I’d never recover.

                            Daniel is in a similar boat. Not only is he a known entity but the code he writes is directly scrutinized by others and he would have to be very devious indeed over a long haul to get something really dangerous past all the watchers.

The NPM and other similar ecosystems with deep dependency trees (where most people writing and releasing apps don’t even know where most of the code is coming from at compile time) are different. It is ultimately quite easy to write and maintain something trivial but “useful” and then hijack high-profile projects with a dependency attack, in a way that is not easy for actual maintainers of high-profile projects to do directly on their own projects.

                            I believe that’s what the article was referring to when it said Daniel doesn’t have the same power. He would have to work a lot harder to get even something trivial through compared to how a lone actor deep in a Node dependency tree could so easily affect so many projects.

                          1. 3

                            I have gone a similar route the past years with HexaPDF. My goal was to get an adequate side income next to my 40h job, so it was clear for me that I wouldn’t sell support as main income because that would take too much time away from the main development.

                            The PDF library is dual-licensed AGPL and a commercial license. The reason for the license choice was mainly that I wanted to provide a command line tool for manipulating PDFs. So everyone can use the AGPL version without much thinking about the license terms. And companies will choose the commercial license. And this works. I’m not sure how dual-licensing would work in your case, e.g. with a build system.

                            HexaPDF fills a niche where no other similar product existed/exists. And it still took a long time to get somewhere business-wise. I guess that mainly comes down to me not doing enough marketing and sales, and having a product that is not needed by that many companies.

                            I started the company in 2018 and now, 3.5 years later, I have about 25 paying customers. With about double the number I will have the side income I initially targeted. We will see how it goes :)

If you want to have 2 times the typical developer salary, I would do market research and see how many companies would benefit from your software. If your product is better, then companies will happily pay.

                            1. 1

                              I’m not sure how dual-licensing would work in your case, e.g. with a build system.

                              It’s going to be distributable as a library, but it’s also going to be a distributed build system, like Bazel.

If you want to have 2 times the typical developer salary, I would do market research and see how many companies would benefit from your software. If your product is better, then companies will happily pay.

                              I hope this is true, though I suspect that I also have to prove to the companies that it’s better. And not just better, but far better (because of inertia). I think that may be the hardest part.

                              1. 2

                                I hope this is true, though I suspect that I also have to prove to the companies that it’s better. And not just better, but far better (because of inertia). I think that may be the hardest part.

Isn’t that the beauty of dual-licensing? That you can do everything completely in the open and let companies try everything out without needing to ask any licensing questions or make an upfront payment?

                                I don’t think that your product has to be far better than all the others, it just has to have a business advantage for your customers.

                                For example, although I think HexaPDF is great (naturally :) I know that it is by far not feature complete and there are commercial libraries in other languages that are much better in various regards. Yet, I have one customer who came across HexaPDF, tested it for their use case and found it superior to all other tools they tried but still not optimal for what they wanted. So I worked with them for months before they bought a license, and in the process made HexaPDF better for everyone.

                                1. 1

                                  Isn’t that the beauty of dual-licensing? That you can do everything completely in the open and let companies try everything out without them needing to ask any licensing questions or do upfront payment?

                                  Depends on the license. AGPL is famous for being entirely forbidden in some larger companies (e.g. Google), so for people working there, the product doesn’t exist until the other license is acquired.

                                  This doesn’t mean “don’t use the AGPL”, but the dynamic you’re envisioning might not work out for any number of reasons.

                                  1. 1

                                    Ah, I heard about that some time ago. So that means nobody there uses any application/library that is AGPL-licensed, even if it comes with the OS by default?

                                    1. 2

                                      The operating systems to use at Google are well curated - Linux would be https://en.wikipedia.org/wiki/GLinux. As the policy is “no AGPL”, my guess (I work at Google but didn’t check the licenses) is that the GLinux maintainers simply don’t (re-)package such software.

                            1. 6

                              Lots of companies pay for software. There’s a whole giant industry selling commercial software… which leads to the question of why not make it proprietary.

                              1. What sort of product is it?
                              2. Who would benefit from availability of source?
                              3. Who would benefit from it being open source (you can give people source code under a proprietary license, so this is a different question than the previous one)?

                              (I’m working on a commercial product, with an open source variant with a slightly different use case as marketing, and … a bunch of people use the open source tool, and I’ve only ever gotten a single patch. It’s not clear what being open source does for anyone in this particular example.)

                              1. 1

                                You have a good point, so let me answer your questions:

                                1. It is a tool meant for developers: a build system.
                                2. Everyone; it is actually crucial to the software supply chain that the source is available. If the build system is not Open Source (i.e., you can’t compile it yourself), you don’t know if it has been backdoored with a Trusting Trust attack, just like a compiler.
                                3. End users. If it’s only source-available, then companies that distribute software that builds with it could conceivably make it really hard to build their software, even if that software is FOSS or source-available.

                                But beyond the fact that it is actually crucial to be FOSS for security, there is another big reason: developers will not adopt a non-FOSS tool. If it is FOSS, it has a chance, and if it is not, then it has none.

                                1. 4

                                  There are many build tools out there that are very successful and not open source. TeamCity is a good example.

                                  1. 3

                                    But beyond the fact that it is actually crucial to be FOSS for security, there is another big reason: developers will not adopt a non-FOSS tool. If it is FOSS, it has a chance, and if it is not, then it has none.

                                    Open source isn’t a requirement for commercially successful build tools; Incredibuild is a proprietary build system used by Adobe, Amazon, Boeing, Epic Megagames, Intel, Microsoft, and many other companies. Most of the market consists of pragmatists; they’ll adopt a new product if it addresses a major pain point.

                                    Is there a distributed build tool for Rust yet? That may be a market worth pursuing.

                                    1. 1

                                      I did not expect anyone to say that closed-source build systems were used, but you and a sibling named two.

                                      As far as making a distributed build tool for Rust, yeah, I can do that. Thank you.

                                    2. 1

                                      It is a tool meant for developers: a build system.

                                      I am curious how are you planning to legally structure dual-licensing of a build system. I believe most (all?) examples of dual-licensing where one license is free/open source involve a copyleft license (commonly GPL). In order to trigger copyleft’ness the user must produce a derivative work of your software (e.g., link to your library). I don’t see how using a build system to build a project results in derivative work. I suppose there are probably some dual-licensed projects based on AGPL but that doesn’t seem to fit the build system either.

                                      I also broadly agree with what others have said about your primary concern (that the companies will steal rather than pay): companies (at least in the western economies) are happy to pay provided prices are reasonable and metrics are sensible (e.g., many would be reluctant to jump through licensing server installation, etc). But companies, especially large ones, are also often conservative/dysfunctional so expect quite a bit of admin overhead (see @kornel comment). For the level of revenue you are looking at (say, ~$300K/year), I would say you will need to hire an admin person unless you are prepared to spend a substantial chunk of your own time doing that.

                                      This is based on my experience running a software company (codesynthesis.com ) with a bunch of dual-licensed products. Ironically, quite a bit of its revenue is currently used to fund the development of a build system (build2; permissively-licensed under MIT). If you are looking to build a general-purpose build system, plan for a many-year effort (again, talking from experience). Good luck!

                                      1. 1

                                        I am curious how are you planning to legally structure dual-licensing of a build system.

                                        It will also be a library.

                                        There are plenty of places in programming where it is necessary to be able to generate tasks, order those tasks to make sure all dependencies are fulfilled, and run those tasks (hopefully as fast as possible).

                                        One such example is an init/supervision system. There are services that need to be started after certain others.

                                        (Sidenote: I’m also working on an init/supervision system, so technically, companies don’t need to make their own with my library. It’s just an example.)
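
                                        The ordering problem described above is a classic topological sort. Here is a minimal sketch in Python (the service names are hypothetical, purely for illustration, and this is not code from any real init system or from Rig):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical service graph: each service maps to the services
# that must be started before it (its dependencies).
deps = {
    "network": [],
    "syslog": ["network"],
    "sshd": ["network", "syslog"],
}

# static_order() yields every service only after all of its
# dependencies have been yielded.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['network', 'syslog', 'sshd']
```

                                        A real build system or supervisor would additionally run independent tasks concurrently (TopologicalSorter supports that too, via get_ready()/done()), but the dependency-ordering core is the same idea.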

                                        I suppose there are probably some dual-licensed projects based on AGPL but that doesn’t seem to fit the build system either.

                                        This build system will be distributed, like Bazel, so yes, that does apply.

                                        I also broadly agree with what others have said about your primary concern (that the companies will steal rather than pay): companies (at least in the western economies) are happy to pay provided prices are reasonable and metrics are sensible (e.g., many would be reluctant to jump through licensing server installation, etc).

                                        What are reasonable prices, though?

                                        But companies, especially large ones, are also often conservative/dysfunctional so expect quite a bit of admin overhead (see @kornel comment). For the level of revenue you are looking at (say, ~$300K/year), I would say you will need to hire an admin person unless you are prepared to spend a substantial chunk of your own time doing that.

                                        I am going to do it, yes, but I’m also going to be helped by my wife.

                                        This is based on my experience running a software company (codesynthesis.com ) with a bunch of dual-licensed products. Ironically, quite a bit of its revenue is currently used to fund the development of a build system (build2; permissively-licensed under MIT). If you are looking to build a general-purpose build system, plan for a many-year effort (again, talking from experience). Good luck!

                                        Oh, I’m cutting features out of my build system, so I don’t expect it to take that long. Also, I’m not running a business like you are.

                                        Thank you.

                                        1. 2

                                          What are reasonable prices, though?

                                          The video Designing the Ideal Bootstrapped Business has some excellent advice on pricing; the author has sold at least 3 startups.

                                  1. 13

                                    Companies do pay for software. I’m dual-licensing pngquant. It doesn’t quite pay a developer salary, but it’s worth my time.

                                    I can’t truly know how many companies use it against the license, but OTOH there are many that come to me and happily get a license.

                                    In my experience companies want to pay. They value having a written contract that guarantees they will have support and that the license is truly valid (there’s a real risk that a random free library on the net is incorrectly licensed and the code belongs to someone else, e.g., due to the author’s employment contract).

                                    I’m not worried at all about companies copying the code. If the task is their core competency, they will write their own thing no matter what. But when it’s not, it doesn’t make sense for them to steal the code – simply because development and maintenance are costly, and they take up resources the company needs for its core business. When they buy code, it’s because they want to move faster and stay focused, and not reinvent the wheel.

                                    Be warned that the real work that goes into running such software business is not software development. It’s mainly sales and marketing. Answering enquiries, contract negotiation, soul-sucking SAP vendor registration forms, chasing invoices, dealing with US banking being on a different planet than everyone else, and so on.

                                    1. 2

                                      Thank you for your answer!

                                      In my experience companies want to pay. They value having a written contract that guarantees they will have support and that the license is truly valid (there’s a real risk that a random free library on the net is incorrectly licensed and the code belongs to someone else, eg due to author’s employment contract.)

                                      Wow, this completely goes against what it appears like from the outside. I hope you are right.

                                      I’m not worried at all about companies copying the code. If the task is their core competency, they will write their own thing no matter what. But when it’s not, it doesn’t make sense for them to steal the code – simply because development and maintenance are costly, and they take up resources the company needs for its core business. When they buy code, it’s because they want to move faster and stay focused, and not reinvent the wheel.

                                      I hadn’t thought about that.

                                      1. 2

                                        Wow, this completely goes against what it appears like from the outside. I hope you are right.

                                        Companies are all different but there are a few useful rules of thumb:

                                        • If a company has done something multiple times, they will have a process for it. Anything that has a process is easy.
                                        • In general, companies want most of their employees working on what they regard as their core competency (which may not be what you think it is). They will spend money to outsource things if it helps their employees focus on the things that they’re paid to do.
                                        • Companies tend to be better at optimising for reduced risk than for maximum gain.

                                        The first of these may help you but it probably won’t. Buying software is almost certainly something that any company that you talk to has a process for. Often, this will require buying it from an approved supplier. This means that you will need to either have a reseller’s agreement with one of their approved suppliers (meaning that the reseller takes a big cut) or you will have to go through their process for approvals (this is usually not worth it unless you’re expecting a lot of revenue from the customer and can take a long time).

                                        The second point probably helps you. If you are selling a tool that makes their employees more efficient at doing whatever it is that they’re actually paid to do, then it is probably better for them to buy it from you than it is for them to develop it in house.

                                        The last point is the one that will cause you the most problems. If you are a small company (especially if you are a sole trader) then you are seen as very high risk. If your software is amazing and you get hit by a bus, what happens? Can someone else maintain it with the license that they have? How much will that cost? If they discover a critical bug, what is the chance that you’ll be able to fix it without disrupting their schedules? Is your support contract giving them measurably lower risk than just using the free version?

                                        Beyond that, are they shipping your code? If so, there are all sorts of compliance things that are easy for permissively licensed software, harder for proprietary or copyleft software. It may be cheaper (factoring in the risk) to write something and release it under a permissive license than to use your version.

                                        1. 2

                                          Wow, this completely goes against what it appears like from the outside. I hope you are right.

                                          Why Are Enterprises So Slow? explains a lot of the motivations behind typical enterprise policies. It’s common to have support contracts for everything to mitigate risk.

                                          I hadn’t thought about that.

                                          I have seen this effect firsthand at work. Several of our internal applications have been discontinued because third-party vendors released comparable products.

                                      1. 6

                                        Companies pay for software all the time. You may be reading too many headlines from one corner of the industry.

                                        Dual licensing (AKA selling exceptions) has worked and does work for many firms, large and small, both on its own and in combination with other models, like selling proprietary extensions or complementary software. I keep a very incomplete list of examples at duallicensing.com. There have been many more successful dual-licensing sales than lawsuits by dual-licensing companies against deadbeat users.

                                        Merely sprinkling a business model on top of a project with a few website changes and social media posts almost never yields meaningful money. Not with dual licensing, not with open core, not with services or any other model. You need a model and you need to push. Going into business is adding a whole ’nother project to your life.

                                        Driving paid-license sales will take time and energy. That is time and energy you will not also be able to spend on your software. On the upside, paid-license sales can take substantially less time and energy than developing complementary products, hosting, developing closed, one-off software on contract, or providing high-touch professional services like training. Your project is your project and there won’t be any business need to segment it into free and paid chunks, since what you’re selling is fundamentally permissions, not bits.

                                        1. 1

                                          Dual licensing (AKA selling exceptions) has worked and does work for many firms, large and small, both on its own and in combination with other models, like selling proprietary extensions or complementary software. I keep a very incomplete list of examples at duallicensing.com.

                                          I don’t know how many of your examples are actually individuals, but I do know of one: VideoLAN. And the link I had in my original post was the VideoLAN guy talking about how it really hasn’t worked out very well. So while you have examples (and thank you for them; I’m going through them now), I’m a little nervous about how much money those examples actually make.

                                          There have been many more successful dual-licensing sales than lawsuits by dual-licensing companies against deadbeat users.

                                          I’ll have to take your lawyer’s word for that, but I do wonder if that’s just because the threat is enough from bigger entities. If I, as an individual, am not enough of a threat, would they care enough to pay as required? I don’t really know.

                                          Merely sprinkling a business model on top of a project with a few website changes and social media posts almost never yields meaningful money. Not with dual licensing, not with open core, not with services or any other model. You need a model and you need to push. Going into business is adding a whole ’nother project to your life.

                                          Agreed. My current business model plan is two-fold: licensing and on-call support. I just don’t think people will pay for that if they can get away with not paying.

                                          Driving paid-license sales will take time and energy. That is time and energy you will not also be able to spend on your software. On the upside, paid-license sales can take substantially less time and energy than developing complementary products, hosting, developing closed, one-off software on contract, or providing high-touch professional services like training.

                                          I do understand that driving sales takes time and energy. Unfortunately, that’s just what I’m going to have to do to make money. I’d rather spend half my time on that than all of my time on someone else’s software.

                                          Your project is your project and there won’t be any business need to segment it into free and paid chunks, since what you’re selling is fundamentally permissions, not bits.

                                          Are you saying I should keep it closed source? I’m not entirely sure what you are saying here.

                                          1. 2

                                            If you think your potential customers are a bunch of big companies and you’re afraid of big companies, I’d suggest you reach out to some founders at companies that successfully license big companies. Or find another line of business.

                                            If you’re looking for validation of the idea that dual licensing doesn’t work because large companies are all big meanies who don’t play fair, I can’t corroborate. I’m sure it happens. And probably more often where the developer obviously lacks spine and cowers. But the dual licensing failure cases I see have a lot more to do with more basic business faults.

                                            1. 1

                                              If you think your potential customers are a bunch of big companies and you’re afraid of big companies, I’d suggest you reach out to some founders at companies that successfully license big companies. Or find another line of business.

                                              That is a fair criticism. I’ll take the L, and I’ll see about doing as you said.

                                              If you’re looking for validation of the idea that dual licensing doesn’t work because large companies are all big meanies who don’t play fair, I can’t corroborate. I’m sure it happens.

                                              My wife, the one with business sense, thinks it won’t work because of this, so it’s not just me. In fact, I was pretty idealistic about it until a month ago. She tried to get me to see sense for years, and I’ve only recently come around.

                                              And probably more often where the developer obviously lacks spine and cowers.

                                              If I had the resources to go after companies in the case that they violated my license, I would happily “grow a spine” and continue with my work. But I don’t have the resources because a lawyer like you doesn’t come cheap.

                                              But the dual licensing failure cases I see have a lot more to do with more basic business faults.

                                              I believe it. I’ve been taking potential business ideas to my wife for years, and having the business sense that she does, she has shot them all down. So I could see it being hard to find the right one.

                                              In other words, I guess I have not found the right one. Good to know.

                                              1. 2

                                                Wasn’t trying to talk you down. But you came in with a question based on a presupposition that contradicts my experience. For what it’s worth, I’m a deals lawyer, not a lawsuits lawyer.

                                                I have seen founders and salespeople have to push on large company users who weren’t initially willing to deal. When the vendor is small, that is definitely an asymmetric conflict. If you find yourself on the smaller side of an asymmetric conflict, you can’t think just in terms of all the big-side resources you don’t have, like how many dollars or bodies or lawyers they have that you don’t. You have to work other leverage. Go talk to founders that have won some of those battles.

                                                For what it’s worth, the VideoLAN comment you cited seemed to have a lot more to say about lack of interest in technical support contracts than dual licensing. That fits with my perception of their software’s primary use case and license choice, which don’t put a lot of users in positions where they need other license terms.

                                                It’s hard to sell tech support for reliable, well documented software. It’s relatively easy to sell technical support to large companies with urgent problems.

                                        1. 2

                                          I use a Keyboardio Model 01.

                                          Because I use Dvorak, I have a layer for Qwerty, in case my wife needs to use the keyboard.

                                          Then, of course, I have a layer for the extra keys, including imitating common Qwerty keyboard shortcuts like Ctrl+c, Ctrl+v, etc.

                                          But my real innovation is a layer that basically pastes the reverse of the right side of the keyboard onto the left side; it is activated by a chord on the left side.

                                          What this means is that when I am using shortcut-heavy programs that also require the mouse, like Blender, I can control the mouse with my right hand while handling nearly all keyboard work with my left hand. It saves my right hand a lot of trips between the mouse and the keyboard.
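
                                          The mirror layer is simple to describe in code: each key position on the left half maps to the key at the mirrored position on the right half. A rough sketch in Python (the key names are illustrative Qwerty home-row keys, not my actual Keyboardio layout):

```python
# Hypothetical home row, split into left and right halves (Qwerty).
left = ["a", "s", "d", "f", "g"]
right = ["h", "j", "k", "l", ";"]

# Mirroring: the leftmost left-hand key takes the rightmost
# right-hand key, and so on inward.
mirror_layer = dict(zip(left, reversed(right)))
print(mirror_layer)  # {'a': ';', 's': 'l', 'd': 'k', 'f': 'j', 'g': 'h'}
```

                                          While the chord is held, the firmware looks up each left-hand keypress in a table like this instead of the base layer.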

                                          1. 4

                                            Since companies steal Open Source software without a care in the world, what’s to stop companies from stealing Rig and embedding it into their proprietary software?

                                            Nothing. Nothing has ever been able to prevent this. Even if you used a nonfree license they could do this if they thought the benefit to them outweighed the odds that you can afford to sue.

                                            But what’s the alternative? Not make software at all? I’d rather help people than not, and also let’s be honest I make software for me and because I can’t not.

                                            1. 2

                                              Author here.

                                              I understand your position. It was my position up until a week ago.

                                              But I fear that if companies steal my code, they may end up doing far more harm than my code does good otherwise. I could be overthinking it, but it’s also a build system which could be used to backdoor just about anything.

                                              1. 3

                                                But as I said, what is the alternative? Maybe you personally have the option to just not work on software. I don’t really, I’m not sure who I would even be. But moreover, if everyone who cares stops trying then all software would be evil all the time and that seems… Worse?

                                                1. 0

                                                  You have really good points.

                                                  I guess the question is: can companies be as creative as Open Source authors?

                                                  If so, then yes, it would be worse not to have Open Source anyway because we would have the same software, just evil no matter what.

                                                  If not, then Open Source may actually provide creativity for the evil those companies do because without Open Source, they may not be able to accomplish as much.

                                                  I don’t know the answer to the above question. I guess my discouragement is based on the feeling that Open Source is fueling the evil creativity of companies. If someone could show me that that feeling is wrong, I’d be so happy.

                                                  1. 1

                                                    “Open Source software I do write could end up harming more users than it helps.”

                                                    I think structure is important. Structure, more so than license, governs how software will be used. I never really believed in copyleft licenses providing any sort of ethical insurance, or even guaranteeing that modifications get contributed back. I agree that copyleft is mostly just wishful thinking when the rubber hits the road.

                                                    BUT! Software is much more than its license. Can some greedy or power-hungry group “steal” BitTorrent or IPFS and make tons of money or collect tons of data from it? I’m sure there are examples, but surely they are minor/marginal compared to the prevalent “evil” uses of “big data” software like Apache Cassandra or Hadoop.

                                                    I believe working on software IS worth it, and in fact, it’s really important: the way the world is structured right now sucks for a lot of people!! A lot of that comes from the structure of the software. I can imagine a lot of software that works for the common man, but wouldn’t benefit the rich and greedy. Such software would be structured differently, it would have different goals / constraints, and most likely it wouldn’t make much money.

                                                    Critically, such software does not exist right now. Or at least it is incomplete or in disrepair. It’s up to us to build it and fix it. Open Source just happens to be the most practical way to do that.

                                                    1. 1

                                                      I hope you’re right, but I don’t see how to structure Open Source in such a way to prevent companies from stealing it.

                                                      1. 2

                                                        If you’re hell-bent on this goal, try writing software that is useful to, and extensible by, regular people and not super useful to big tech. For example, most of the tech produced at JMP.chat is not useful to big tech because it only does things they don’t want and that don’t make sense at big scale, but the stack and product are super useful to the individuals who use it.

                                            1. 47

                                              It doesn’t feel like a solid line of thinking.

                                              From his conclusion:

                                               • “I can’t get a job” - Apparently because all software companies are evil. Except not all of them are. And there are plenty of nonprofits out there that are really trying to help people. He’ll just have to take a small paycut.

                                              • “I can’t make money from writing Open Source software” - Sure, it’s harder, but not impossible. Also, that’s not really the point of OSS.

                                              • “Open Source software I do write could end up harming more users than it helps.” - Yes, software can be used for bad purposes. So can hammers. Is it unethical to create hammers?

                                              There are real issues with open-source, that can be seen in some of the examples he gives, but he doesn’t address them in any meaningful way. Sorry if I’m being insensitive, but it sounds more like whining than philosophy to me.

                                              1. 38

                                                Nothing on the site feels like a solid line of thinking to me. The author believes and argues, at length, for things that are demonstrably false. For example, they think that vaccines are more dangerous than COVID-19.

                                                But I think you’re over-reading the piece. There’s an implied “For Me” in the question the title asks. Clearly this individual can’t get a job working on FOSS, they can’t otherwise make money from it, and they fear that it harms users in some way more than it helps them. If they’ve done that math, it obviously isn’t worth working on it for them.

                                                I don’t think it’s generally applicable either, and I think your analysis of it is correct in that context. But I’m not sure that argument was being made.

                                                1. 10

                                                  “Releasing open source software under capitalism just leads to exploitation. But releasing closed source software under capitalism is even worse! I guess there just isn’t any good solution here.”

                                                  Maybe the problem isn’t the software, my dude.

                                                  1. 4

                                                    Author here.

                                                    It doesn’t feel like a solid line of thinking.

                                                    I fully admit it’s not; this is a post where I put feelings in more than anything else. I feel discouraged because I always believed that I could make a good difference in the world with FOSS. That belief has been recently challenged, and I’m feeling a little lost as a result.

                                                    From his conclusion:

                                                    Perhaps I should not have mentioned any of that, but the real point of mentioning them was to say that I had to leave the software industry earlier, but not software. Now, it feels like I have to leave software itself.

                                                    Yes, software can be used for bad purposes. So can hammers. Is it unethical to create hammers?

                                                    No, it’s not, and I guess I did not articulate that well. What I was trying to say is that, even though I try to write my code as ethically as possible, it could still end up doing more harm than good to users. I think software is different from hammers in that the harm software can do scales far faster than the harm that can be done by things like hammers.

                                                    For example, a hammer can be wielded for harm, but it can only harm as many people as the bad guy can personally reach. But software can be taken by a global company and harm billions in the process.

                                                    And that’s not even to say that creating software is unethical; I don’t think it would be unethical for me to make Rig and to make it Open Source because I would try to do everything I can to make it helpful while preventing harm.

                                                    It just feels like the harm scales so much that no matter the ethics, it could end up accomplishing the opposite of what I want.

                                                    There are real issues with open source that can be seen in some of the examples he gives, but he doesn’t address them in any meaningful way.

                                                    I admit in the post that I don’t have answers. Instead, the real point of the post was the last line, inviting people to contact me with their thoughts.

                                                    1. 1

                                                      Merry Christmas, Gavin. This problem is solved. It will be rolled out after the New Year. Up to individual maintainers to adopt it. I can let you know when it drops in case you miss it.

                                                      1. 1

                                                        I presume you mean the problem of scalable harm? Your post is really cryptic.

                                                        Regardless, I’m intrigued, so yes, please let me know when it drops.

                                                  1. 7

                                                    I feel kinda bad about how far my angry rant about the state of the industry has gone and how many people it has touched. I’m sorry if my angry nihilistic feelings influenced this line of thinking at all. I don’t know what to do about it.

                                                    1. 7

                                                      Oh, no, you didn’t make me feel this way.

                                                      Behind the scenes, what really happened is that your article (among others) woke me up to what companies actually do. I asked my wife, and it turns out that she had been trying to get me to understand this for years; my idealism had just been blinding me to it. It was talking to my wife that actually brought the discouragement, though you could say it was really ignorance that caused it by blinding me to reality until reality slapped me.

                                                      1. 3

                                                        And yet, you have not really harmed people. Without giving a parade of horribles, I think that you already know about the crimes and harms being perpetuated by our industry, and writing an angry rant does not stack up to what any such paradegoer has done.

                                                        It is typical and understandable that reading or writing about history, including the history of our field, is uncomfortable and produces negative feelings. But you must remember that the endless positivity of our society serves to shame historians for their honesty without fixing the problems of the past.

                                                      1. 4

                                                        Yes. It’s not worth working on open source without compensation, but Free Software is still worthwhile.

                                                        I notice that you did not mention AGPL or other licenses which are known to repel corporations; why not?

                                                        1. 5

                                                          Author here.

                                                          I notice that you did not mention AGPL or other licenses which are known to repel corporations; why not?

                                                          GitHub Copilot. If it weren’t for that, I’d license my current code under AGPL and relicense later after getting my licenses reviewed by a lawyer.

                                                          Good question; sorry for not making it clear.

                                                          1. 8

                                                            Minor tip: your reply name is in blue if you authored the piece, so you don’t need to worry about telling people you’re the author.

                                                            1. 4

                                                              That’s a nice feature.

                                                              Thank you for telling me, and sorry about that.

                                                              1. 1

                                                                No need to apologize! I’m letting you know so you don’t have to worry about letting people know ☺️

                                                        1. 3

                                                          My experience with Nix in the past has been slightly less advanced/dynamic (mainly NixOS and simple packages) but the performance point was a major factor to me. I understand that Flakes are meant to address some of this, but as it stands, Nix evaluations can get really slow. I’d personally love to see something closer to the speed of an apk add.

                                                          I’d be curious if there is a “simpler” version of Nix that could exist which gets speed ups from different constraints. For example, I’ve found please to be faster than most bazel projects, partly due to being written in go and having less of a startup cost, but also because the build setup seems to be simpler.

                                                          I think that the root of the problem might be that Nix is a package build system, not a development build system, and so is built with different assumptions. I wonder if there’s a way to build a good tool that does both package builds (tracks package dependencies, builds binary artifacts, has install hooks) and a build tool (tracks file dependencies, has non-build rules such as linting, and caches artifacts for dev, not installation). I’m just spitballing but it seems to me like trying to reconcile these two different systems might force a useful set of constraints that results in a fast & simple build system? (though it could just as easily go the other way and become unwieldy and complex).

                                                          1. 10

                                                            Nix is a package build system, not a development build system

                                                            Ah, but this is exactly the point :-)

                                                            There is nothing fundamental about Nix that prevents it from covering both, other than significant performance costs of representing the large build graphs of software itself (rather than a simplified system build graph of wrapped secondary build systems). At TVL we have buildGo and buildLisp as “Bazel-like” build systems written in Nix, and we do use them for our own tools, but evaluation performance suffers significantly and stops us from adding more development-focused features that we would like to see.

                                                            In fact this was a big driver behind the original motivation that led to us making a Nix fork, and then eventually starting Tvix.

                                                            1. 6

                                                              I wonder if there’s a way to build a good tool that does both package builds (tracks package dependencies, builds binary artifacts, has install hooks) and a build tool (tracks file dependencies, has non-build rules such as linting, and caches artifacts for dev, not installation).

                                                              I believe there is! It mostly comes from a paper called A Sound and Optimal Incremental Build System with Dynamic Dependencies, which is not my work (although I’m currently working on an implementation of the ideas).

                                                              There are three key things needed:

                                                              1. Dynamic dependencies.
                                                              2. Flexible “file stamps.”
                                                              3. Targets can execute arbitrary code, instead of just commands.

                                                              The first item is needed because dependencies can change based on the configuration used to build a package. Say you have a package that needs libcurl, but only if users enable network features.

                                                              It is also needed to import targets from another build. I’ll use the libcurl example above: if your package’s build target has libcurl as a dependency, then it should be able to import libcurl’s build files and continue the build, making the dependencies of libcurl’s build target into dependencies of your package’s build targets.

                                                              In other words, dynamic dependencies allow a build to properly import the builds of its dependencies.
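As a toy model of what that “suspending” style could look like (my own sketch, not code from the paper; all names are invented), a target can be a generator that yields the dependencies it discovers mid-build and receives their results back:

```python
# Toy model of dynamic dependencies via a "suspending" build:
# a target is a generator that yields the names of dependencies it
# discovers mid-build and receives their build results back.
# All names here are invented for illustration.

def build(targets, name, store):
    """Build `name`, suspending whenever it requests a dependency."""
    gen = targets[name](store)
    try:
        dep = next(gen)
        while True:
            if dep not in store:            # not built yet: recurse,
                build(targets, dep, store)  # then resume this target
            dep = gen.send(store[dep])
    except StopIteration as done:
        store[name] = done.value

def config(store):
    return "network"   # pretend the user enabled network features
    yield              # unreachable; just makes this a generator

def libcurl(store):
    return "curl-built"
    yield

def package(store):
    cfg = yield "config"        # dependency discovered at build time
    if "network" in cfg:
        curl = yield "libcurl"  # libcurl is a dep only when enabled
        return "pkg+" + curl
    return "pkg"

targets = {"config": config, "libcurl": libcurl, "package": package}
```

The point of the sketch is that `package` only asks for `libcurl` after seeing the configuration, which is exactly the “dependencies depend on earlier build results” behavior a static dependency graph cannot express.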

                                                              The second item is the secret sauce and is, I believe, the greatest idea from the paper. The paper calls them “file stamps,” and I call them “stampers.” They are basically arbitrary code that returns a Boolean indicating whether a target needs updating.

                                                              A Make-like target’s stamper would check whether the target’s mtime is older than that of any of its dependencies. A more sophisticated one might check whether any file attributes of a target’s dependencies have changed. Another might hash a file.
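To make the idea concrete, here is a minimal sketch of two such stampers in Python (my own illustration; the names `mtime_stamper` and `hash_stamper` are invented, not from the paper):

```python
import hashlib
import os

def mtime_stamper(target, deps):
    """Make-style: rebuild if the target is missing or older than any dep."""
    if not os.path.exists(target):
        return True
    t = os.path.getmtime(target)
    return any(os.path.getmtime(d) > t for d in deps)

def hash_stamper(target, deps, seen_hashes):
    """Content-based: rebuild if any dependency's hash has changed.
    `seen_hashes` persists hashes between runs (a dict here for brevity)."""
    for d in deps:
        with open(d, "rb") as f:
            h = hashlib.sha256(f.read()).hexdigest()
        if seen_hashes.get(d) != h:
            seen_hashes[d] = h
            return True
    return False
```

Because both are just functions returning a Boolean, the build system core never needs to know which staleness policy a target uses.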

                                                              The third is needed because otherwise, you can’t express some builds, but tying it with dynamic dependencies is also the bridge between building in the large (package managers) and building in the small (“normal” build systems).

                                                              Why does this tie it all together? Well, first consider trying to implement a network-based caching system. In most build systems, it’s a special thing, but in a build system with the above three things, you just need to write a target that:

                                                              1. Uses a custom stamper that checks the hash of a file and, if it has changed, checks the network for a cached build of the new version of the file.
                                                              2. If such a cached version exists, make updating that target mean downloading the cached version; otherwise, make updating the target mean building it as normal.

                                                              Voila! Caching in the build system with no special code.
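That recipe can be sketched in a few lines (my own toy code; a dict stands in for the network cache, and all names are invented):

```python
import hashlib

def build_with_cache(source_text, cache, compile_fn):
    """Toy version of 'caching with no special code in the build system':
    hash the input; if the cache has a build of that exact input,
    'download' it; otherwise build as normal and populate the cache.
    `cache` is a dict standing in for a network cache."""
    key = hashlib.sha256(source_text.encode()).hexdigest()
    if key in cache:
        return cache[key], "fetched"   # updating == downloading
    out = compile_fn(source_text)      # updating == building as normal
    cache[key] = out
    return out, "built"
```

The second build of identical input never runs `compile_fn` at all, which is the same observable behavior as a content-addressed build cache.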

                                                              That, plus being able to import targets from other build files is what ties packages together and what allows the build system to tie package management and software building together.

                                                              I’ll leave it as an exercise to the reader to figure out how such a design could be used to implement a Nix-like package manager.

                                                              (By the way, the paper uses special code and a special algorithm for handling circular dependencies. I think this is a bad idea. I think this problem is neatly solved by being able to run arbitrary code. Just put mutually dependent targets into the same target, which means targets need to allow multiple outputs, and loop until they reach a fixed point.)

                                                              I’m just spitballing but it seems to me like trying to reconcile these two different systems might force a useful set of constraints that results in a fast & simple build system?

                                                              I think that design is simple, but you can judge for yourself. As to whether it’s fast, I think that comes down to implementation.

                                                              1. 2

                                                                If categorised in the terminology of the ‘build systems à la carte’ paper (the expanded JFP version from 2020), where would your proposal fit? Though you haven’t mentioned scheduling or rebuilding strategies (page 27).

                                                                1. 4

                                                                  That is a good question.

                                                                  To be able to do dynamic dependencies, you basically have to have a “Suspending” scheduler strategy, so that’s what mine will have.

                                                                  However, because targets can run arbitrary code, and because stampers can as well, my build system doesn’t actually fit in one category because different stampers could implement different rebuilding strategies. In fact, there could be stampers for all of the rebuilding strategies.

                                                                  So, technically, my build system could fill all four slots under the “Suspending” scheduler strategy in the far right column in table 2 on page 27.

                                                                  In fact, packages will probably be build files that use deep constructive traces, thus making my build system act like Nix for packages, while in-project build files will use any of the other three strategies as appropriate. For example, a massive project run by Google would probably use “Constructive Traces” for caching and farming out to a build farm, medium projects would probably use “Verifying Traces” to ensure the flakiness of mtime didn’t cause unnecessary cleans, and small projects would use “Dirty Bit” because the build would be fast enough that flakiness wouldn’t matter.

                                                                  This will be what makes my build system solve the problem of scaling from the smallest builds to medium builds to the biggest builds. That is, if it actually does solve the scaling problem, which is a BIG “if”. I hope and think it will, but ideas are cheap; execution is everything.

                                                                  Edit: I forgot to add that my build system also will have a feature that allows you to limit its power in a build script along three axes: the power of targets, the power of dependencies, and the power of stampers. Limiting the last one is what causes my build system to fill those four slots under the “Suspending” scheduler strategy, but I forgot about the ability to limit the power of dependencies. Basically, you can turn off dynamic dependencies, which would effectively make my build system use the “Topological” scheduler strategy. Combine that with the ability to fill all four rebuilder strategy slots, and my build system will be able to fill 8 out of the 12.

                                                                  Filling the other four is not necessary because anything you can do with a “Restarting” scheduler you can do with a “Suspending” scheduler. And restarting can be more complicated to implement.

                                                            1. 4

                                                              This is an interesting thread on making Makefiles which are POSIX-compatible. The interesting thing is that it’s very hard or impossible, at least if you want to keep some standard features like out-of-tree builds. I’ve never restricted myself to write portable Makefiles (I use GNU extensions freely), but I previously assumed it wasn’t that bad.

                                                              That this is so hard is maybe a good example of why portability to different dependencies is a bad goal when your dependencies are already open source and portable. As many posters in the thread say, you can just use gmake on FreeBSD. The same goes for many other open source dependencies: If the software is open source, portability to alternatives to that software is not really important.
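To give a flavor of the problem the thread describes, an out-of-tree build is a one-liner with GNU pattern rules but has no POSIX equivalent (a minimal sketch; recipes must be indented with tabs):

```make
# GNU make only: a pattern rule that puts objects under build/.
# POSIX make has no pattern rules, and its suffix rules (.c.o)
# cannot place the .o in a different directory than the .c.
SRCS := $(wildcard src/*.c)
OBJS := $(SRCS:src/%.c=build/%.o)

all: $(OBJS)

build/%.o: src/%.c
	@mkdir -p $(@D)
	$(CC) $(CFLAGS) -c -o $@ $<
```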

                                                              1. 4

                                                                you can just use gmake on FreeBSD.

                                                                I can, but I don’t want to.

                                                                If you want to require any specific tool or dependency, fine, that’s your prerogative; just don’t force your idea of the tool’s cost on me. Own your decision. If it impacts me, be frank about it; just don’t bullshit me that it doesn’t impact me just because the cost for you is less than the cost for me.

                                                                The question of why don’t you use X instead of Y is nobody’s business but mine. I fully understand and expect that you might not care about Y, please respect my right not to care about X.

                                                                1. 11

                                                                  That’s very standard rhetoric about portability, but the linked thread shows it’s not so simple in this case: It’s essentially impossible to write good, portable Makefiles.

                                                                  1. 5

                                                                    Especially considering how low the cost of using GNU Make is compared to, e.g., switching OS/architecture.

                                                                    1. 2

                                                                      It’s just as easy to run BSD make on Linux as it is to run GNU make on the BSDs, yet if I ship my software to Linux users with a BSD makefile and tell them to install BSD make, there will hardly be a person who doesn’t scoff at the idea.

                                                                      Yet Linux users expect BSD users not to complain when they do the exact same thing.

                                                                      Why is this so hard to understand? The objection is not that you have to run some software dependency; the objection is to people telling you that you shouldn’t care about the nature of the dependency because their cost for that dependency is different from yours.

                                                                      I don’t think that your software is bad because it uses GNU make, and I don’t think that using GNU make makes you a bad person, but if you try to convince me that “using GNU make is not a big deal”, then I don’t want to ever work with you.

                                                                      1. 2

                                                                        Are BSD makefiles incompatible with GNU make? I actually don’t know.

                                                                        1. 2

                                                                          The features, syntax, and semantics of GNU and BSD make are almost entirely disjoint. Their intersection is POSIX make, which has almost no features.

                                                                          …but that’s not the point at all.

                                                                          1. 2

                                                                            If they use BSD specific extensions then yes

                                                                      2. 2

                                                                        Posix should really standardize some of GNU make’s features (e.g. pattern rules) and/or the BSDs should just adopt them.
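For reference, here is the contrast (an illustrative sketch; recipes must be indented with tabs). The GNU pattern rule generalizes over stems, while the closest POSIX form is a suffix rule, which only handles same-directory, single-suffix transformations:

```make
# GNU pattern rule (the feature the comment would like standardized):
%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Closest POSIX equivalent: a suffix rule. The suffixes must be
# declared first, and the rule cannot match paths or prefixes.
.SUFFIXES: .c .o
.c.o:
	$(CC) $(CFLAGS) -c -o $@ $<
```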

                                                                        1. 5

                                                                          I get the vibe at this point that BSD intentionally refuses to make improvements to their software specifically because those improvements came from GNU, and they really hate GNU.

                                                                            Maybe there’s another reason, but why else would you put up with a program that is missing such a critically important feature and force your users to go through the absurd workarounds described in the article, when it would be so much easier and better for everyone to just make your make better?

                                                                          1. 4

                                                                            I get the vibe at this point that BSD intentionally refuses to make improvements to their software specifically because those improvements came from GNU, and they really hate GNU.

                                                                            Really? I’ve observed the opposite. For example, glibc refused to adopt the strl* functions from OpenBSD’s libc, in spite of the fact that they were useful and widely implemented, and the refusal to merge them explicitly called them ‘inefficient BSD crap’ in spite of the fact that they were no less efficient than existing strn* functions. Glibc implemented the POSIX _l-suffixed versions but not the full set from Darwin libc.

                                                                            In contrast, you’ll find a lot of ‘added for GNU compatibility’ functions in FreeBSD libc, and the *BSD utilities have ‘for GNU compatibility’ in a lot of places. Picking a utility at random, FreeBSD’s du has two flags that are listed in the man page as first appearing in the GNU version, whereas GNU du does not list any as coming from the BSDs (though -d, at least, was originally in FreeBSD’s du; the lack of it in GNU and OpenBSD du used to annoy me a lot, since most of my du invocations used -d0 or -d1).

                                                                            1. 2

                                                                              The two are in no way mutually exclusive.

                                                                            2. 1

                                                                              Maybe there’s another reason, but why else would you put up with a program that is missing such a critically important feature and force your users to go thru the absurd workarounds described in the article when it would be so much easier and better for everyone to just make your make better?

                                                                              Every active software project has an infinite set of possible features or bug fixes; some of them will remain unimplemented for decades. glibc’s daemon function, for example, has been broken under Linux since it was implemented. The BSD Make maintainers just have a different view of the importance of this feature. There’s no reason to attribute negative intent.

                                                                              1. 1

                                                                                The BSD Make maintainers just have a different view of the importance of this feature

                                                                                I mean, I used to think that too but after reading the article and learning the details I have a really hard time continuing to believe that. we’re talking about pretty basic everyday functionality here.

                                                                              2. 1

                                                                                Every BSD is different, but most BSDs are minimalist-leaning. They don’t want to add features not because GNU has them, but because they only want to add things they’ve really decided they need. It’s an anti-bloat philosophy.

                                                                                GNU on the other hand is basically founded in the mantra “if it’s useful then add it”

                                                                                1. 6

                                                                                  I really don’t understand the appeal of the kind of philosophy that results in the kind of nonsense the linked article recommends. Why do people put up with it? What good is “anti-bloat philosophy” if it treats “putting build files in directories” as some kind of super advanced edge case?

                                                                                  Of course when dealing with people who claim to be “minimalist” it’s always completely arbitrary where they draw the line, but this is a fairly clear-cut instance of people having lost sight of the fact that the point of software is to be useful.

                                                                                  1. 4

                                                                                    The article under discussion isn’t the result of a minimalist philosophy; it’s the result of a lack of standardisation. BSD make grew a lot of features that were not part of POSIX. GNU make also grew a similar set of features, at around the same time, with different syntax. FreeBSD and NetBSD, for example, both use bmake, which is sufficiently powerful to build the entire FreeBSD base system.

                                                                                    The Open Group never made an effort to standardise any of them and so you have two completely different syntaxes. The unfortunate thing is that both GNU Make and bmake accept all of their extensions in a file called Makefile, in addition to looking for files called GNUmakefile / BSDmakefile in preference to Makefile, which leads people to believe that they’re writing a portable Makefile and complain when another Make implementation doesn’t accept it.

                                                                          2. 7

                                                                            But as a programmer, I have to use some build system. If I chose Meson, that’d be no problem; you’d just have to install Meson to build my software. Ditto if I chose cmake. Or mk. Why is GNU make any different here? If you’re gonna wanna compile my software, you better be prepared to get my dependencies onto your machine, and GNU make is probably gonna be one of the easiest build systems for a BSD user to install.

                                                                            As a Linux user, if your build instructions told me to install bsdmake or meson or any other build system, I wouldn’t bat an eye, as long as that build system is easy to install from my distro’s repos.

                                                                            1. 3

                                                                              Good grief, why is this so difficult to get through? If you want to use GNU make, or Meson, or whatever, then do that! I use GNU make too! I also use Plan 9’s mk, which few people have installed, and even fewer would want to install. That’s not the point.

                                                                              The problem here has nothing to do with intrinsic software properties at all, I don’t know why this is impossible for Linux people to understand.

                                                                              If you say “I am using GNU make, and if you don’t like it, tough luck”, that’s perfectly fine.

                                                                              If you say “I am using GNU make, which can’t cause any problem for you because you can just install it” then you are being ignorant of other people’s needs, requirements, or choices, or you are being arrogant for pretending other people’s needs, requirements, or choices are invalid, and of course in both cases you are being patronizing towards users you do not understand.

                                                                              This has nothing to do with GNU vs. BSD make. It has nothing to do with software, even. It’s a social problem.

                                                                              if your build instructions told me to install bsdmake or meson or any other build system, I wouldn’t bat an eye, as long as that build system is easy to install from my distro’s repos.

                                                                              And this is why Linux users do not understand the actual problem. They can’t fathom that there are people for whom the above way of doing things is unacceptable. It’s perfectly fine not to cater to such people; what’s not fine is to insist that their reasoning is invalid. There are people to whom extrinsic properties of software are far more important than their intrinsic properties. It’s ironic that Linux people have trouble understanding this, given that this is the raison d’être for the GNU project itself.

                                                                              1. 5

                                                                                I think the question is “why is assuming gmake is no big deal any different than assuming meson is no big deal?” And I think your answer is “those aren’t different, and you can’t assume meson is no big deal” but you haven’t come out and said that yet.

                                                                            2. 1

                                                                              I can, but I don’t want to.

Same. Rewriting my Makefiles is so annoying that, so far, I have resigned myself to just calling gmake on FreeBSD. Maybe one day I will finally do it. I never really understood how heavily GNUisms “infected” my style of writing software until I switched to the land of the BSDs.

                                                                            3. 2

What seems to irk BSD users the most is putting GNUisms in a file called Makefile; they see the file and expect to be able to run make, yet that will fail. Naming the file GNUmakefile (which GNU make looks for before Makefile, and other makes ignore) is an oft-accepted compromise.

                                                                              I admit I do not follow that rule myself, but if I ever thought a BSD user would want to use my code, I probably would follow it, or use a Makefile-generator.

                                                                              1. 4

                                                                                I’d have a lot more sympathy for this position if BSD make was actually good, but their refusal to implement pattern rules makes it real hard to take seriously.
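For anyone who hasn’t hit this, here is a generic Makefile fragment (illustrative, not from any particular project) showing the difference: the GNU pattern rule, and the closest portable counterpart, the POSIX suffix rule, which is more limited (no stems, so nothing like `subdir/%.o`):

```make
# GNUism: a pattern rule. GNU make only, so it belongs in GNUmakefile.
%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Portable near-equivalent: a POSIX suffix rule, accepted by GNU make,
# BSD make, and any POSIX-conforming make.
.SUFFIXES: .c .o
.c.o:
	$(CC) $(CFLAGS) -c -o $@ $<
```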

                                                                                1. 2

                                                                                  I’d have a lot more sympathy for this position if BSD make was actually good

                                                                                  bmake is able to build and install the complete FreeBSD source tree, including both kernel and userland. The FreeBSD build is the most complex make-based build that I’ve seen and is well past the level of complexity where I think it makes sense to have hand-written Makefiles.

For the use case in question, it’s worth noting that you don’t need pattern rules: bmake puts objects in obj or under ${MAKEOBJDIRPREFIX} by default.

                                                                              2. 1

                                                                                That this is so hard is maybe a good example of why portability to different dependencies is a bad goal when your dependencies are already open source and portable.

                                                                                I mean, technically you are right, but in my opinion, you are wrong because of the goal of open source.

                                                                                The goal of open source is to have as many people as possible using your software. That is my premise, and if it is wrong, the rest of my post does not apply.

But if that is the goal, then portability to different dependencies is one of the most important goals! The reason is that it shows the user empathy. Making things as easy as possible for users is being empathetic towards them, and while they may not consciously notice that you did it, subconsciously, they do. They don’t give up as easily, and in fact, sometimes they even put in extra effort.

                                                                                I saw this when porting my bc to POSIX make. I wrote a configure script that uses nothing other than POSIX sh. It was hard, mind you, I’m not denying that.
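To make that concrete, here is a minimal, hypothetical sketch of the kind of detection such a script does (not the actual bc configure script), using nothing beyond POSIX sh:

```shell
#!/bin/sh
# Hypothetical sketch of POSIX-sh-only configuration: no bashisms
# (no arrays, no [[ ]], no ${var/.../...}), only the portable subset.

# Look for a usable C compiler among common names, honoring $CC if set.
cc_found=
for c in "${CC:-}" cc clang gcc tcc; do
    [ -n "$c" ] || continue
    if command -v "$c" >/dev/null 2>&1; then
        cc_found=$c
        break
    fi
done

# Fall back to plain "cc" with a warning rather than failing outright.
if [ -z "$cc_found" ]; then
    echo "warning: no C compiler detected, assuming cc" >&2
    cc_found=cc
fi

# Emit the result as a Makefile fragment for the build to include.
printf 'CC = %s\n' "$cc_found" > config.mk
echo "wrote config.mk with CC=$cc_found"
```

The same pattern extends to probing for headers, functions, and flags by compiling or running tiny test programs.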

But the result was that my bc was so portable that people started using it on the BSDs without my knowledge, and one of those users decided to spend effort to demonstrate that my bc could make serious performance gains, and he helped me realize them once I decided to pursue that. He also convinced FreeBSD to make my bc the system default for FreeBSD 13.

                                                                                Having empathy for users, in the form of portability, makes some of them want to give back to you. It’s well worth it, in my opinion. In fact, I just spent two days papering over the differences between filesystems on Windows and on sane platforms so that my next project could be portable enough to run on Windows.

                                                                                (Oh, and my bc was so portable that porting it to Windows was little effort, and I had a user there help me improve it too.)

                                                                                1. 5

                                                                                  The goal of open source is to have as many people as possible using your software.

I have never heard that goal before. In fact, given current market conditions, open source may not be the fastest way to achieve it. Millions in VC to blow on marketing does wonders for user acquisition.

                                                                                  1. 1

                                                                                    That is true, but I’d also prefer to keep my soul.

That’s the difference. One is done by getting users organically, in a way that adds value. The other is a way to extract value. Personally, I don’t see Open Source as having an “extract value” mindset. Some people who write FOSS do, but I don’t think FOSS authors do in general.

                                                                                  2. 4

                                                                                    The goal of open source is to have as many people as possible using your software.

                                                                                    I actually agree with @singpolyma that this isn’t necessarily a goal. When I write software and then open source it, it’s often stuff I really don’t want many people to use: experiments, small tools or toys, etc. I mainly open source it because the cost to me of doing it is negligible, and I’ve gotten enough neat random bits and pieces of fun or interesting stuff out of other people’s weird software that I want to give back to the world.

                                                                                    On the other hand, I’ve worked on two open source projects whose goal was to be “production-quality” solutions to certain problems, and knew they weren’t going to be used much if they weren’t open source. So, you’re not wrong, but I’d turn the statement around: open source is a good tool if you want as many people as possible using your software.

                                                                                1. 5

                                                                                  I find myself reluctantly agreeing with most of the article, which makes me sad. Nevertheless, I would like to be pragmatic about this.

                                                                                  That said, I think that most of the problems with the GPL can be sufficiently mitigated if we just remove the virality. In particular, I don’t think that copyleft is the problem.

The reason is that I believe that without the virality, companies would be willing to use copyleft licenses, since the requirements for compliance would literally be “publish your changes.” That’s a low bar, and especially easy in the world of DVCSs and GitHub.

                                                                                  However, I could be wrong, so if I am, please tell me how.

                                                                                  1. 10

                                                                                    The problem with ‘non-viral’ copyleft licenses (more commonly known as ‘per-file copyleft’ licenses) is that they impede refactoring. They’re fine if the thing is completely self-contained but if you want to change where a layer is in the system then you can’t move functions between files without talking to lawyers. Oh, and if you use them you’re typically flamed by the FSF because I don’t think anyone has managed to write a per-file copyleft license that is GPL-compatible (Mozilla got around this by triple-licensing things).

                                                                                    That said, I think one of the key parts of this article is something that I wrote about 15 or so years ago: From an end-user perspective, MS Office better meets a bunch of the Free Software Manifesto requirements than OpenOffice. If I find a critical bug in either then, as an experienced C++ programmer, I still have approximately the same chance of fixing it in either: zero. MS doesn’t let me fix the MS Office bug[1] but I’ve read some of the OpenOffice code and I still have nightmares about it. For a typical user, who isn’t a C++ programmer, OpenOffice is even more an opaque blob.

                                                                                    The fact that MS Office is proprietary has meant that it has been required to expose stable public interfaces for customisation. This means that it is much easier for a small company to maintain a load of in-house extensions to MS Office than it is to do the same for most F/OSS projects. In the ‘90s, MS invested heavily in end-user programming tools and as a result it’s quite easy for someone with a very small amount of programming experience to write some simple automation for their workload in MS Office. A lot of F/OSS projects have an elitist attitude about programming and don’t want end users to be extending the programs unless they pass the gatekeeping requirements of learning programming languages whose abstractions are far too low-level for the task at hand. There is really no reason that anything other than a core bit of compute-heavy code for any desktop or mobile app needs to be written in C/C++/Rust, when it could be in interpreted Python or Lua without any user-perceptible difference in performance.
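As a tiny illustration of the stable-interface idea (a hypothetical sketch, not how any particular product does it), even a shell tool can expose an extension point that end users script against without touching the core:

```shell
#!/bin/sh
# Hypothetical sketch: the core program promises one stable contract --
# after a save, every plugins.d/*.sh dropin is loaded and its
# on_save FILE function is called. Users extend behavior by dropping
# in small scripts; the core never needs to change.

mkdir -p plugins.d
cat > plugins.d/10-backup.sh <<'EOF'
on_save() {
    echo "backup hook: copying $1 to $1.bak"
}
EOF

# Core side: iterate over dropins; a subshell keeps each plugin isolated.
run_save_hooks() {
    for p in plugins.d/*.sh; do
        [ -e "$p" ] || continue
        ( . "$p" && on_save "$1" )
    done
}

run_save_hooks report.txt | tee hooks.log
```

The contract (“on_save FILE is called after a save”) is the stable interface; as long as it holds, user extensions survive every internal refactor of the core.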

                                                                                    Even the second-source argument (which is really compelling to a lot of companies) doesn’t really hold up because modern codebases are so huge. Remember that Stallman was writing that manifesto back when a typical home computer such as the BBC Model B was sufficiently simple that a single person could completely understand the entire hardware and software stack and a complete UNIX system could be written by half a dozen people in a year (Minix was released a few years later and was written by a single person, including kernel and userland. It was around 15,000 lines of code). Modern software is insanely complicated. Just the kernel for a modern *NIX system is millions of lines of code, so is the compiler. The bc utility is a tiny part of the FreeBSD base system (if memory serves, you wrote it, so should be familiar with the codebase) and yet is more code than the whole of UNIX Release 7 (it also has about as much documentation as the entire printed manual for UNIX Release 7).

In a world where software is this complex, it might be possible for a second company to come along and fix a bug or add a feature for you, but it’s going to be a lot more expensive for them to do it than for the company that’s familiar with the codebase. This is pretty much the core of Red Hat’s business model: they use Fedora to push core bits of Red Hat-controlled code into the Linux ecosystem, make them dependencies for everything, and then can charge whatever they like for support because no one else understands the code.

                                                                                    From an end-user perspective, well-documented stable interfaces with end-user programming tools give you the key advantages of Free Software. If there are two (or more) companies that implement the same stable interfaces, that’s a complete win.

                                                                                    F/OSS also struggles with an economic model. Proprietary software exists because we don’t have a good model for any kind of zero-marginal-cost goods. Creating a new movie, novel, piece of investigative journalism, program, and so on, is an expensive activity that needs funding. Copying any of these things has approximately zero cost, yet we fund the former by charging for the latter. This makes absolutely no sense from any rational perspective yet it is, to date, the only model that has been made to work at scale.

                                                                                    [1] Well, okay, I work at MS and with the whole ‘One Microsoft’ initiative I can browse all of our internal code and submit fixes, but this isn’t an option for most people.

                                                                                    1. 3

                                                                                      The fact that MS Office is proprietary has meant that it has been required to expose stable public interfaces for customisation. This means that it is much easier for a small company to maintain a load of in-house extensions to MS Office than it is to do the same for most F/OSS projects. In the ‘90s, MS invested heavily in end-user programming tools and as a result it’s quite easy for someone with a very small amount of programming experience to write some simple automation for their workload in MS Office. A lot of F/OSS projects have an elitist attitude about programming and don’t want end users to be extending the programs unless they pass the gatekeeping requirements of learning programming languages whose abstractions are far too low-level for the task at hand. There is really no reason that anything other than a core bit of compute-heavy code for any desktop or mobile app needs to be written in C/C++/Rust, when it could be in interpreted Python or Lua without any user-perceptible difference in performance.

                                                                                      I’ve found this to be true for Windows too, as I wrote in a previous comment. I technically know how to extend the Linux desktop beyond writing baubles, but it’s shifting sands compared to how good Windows has been with extensibility. I’m not going to maintain a toolkit or desktop patchset unless I run like, Gentoo.

                                                                                      BTW, from your other reply:

                                                                                      I created a desktop environment project around this idea but we didn’t have sufficient interest from developers to be able to build anything compelling. F/OSS has a singular strength that is also a weakness: It is generally written by people who want to use the software, not by people who want to sell the software. This means that it tends to be incredibly usable to the authors but it is only usable in general if the authors are representative of the general population (and since they are, by definition, programmers, that is intrinsically not the case).

I suspect this is why it never built a tool like Access/HyperCard/Excel/etc. that empowers end users: developers don’t need one. Arguably, assuming users are developers is the original sin of free software, and, in a wider sense, why its threat model drifted further from reality.

                                                                                      1. 2

                                                                                        The problem with ‘non-viral’ copyleft licenses (more commonly known as ‘per-file copyleft’ licenses) is that they impede refactoring. They’re fine if the thing is completely self-contained but if you want to change where a layer is in the system then you can’t move functions between files without talking to lawyers.

Is it possible to have a non-viral copyleft license that is not per-file? I hope so; I have written licenses that attempt it, which I am going to have checked by a lawyer. If he says it’s impossible, I’ll have to give up on that.

                                                                                        Oh, and if you use them you’re typically flamed by the FSF because I don’t think anyone has managed to write a per-file copyleft license that is GPL-compatible (Mozilla got around this by triple-licensing things).

                                                                                        Eh, I’m not worried about GPL compatibility. And I’m not worried about being flamed by the FSF.

                                                                                        That said, I think one of the key parts of this article is something that I wrote about 15 or so years ago: From an end-user perspective, MS Office better meets a bunch of the Free Software Manifesto requirements than OpenOffice. If I find a critical bug in either then, as an experienced C++ programmer, I still have approximately the same chance of fixing it in either: zero. MS doesn’t let me fix the MS Office bug[1] but I’ve read some of the OpenOffice code and I still have nightmares about it. For a typical user, who isn’t a C++ programmer, OpenOffice is even more an opaque blob.

                                                                                        This is a good point, and it is a massive blow against Free Software since Free Software was supposed to be about the users.

                                                                                        Even the second-source argument (which is really compelling to a lot of companies) doesn’t really hold up because modern codebases are so huge.

                                                                                        I personally think this is a separate problem, but yes, one that has to be fixed before the second-source argument applies.

                                                                                        The bc utility is a tiny part of the FreeBSD base system (if memory serves, you wrote it, so should be familiar with the codebase) and yet is more code than the whole of UNIX Release 7 (it also has about as much documentation as the entire printed manual for UNIX Release 7).

Sure, it’s a tiny part of the codebase, but I’m not sure bc is a good example here. bc is probably the most complicated of the POSIX tools, and it still has fewer lines of code than MINIX. (It’s about 10k actual lines of code; there are a lot of comments for documentation.) You said MINIX implemented userspace; does that mean the POSIX tools? If it did, I have very little faith in the robustness of those tools.

I don’t know if you’ve read the sources of the original Morris bc, but I have (well, its closest descendant). It was terrible code. When checking for keywords, the parser just looked at the second letter of a name and happily continued. And there was hardly any error checking at all.
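To illustrate the kind of shortcut I mean (a hypothetical reconstruction in POSIX sh, not the actual Morris bc code, which was C), compare a check keyed on a single distinguishing letter with one that compares the whole token:

```shell
#!/bin/sh
# Hypothetical illustration of a fragile keyword check: classify a token
# as the keyword "if" whenever its second letter is 'f'.
is_if_fragile() {
    case $1 in
        ?f*) return 0 ;;
        *)   return 1 ;;
    esac
}

# Robust check: compare the entire token.
is_if_robust() {
    [ "$1" = "if" ]
}

for tok in if of effort; do
    is_if_fragile "$tok" && f=yes || f=no
    is_if_robust  "$tok" && r=yes || r=no
    printf '%-8s fragile=%s robust=%s\n' "$tok" "$f" "$r"
done
```

The fragile version happily accepts `of` and `effort` as `if`; with no further error checking downstream, garbage input sails straight through the parser.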

                                                                                        After looking at that code, I wondered how much of original Unix was terrible in the same way, and how terrible MINIX’s userspace is as well.

                                                                                        So I don’t think holding up original Unix as an example of “this is how simple software can be” is a good idea. More complexity is needed than that; we want robust software as well.

In other words, I think there is a place for more complexity in software than original Unix had. However, the complexity in modern-day software is out of control. Compilers don’t need to be millions of lines of code, and, if you discount drivers, neither do operating systems. But they can have a good amount of code. (I think a compiler with 100k LOC is not too bad, if you include optimizations.)

                                                                                        So we’ve gone from too much minimalism to too much complexity. I hope we can find the center between those two. How do we know when we have found it? When our software is robust. Too much minimalism removes robustness, and too much complexity does the same thing. (I should write a blog post about that, but the CMake/Make recursive performance one comes first.)

                                                                                        bc is complex because it’s robust. In fact, I always issue a challenge to people who claim that my code is bad to find a crash or a memory bug in bc. No one has ever come back with such a bug. That is the robustness I am talking about. That said, if bc were any more complex than it is (and I could still probably reduce its complexity), then it could not be as robust as it is.

                                                                                        Also, with regards to the documentation, it has that much documentation because (I think) it documents more than the Unix manual. I have documented it to ensure that the bus factor is not a thing, so the documentation for it goes down to the code level, including why I made decisions I did, algorithms I used, etc. I don’t think the Unix manual covered those things.

                                                                                        From an end-user perspective, well-documented stable interfaces with end-user programming tools give you the key advantages of Free Software. If there are two (or more) companies that implement the same stable interfaces, that’s a complete win.

                                                                                        This is a point I find myself reluctantly agreeing with, and I think it goes back to something you said earlier:

                                                                                        A lot of F/OSS projects have an elitist attitude about programming and don’t want end users to be extending the programs unless they pass the gatekeeping requirements of learning programming languages whose abstractions are far too low-level for the task at hand.

                                                                                        This, I think, is the biggest problem with FOSS. FOSS was supposed to be about user freedom, but instead, we adopted this terrible attitude and lost our way.

                                                                                        Perhaps if we discarded this attitude and made software designed for users and easy for users to use and extend, we might turn things around. But we cannot make progress with that attitude.

                                                                                        That does, of course, point to you being correct about other things, specifically, that licenses matter too much right now because if we changed that attitude, would licenses really matter? In my opinion, not to the end user, at least.

                                                                                        1. 5

                                                                                          Sure, it’s a tiny part of the codebase, but I’m not sure bc is a good example here. bc is probably the most complicated of the POSIX tools, and it still has less lines of code than MINIX. (It’s about 10k of actual lines of code; there are a lot of comments for documentation.) You said MINIX implemented userspace; does that mean POSIX tools? If it did, I have very little faith in the robustness of those tools.

                                                                                          To be clear, I’m not saying that everything should be as simple as code of this era. UNIX Release 7 and Minix 1.0 were on the order of 10-20KLoC for two related reasons:

                                                                                          • The original hardware was incredibly resource constrained, so you couldn’t fit much software in the available storage and memory.
                                                                                          • They were designed for teaching (more true for Minix, but somewhat true for early UNIX versions) and so were intentionally simple.

Minix did, I believe, implement POSIX.1, but so did NT4’s POSIX layer: returning ENOSYS was a valid implementation, and it was also valid for setlocale to support only "C" and "POSIX". Things that were missing were added in later systems because they were useful.

My point is that the GNU Manifesto was written at a time when it was completely feasible for someone to sit down and rewrite all of the software on their computer from scratch. Today, I don’t think I would be confident that I could rewrite awk or bc, let alone Chromium or LLVM, from scratch, and I don’t think I’d even be confident that I could fix a bug in one of these projects (I’ve been working on LLVM since around 2007, and there are bugs I’ve encountered that I’ve had no idea how to fix, and LLVM is one of the most approachable large codebases that I’ve worked on).

                                                                                          So we’ve gone from too much minimalism to too much complexity. I hope we can find the center between those two. How do we know when we have found it? When our software is robust. Too much minimalism removes robustness, and too much complexity does the same thing. (I should write a blog post about that, but the CMake/Make recursive performance one comes first.)

                                                                                          I’m not convinced that we have too much complexity. There’s definitely some legacy cruft in these systems but a lot of what’s there is there because it has real value. I think there’s also a principle of conservation of complexity. Removing complexity at one layer tends to cause it to reappear at another and that can leave you with a less robust system overall.

                                                                                          Perhaps if we discarded this attitude and made software designed for users and easy for users to use and extend, we might turn things around. But we cannot make progress with that attitude.

                                                                                          I created a desktop environment project around this idea but we didn’t have sufficient interest from developers to be able to build anything compelling. F/OSS has a singular strength that is also a weakness: It is generally written by people who want to use the software, not by people who want to sell the software. This means that it tends to be incredibly usable to the authors but it is only usable in general if the authors are representative of the general population (and since they are, by definition, programmers, that is intrinsically not the case).

                                                                                          One of the most interesting things I’ve seen in usability research was a study in the early 2000s that showed that only around 10-20% of the population thinks in terms of hierarchies for organisation. Most modern programming languages implicitly have a notion of hierarchy (nested scopes and so on) and this is not a natural mindset of the majority of humans (and the most widely used programming language, Excel, does not have this kind of abstraction). This was really obvious when iTunes came out with its tag-and-filter model: most programmers said ‘this is stupid, my music is already organised in folders in a nice hierarchy’ and everyone else said ‘yay, now I can organise my music!’. I don’t think we can really make usable software until we have programming languages that are usable by most people, so that F/OSS projects can have contributors that really reflect how everyone thinks. Sadly, I’m making this problem worse by working on a programming language that retains several notions of hierarchy. I’d love to find a way of removing them but they’re fairly intrinsic to any kind of inductive proof, which is (to date) necessary for a sound type system.

                                                                                          That does, of course, point to you being correct about other things, specifically, that licenses matter too much right now because if we changed that attitude, would licenses really matter? In my opinion, not to the end user, at least.

                                                                                          Licenses probably wouldn’t matter to end users, but they would still matter for companies. I think one of the big things that the F/OSS community misses is that 90% of people who write software don’t work for a tech company. They work for companies whose primary business is something else and they just need some in-house system that’s bespoke. Licensing matters a lot to these people because they don’t have in-house lawyers who are an expert in software licenses and so they avoid any license that they don’t understand without talking to a lawyer. These people should be the ones that F/OSS communities target aggressively because they are working on software that is not their core business and so releasing it publicly has little or no financial cost to them.

                                                                                          1. 1

                                                                                            To be clear, I’m not saying that everything should be as simple as code of this era.

                                                                                            Apologies.

                                                                                            My point is that the GNU Manifesto was written at a time when it was completely feasible for someone to sit down and rewrite all of the software on their computer from scratch.

                                                                                            Okay, that makes sense, and I agree that the situation has changed.

Today, I don’t think I would be confident that I could rewrite awk or bc, let alone Chromium or LLVM, from scratch, and I don’t think I’d even be confident that I could fix a bug in one of these projects (I’ve been working on LLVM since around 2007, and there are bugs I’ve encountered that I’ve had no idea how to fix, and LLVM is one of the most approachable large codebases that I’ve worked on).

                                                                                            I think I can tell you that you could rewrite awk or bc. They’re not that hard, and 10k LOC is a walk in the park for someone like you. But point taken with LLVM and Chromium.

But then again, I think LLVM could be less complex. Chromium could be as well, but it’s limited by the W3C standards. I could be wrong, though.

                                                                                            I think the biggest problem with most software, including LLVM, is scope creep. Even with bc, I feel the temptation to add more and more.

                                                                                            With LLVM, I do understand that there is a lot of inherent complexity, targeting multiple platforms, lots of needed canonicalization passes, lots of optimization passes, codegen, register allocation. Obviously, you know this better than I do, but I just wanted to make it clear that I understand the inherent complexity. But is it all inherent?

                                                                                            I’m not convinced that we have too much complexity. There’s definitely some legacy cruft in these systems but a lot of what’s there is there because it has real value. I think there’s also a principle of conservation of complexity. Removing complexity at one layer tends to cause it to reappear at another and that can leave you with a less robust system overall.

                                                                                            There is a lot of truth to that, but that’s why I specifically said (or meant) that maximum robustness is the target. I doubt you or anyone would say that Chromium is as robust as possible. I personally would not claim that about LLVM either. I also certainly would not claim that about Linux, FreeBSD, or even ZFS!

                                                                                            And I would not include legacy cruft in “too much complexity” unless it is past time that it is removed. For example, Linux keeping deprecated syscalls is not too much complexity, but keeping support for certain arches that have only single-digit users, none of whom will update to the latest Linux, is definitely too much complexity. (It does take a while to identify such cruft, but we also don’t spend enough effort on it.)

                                                                                            Nevertheless, I agree that trying to remove complexity where you shouldn’t will lead to it reappearing elsewhere.

                                                                                            F/OSS has a singular strength that is also a weakness: It is generally written by people who want to use the software, not by people who want to sell the software. This means that it tends to be incredibly usable to the authors but it is only usable in general if the authors are representative of the general population (and since they are, by definition, programmers, that is intrinsically not the case).

                                                                                            I agree with this, and the only thing I could think of to fix this is to create some software that I myself want to use, and to actually use it, but to make it so good that other people want to use it. Those people need support, which could lead to me “selling” the software, or services around it. Of course, as bc shows (because it does fulfill all of the requirements above, but people won’t pay for it), it should not just be anything, but something that would be critical to infrastructure.

                                                                                            One of the most interesting things I’ve seen in usability research was a study in the early 2000s that showed that only around 10-20% of the population thinks in terms of hierarchies for organisation. Most modern programming languages implicitly have a notion of hierarchy (nested scopes and so on) and this is not a natural mindset of the majority of humans (and the most widely used programming language, Excel, does not have this kind of abstraction).

                                                                                            I think I’ve seen that result, and it makes sense, but hierarchy unfortunately makes sense for programming because of the structured programming theorem.

That said, there is a type of programming (beyond Excel) that I think could be useful for the majority of humans: functional programming. Data goes in, gets crunched, comes out. I don’t think such transformation-oriented programming would be too hard for anyone. Bonus points if you can make it graphical (maybe like Blender’s node compositor?). Of course, it would probably end up being quite…inefficient…but once efficiency is required, they can probably get help from a programmer.

                                                                                            I don’t think we can really make usable software until we have programming languages that are usable by most people, so that F/OSS projects can have contributors that really reflect how everyone thinks.

                                                                                            I don’t think it’s possible to create programming languages that produce software that is both efficient and well-structured without hierarchy, so I don’t think, in general, we’re going to be able to have contributors (for code specifically) that are not programmers. That does make me sad. However, what we could do is have more empathy for users and stop assuming we have the same perspective as they do. We could assume that what is good for normal users might not be bad for us and actually try to give them what they need.

But even with that, I don’t think the result from that research is that 80-90% of people can’t think in hierarchies, just that they do not do so naturally. I think they can learn. Whether they want to is another matter…

                                                                                            I could be wrong about both things; I’m still young and naive.

                                                                                            Licenses probably wouldn’t matter to end users, but they would still matter for companies. I think one of the big things that the F/OSS community misses is that 90% of people who write software don’t work for a tech company. They work for companies whose primary business is something else and they just need some in-house system that’s bespoke. Licensing matters a lot to these people because they don’t have in-house lawyers who are an expert in software licenses and so they avoid any license that they don’t understand without talking to a lawyer. These people should be the ones that F/OSS communities target aggressively because they are working on software that is not their core business and so releasing it publicly has little or no financial cost to them.

                                                                                            That’s a good point. How would you target those people if you were the one in charge?

                                                                                            Now that I have written a lot and taken up a lot of your time, I must apologize. Please don’t feel obligated to respond to me. But I have learned a lot in our conversations.

                                                                                        2. 1

                                                                                          They’re fine if the thing is completely self-contained but if you want to change where a layer is in the system then you can’t move functions between files without talking to lawyers.

                                                                                          Maybe I misunderstand MPL 2.0, but I think this is a non-issue: if you’re not actually changing the code (just the location), you don’t have to publish anything. If you modify the code (changing implementation), then you have to publish the changes. This is easiest done on a per file basis of course, but I think you technically only need to publish the diff.

                                                                                          This is why it’s non viral: you say, “I’ve copied function X into my code and changed the input from integer to float”. You don’t have to say anything else about how it’s used or why such changes were necessary.

                                                                                          1. 1

                                                                                            Generally, when you refactor, you don’t just move the code, you move and modify it. If you modify code from an MPL’d file that you’ve copied into another file then you need to make sure that you propagate the MPL into that file and share the changes.

                                                                                          2. 1

they use Fedora to push core bits of Red Hat-controlled code into the Linux ecosystem, make them dependencies for everything, and then can charge whatever they like for support because no one else understands the code.

                                                                                            How do they make their things “dependencies for everything”? It seems you left out a step where other vendors/distributions choose to adopt Red Hat projects or not.

                                                                                            1. 2

                                                                                              ISTM that quite a number of RH-backed projects are now such major parts of the infrastructure of Linux that it’s quite hard not to use them. Examples: pulseaudio, systemd, Wayland, and GNOME spring to mind.

                                                                                              All the mainstream distros are now based on these, and the alternatives that are not are increasingly niche.

                                                                                          3. 4

                                                                                            If you want “non viral copyleft”, there are options: Mozilla Public License and the CDDL which has been derived from it. While they have niches in which they’re popular it’s not like they have taken off, so I’m not sure if “companies would be willing” is the right description.

                                                                                            1. 1

                                                                                              I think you have a point, which is discouraging to say the least.

                                                                                            2. 1

Without the viral nature, couldn’t you essentially white-wash the license by forking once and relicensing as MIT, then forking the MIT fork? It would take any power out of the license to enforce its terms.

                                                                                              1. 2

                                                                                                No.

                                                                                                Virality is a separate thing from copyleft. People just think they are connected because the GPL is the first license that had both.

                                                                                                You can have a clause in the license that says that the software must be distributed under that license for the parts of the software that were originally under the license.

                                                                                                An example is a license I’ve written (https://yzena.com/yzena-copyleft-license/). It says specifically that the license only applies to the original source code, and any changes to the original source code. Anything else that is integrated (libraries, etc.) is not under the license.

                                                                                                Warning: Do NOT use that license. I have not had a lawyer check it. I will as soon as I can, but until then, it’s not a good idea to use.

                                                                                                1. 1

                                                                                                  No, because you can’t relicense someone else’s work.

“Virality” refers to how it forces other software that depends on the viral software to be released under the same license.

                                                                                                  1. 1

So would you have to submit the source of the individual GPL components used as part of a derivative work? I don’t think the GPL would even make sense if it didn’t affect the whole project; that’s what the LGPL is for.

                                                                                                    1. 1

I think if you want to add a single GPL component, you would need to release the full software under the GPL (unless other licenses allow the mixing).

                                                                                              1. 37

                                                                                                I hate to say it, but while the substance of the article is useful, it disproves the title, in my opinion.

                                                                                                The title says that Nix is close to perfect, and then lays out a lot of shortcomings that take it far away from perfect.

                                                                                                I personally wish that there was a Nix that was well-documented, not so academic and elitist, and in general, had some empathy for users, especially for new users. In fact, lacking empathy for users alone makes something not close to perfect, in my opinion.

Also, the soft split that is mentioned makes me nervous. Why such a split? What makes flakes good enough that some people use them, but not so much better that everyone does?

                                                                                                This all might sound very negative, and if so, I apologize. I want Nix’s ideas to take off, so I actually feel discouraged about the whole thing.

                                                                                                1. 17

                                                                                                  Unpopular opinion here. The Nix docs are weird, but they are mostly fine. I usually don’t have any issue with them. The thing that usually gets me is the holes in my knowledge about how certain packaging technologies (both distro- and language-level ones) work and taking some of the things other distros do automatically for granted.

Here’s an example. You are playing in a Ubuntu-based distro, and you are writing some Python. You pip install some-dependency, import it, and everything is easy, right? Well, it felt easy because two months ago you apt install-ed a C dependency you forgot about, and that brought in a shared lib that your Python package uses. Or your pip install fetches a pre-built wheel that “just runs” (only on Ubuntu and a few other distros, of course).

                                                                                                  Nix is brutally honest and makes this shit obvious. Unfortunately, dealing with it is hard. [1] Fortunately, once you deal with all that, it tends to stay dealt with and doesn’t randomly break on other people’s computers.

                                                                                                  Learning Nix has helped me learn Linux in ways I never originally suspected. It’s tons of fun (most of the time)!


                                                                                                  [1] The rude awakening that a Python library can require a Fortran compiler is always fun to watch from the side. :)

                                                                                                  1. 10

                                                                                                    The Nix docs are weird because they’re written for the wrong audience: people who want to learn Nix. I don’t care about Nix. Every moment I spend learning about Nix is just an inconvenience. Most Nix users are probably like that too. Users just want a packaging system that works, all of this discussion about Nix fundamentals is anti-documentation, things we need to skip to get to what we want: simple recipes for common tasks.

                                                                                                    But Nix also has at least two fundamental technical issues and one practical issue that exacerbate the doc situation. The practical issue has to do with the name: it’s just a search disaster that Nix is three things (a distro, a language, and a package manager). On to the technical issues.

                                                                                                    1. I can’t explore Nix because of the choice of a lazy language. Forcing values by printing them with builtins.trace is a minefield in Nix. Sometimes printing an object will result in it trying to create thousands of .drv files. Other times you find yourself printing one of the many circular objects that Nix uses.

                                                                                                      In Haskell and C++ I get to look at types to figure out what kind of object I’ve got, in addition to the docs. In Scheme and Python I get to print values to explore any object. In Nix? I can do neither. I don’t get types and I don’t get to print objects at runtime easily. At least you can print .drv files to figure out that say, a package happens to have a lib output, and that’s what you need to depend on instead of the default out output.

                                                                                                    2. There are almost no well-defined APIs within the Nix world.

Aside from derivations, it’s all ad-hoc. Different parts of the Nix ecosystem work completely differently to accomplish the same goals. So learning how to do something with C packages doesn’t help you when you’re dealing with Python packages, and doesn’t help you when you’re dealing with Haskell packages (where there are two completely different ecosystems that are very easy for novices to confuse). Flakes add a bit of structure, but they’ve been unstable for 3 years now with no stability on the horizon.

                                                                                                    1. 2

                                                                                                      I agree on both technical issues. Static types and type signatures in Nix would be especially amazing. I spend so much time wondering “what type does this have” when looking at nixpkgs code. :(

As for the fundamentals and anti-documentation features, I am not so sure. I think Nix is such a fundamentally different way of doing things that you need to start somewhere. For example, I can’t give users a packaging script sprinkled with content SHAs without explaining what those are and why we need them in the first place (it’s especially baffling when they sit right next to git commit SHAs). The Nix pills guide has a good way of introducing the important concepts, and maybe it can be shortened, so that people can go through most of the stuff they need in half an hour. I don’t know…

                                                                                                  2. 10

                                                                                                    not so academic and elitist

                                                                                                    For a language that is the bastard child of Bash and ML, I would not consider it “academic”. The ugliness of the language is due in no small part to the affordances for real-world work.

                                                                                                    As far as elitism…well, it’s a hard tool to use. It’s getting easier. It’s strange to me to expect that such powerful magic shouldn’t take some work to learn (if not master).

                                                                                                    1. 19

                                                                                                      For a language that is the bastard child of Bash and ML, I would not consider it “academic”. The ugliness of the language is due in no small part to the affordances for real-world work.

                                                                                                      I never said the language was academic. I don’t think it is. In fact, it’s less that the language is academic and more that the documentation and culture are.

                                                                                                      As far as elitism…well, it’s a hard tool to use. It’s getting easier. It’s strange to me to expect that such powerful magic shouldn’t take some work to learn (if not master).

                                                                                                      Power does not imply something must be hard to learn. That is a common misconception, but it’s not true.

                                                                                                      As an example, consider Python. It’s not that hard to learn, yet it is enormously powerful. Or Ruby.

                                                                                                      In fact, Ruby is a great example because even though it’s powerful, it is mostly approachable because of the care taken in helping it to be approachable, epitomized by _why’s Poignant Guide.

                                                                                                      _why’s efforts worked because he made Ruby approachable with humor and by tying concepts of Ruby to what people already knew, even if he had to carefully lay out subtle differences. Those techniques, applied with care, would work for Nix too.

                                                                                                      So the problem with Nix is that they use the power as an excuse to not put more effort into making it approachable, like _why did for Ruby. This, I think, is a side effect of the culture I mentioned above.

                                                                                                      If someone wrote a Nix equivalent of _why’s Poignant Guide, playing to their own strengths as writers and not just trying to copy _why, I think Nix would have a massive uptake not long after.

                                                                                                      In fact, please do write that guide, if you would like to.

                                                                                                      1. 12

                                                                                                        If I had more spoons I’d definitely do that

                                                                                                    2. 9

I agree. It is some of the coolest Linux technology that is out there, but it is so hard to use, both because of its poor documentation and because of how different it is. When I used it, it felt like once a week I would try to do something not allowed/possible and then would have to go multiple pages deep on a thread somewhere to find multiple competing tools that claim to solve that problem best. I think I will try Nix the language on the next personal project I make that involves more than one language, but I haven’t had a chance to do that in a while.

                                                                                                      I would love to try NixOS again sometime. Hopefully they will come out with better documentation and/or a “let me cheat just this once so I can continue working” feature.

                                                                                                      Edit: I forgot to say, great article though! I enjoyed your perspective.

                                                                                                      1. 6

                                                                                                        I have found the Guix documentation quite good, and the community very welcoming, for what it’s worth.

                                                                                                        1. 6

Keep in mind that the more familiar someone is with the subject, the more issues they can talk about. I could go on for ages about problems with Python, even though it is a perfect language for most of my use cases; it’s not a contradiction.

The post just concentrated on the negatives rather than the positives, and there are some really cool things about Nix. Especially if you have use cases where everything else seems to be worse (looking at you, chef/puppet/ansible).

                                                                                                          1. 1

                                                                                                            I wouldn’t feel discouraged if I were you. Nix’s community is nothing but growing. Most of these issues are warts, not dealbreakers.

                                                                                                            1. 1

Makes me miss WORLDofPEACE. I didn’t know them, but they’re a good example of someone who can make anyone feel welcome, IMO.

                                                                                                            1. 4

                                                                                                              I’m going to post my Hacker News comment about this article here.

                                                                                                              This post has good ideas, but there are a few things wrong with this.

First, we forget that filesystems are not hierarchies; they are graphs, whether DAGs or not. [1]

                                                                                                              Second, and this follows from the first, both tags and hierarchy are possible with filesystems as they currently are.

                                                                                                              Here’s how you do it:

                                                                                                              1. Organize your files in the hierarchy you want them in.
                                                                                                              2. Create a directory in a well-known place called tags/ or whatever you want.
                                                                                                              3. For every tag <name>, create a directory tags/<name>/
4. Hard-link all files you want to tag under each tag directory that applies.
                                                                                                              5. For extra credit, create a soft link pointing to the same file, but with a well-known name.
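
The steps above can be sketched in plain shell. This is a minimal, hypothetical setup: the hierarchy root `docs/`, the tag `urgent`, and the file name `report.txt` are all made-up names for illustration.

```shell
set -eu

# Steps 1-3: a hierarchy, and a tag directory under tags/.
mkdir -p docs/projects tags/urgent

# A file living in the hierarchy.
echo "quarterly report" > docs/projects/report.txt

# Step 4: hard-link the file under the tag directory.
ln docs/projects/report.txt tags/urgent/report.txt

# Step 5: a soft link with a well-known name, pointing back
# to the file's location in the hierarchy.
ln -s ../../docs/projects/report.txt tags/urgent/report.txt.link

# Listing regular files under a tag finds the hard-linked copy.
find tags/urgent -type f
```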

                                                                                                              This allows you to use the standard filesystem tools to get all files under a specific tag. For example,

                                                                                                              find tags/<name> -type f
                                                                                                              

                                                                                                              (The find on my machine does not follow symbolic links and does not print them if you use the above command.) If you want to find where the file is actually under the hierarchy, use

                                                                                                              find -L tags/ -xtype l
                                                                                                              

                                                                                                              Having both hard and soft links means that 1) you cannot lose the actual file if it’s moved in the hierarchy (the hard link will always refer to it), and 2) you can either find the file in the hierarchy from the tag or you know that the file has been moved in the hierarchy.
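
Both properties are easy to demonstrate with throwaway names (`docs/notes.txt` and the tag `todo` are hypothetical here):

```shell
set -eu
mkdir -p docs tags/todo
echo "original" > docs/notes.txt

# Hard link: the tag's grip on the file itself.
ln docs/notes.txt tags/todo/notes.txt
# Soft link: records where the file sits in the hierarchy.
ln -s ../../docs/notes.txt tags/todo/notes.txt.link

# Now move the file within the hierarchy.
mkdir -p docs/archive
mv docs/notes.txt docs/archive/notes.txt

# Property 1: the hard link still reaches the file's contents.
cat tags/todo/notes.txt

# Property 2: the now-dangling symlink tells us the file moved.
if [ ! -e tags/todo/notes.txt.link ]; then
    echo "file moved; re-point the tag's symlink"
fi
```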

                                                                                                              Also, if you want to find files under multiple tags, I found that the following command works:

                                                                                                              find -L tags/tag1 tags/tag2 -xtype l | xargs readlink -f | sort | uniq -d
                                                                                                              

                                                                                                              I have not figured out how to find files under more than one tag without following the links, but it could probably be done by prefixing each link name with its target and a space, then sorting on the target.
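
                                                                                                              As a sketch of that idea: GNU find's -printf '%l' prints a symlink's raw target without following it, and since every link sits at the same depth (tags/<tag>/), identical targets mean the same file. The fixture below is hypothetical:

```shell
# Hypothetical fixture: one file tagged under both tag1 and tag2.
mkdir -p docs tags/tag1 tags/tag2
touch docs/a.txt
ln -s ../../docs/a.txt tags/tag1/a.txt.link
ln -s ../../docs/a.txt tags/tag2/a.txt.link

# %l is GNU-find-specific: it prints the raw link target, no dereference.
# Targets that appear more than once belong to files under both tags.
find tags/tag1 tags/tag2 -type l -printf '%l\n' | sort | uniq -d
```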

                                                                                                              Of course, I’m no filesystem expert, so I probably got a few things wrong. I welcome smarter people to tell me how I am wrong.

                                                                                                              1. 6

                                                                                                                The hard/soft link scheme has some problems. The way most applications save files breaks hard links: for safety, an application writes to a new file and then renames the new file over the old one. A symlink will survive that, but if a file is both saved and moved/renamed between checks of the tag, both of your links are broken.
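
                                                                                                                A quick shell demonstration of the safe-save failure mode (the file names are hypothetical):

```shell
# Write-to-temp-then-rename replaces the inode, so a pre-existing
# hard link keeps pointing at the *old* contents.
mkdir -p work worktags
echo "v1" > work/doc.txt
ln work/doc.txt worktags/doc.txt     # hard link: same inode

echo "v2" > work/doc.txt.tmp         # how many editors "safely" save
mv work/doc.txt.tmp work/doc.txt     # atomic rename swaps in a new inode

cat worktags/doc.txt                 # prints "v1": the tag link went stale
```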

                                                                                                                In a way you’re trying to reinvent the file alias, which has existed on macOS since 1991. An alias is like a symlink but also contains the original’s fileID (like a hard link), and if the file’s on a remote volume it has metadata allowing the filesystem to be remounted. macOS got around the safe-save problem with an FSExchangeFiles system call that preserves the original file’s fileID during the rename.

                                                                                                                At a higher level, though, I think your argument is similar to saying “you can already do X in language Y, because Y is Turing-complete.” Which is true, but irrelevant if doing X is too awkward or slow, or incompatible with the way everyone uses Y. Apple’s Spotlight metadata/search system represents this approach applied to a normal filesystem, but it’s still pretty limited.

                                                                                                                As an example of how things could be really, fundamentally different, my favorite mind-opening example is NewtonOS’s “soup”.

                                                                                                                1. 5

                                                                                                                  It’s worth noting that this is more or less what BFS did. It provided four high-level features:

                                                                                                                  • Storage entities that contained key-value pairs.
                                                                                                                  • Storage for small values.
                                                                                                                  • Storage for large values.
                                                                                                                  • Maps from queries to storage entities.

                                                                                                                  Every file is an entity and the ‘contents’ is typically either a large or small value (depending on the contents of the file) with a well-known key. HFS-style forks / NTFS alternate data streams could be implemented as other key-value pairs. Arbitrary metadata could also be stored with any file (the BeOS Tracker had some things to grab ID3 tags from MP3s and store them in metadata, for example).

                                                                                                                  BeOS provided a search function that would crawl metadata and generate a set of files that matched a specific query. This could be stored in BFS and any update to the metadata of any file could update the query. Directories were just a special case of this: they were saved queries of a key-pair identifying a parent-child relationship.

                                                                                                                  The problem is not that filesystems can’t represent these structures; it’s that:

                                                                                                                  • Filesystems other than BFS don’t have a consistent way of representing them (doubly true for networked filesystems) and,
                                                                                                                  • UIs don’t expose this kind of abstraction at the system level, so if it exists it’s inconsistent from one application to another.
                                                                                                                  1. 2

                                                                                                                    (I think the correct spelling is “BeFS”.) BeFS designer Dominic Giampaolo went on to Apple and applied a lot of these concepts in Spotlight. It’s not as deeply wired into the filesystem itself, but provides a lot of the same functionality.

                                                                                                                    1. 6

                                                                                                                      I think the correct spelling is “BeFS”

                                                                                                                      I take Dominic’s book describing the FS as the canonical source for the name, and it uses BFS, though I personally prefer BeFS.

                                                                                                                      BeFS designer Dominic Giampaolo went on to Apple and applied a lot of these concepts in Spotlight. It’s not as deeply wired into the filesystem itself, but provides a lot of the same functionality.

                                                                                                                      Spotlight is very nice in a lot of ways, but it is far less ambitious. In particular, Spotlight had the design requirement that it should work with SMB2 shares with the same abstractions. Because Spotlight maintains the indexes in userspace, it is possible to get out of sync (and is actually quite easy, which then makes the machine really slow for a bit as Spotlight tries to reindex everything, and things like Mail.app search just don’t work until it’s finished). Spotlight also relies on plugins to parse files, rather than providing structured metadata storage, which means that the same file on two different machines may appear differently in searches (saved or otherwise). For example, if you put a Word document on an external disk and then search for a keyword in its metadata, it will be found. If you then plug this disk into a machine that doesn’t have Word installed, it won’t be. In contrast, with the BFS model Word would have been responsible for storing the metadata, and then it would have been preserved everywhere.

                                                                                                                  2. 2

                                                                                                                    I like your idea. It made me realize that one little system of my own is, in fact, tagging: I have a folder called to-read that contains symlinks to documents I’ve saved.

                                                                                                                    Tangentially: I want rich Save dialogs. Current ones only let you save a file. I would love it if I could

                                                                                                                    • Save a file
                                                                                                                    • Set custom file attributes like ‘downloaded-from’ or ‘see-also’ or ‘note-to-self’
                                                                                                                    • Create symlinks or hardlinks in other directories
                                                                                                                    • Check or select programs/scripts to run on the newly created file
                                                                                                                    • All in one dialog
                                                                                                                    1. 2

                                                                                                                      Then somebody will edit their file with an editor that uses atomic rename, and your hard links will all be busted.

                                                                                                                      1. 2

                                                                                                                        TBH, this sounds like a fragile system that only addresses the very shallow benefits of a more db-like object store.