Threads for dvogel

  1. 29

    I think size of the program, and the team maintaining it, is an important factor in the static vs dynamic discussion.

    I’m in the “I don’t make type errors, and if I do, I can shake them out with a few tests” camp for as long as I can comprehend what’s going on in the codebase.

    But when the code grows to the point I can no longer remember where everything is, e.g. I can’t find all callers of a function, dynamism starts to become a burden. When I need to change a field of a struct, I have to find all uses of the field, because every missed one is a bug that’s going to show up later. At some point that becomes hard to grep for, and even impossible to account for in reflection-like code. It can degrade to a bug whack-a-mole, promote more and more defensive programming patterns, and eventually fear of change.

    I’ve had good experiences with gradually-typed languages. They stretch this “size limit” of a program a lot, while still allowing use of duck typing where it helps, without bringing in the complexity of generic type systems.

    1. 10

      “Dynamic typing falls apart with large team/large codebase” is one of those cliché arguments that doesn’t really contribute usefully, though.

      Also your presentation of it has multiple issues:

      • Large team/large codebase projects fail all the time regardless of typing discipline. Static typing doesn’t appear to have a better track record of success there.
      • Tooling for dynamically-typed languages has come a long way in the decades since this argument was first raised. You can just get an IDE and tell it to track down and rename references for you. And if your complaint is that it’s harder/impossible to do through “reflection-like code”, well, people can write metaprogram-y reflection stuff in statically-typed languages too.
      • Ultimately, if your codebase has lots of functions or methods that are called from huge numbers of disparate places, to such a degree that you can’t safely work with it without an IDE doing full static analysis to track them all for you, that’s a code smell in any language, in any typing discipline.
      1. 17

        Static languages can verify all metaprogramming is type correct. IDE heuristics cannot. In Rust you can write a macro and the compiler will expand and type check it. That kind of stuff is impossible in dynamic languages.
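
        As a rough sketch of what that means in practice (the macro below is invented purely for illustration): the Rust compiler expands the macro first, then type checks the expansion like any hand-written code, so a bad use is rejected before the program ever runs.

        // A tiny declarative macro; the compiler expands it at compile time
        // and then type checks the expanded code.
        macro_rules! square {
            ($x:expr) => {
                $x * $x
            };
        }

        fn main() {
            let n: i32 = square!(4); // expands to 4 * 4 and type checks fine
            println!("{n}");

            // let s = square!("hi"); // expands to "hi" * "hi" -- a type error
            //                        // caught at compile time, not at runtime
        }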

        1. 8

          Static languages can verify all metaprogramming is type correct.

          This is probably going to get off-topic into arguing about the exact definition of “statically-typed”, but: I think that if you venture outside of languages like Rust (which seem to deliberately limit metaprogramming features precisely to be able to provide guarantees about the subset they expose), you’ll find that several languages’ guarantees about ahead-of-time correctness checks start being relaxed when using metaprogramming, runtime code loading, and other “dynamic-style” features. Java, for example, cannot actually make guarantees as strong as you seem to want, and for this among other reasons the JVM itself is sometimes referred to as the world’s most advanced dynamically-typed language runtime.

          There also are plenty of things that seem simple but that you basically can’t do correctly in statically-typed languages without completely giving up on the type system. Truly generic JSON parsers, for example. Sure, you can parse JSON in a statically-typed language, but you either have to tightly couple your program to the specific structures you’ve planned in advance to handle (and throw runtime errors if you receive anything else), or parse into values of such ultra-generic “JSON object” types that the compiler and type system no longer are any help to you, and you’re effectively writing dynamically-typed code.
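
          To make the two options concrete, here is a minimal Rust sketch (it assumes the serde and serde_json crates, and the field names are invented for illustration): the typed version rejects anything it did not plan for at parse time, while the generic version accepts any JSON but gives the compiler nothing to check.

          use serde::Deserialize; // requires serde with the "derive" feature
          use serde_json::Value;

          // Option 1: tightly coupled to a structure planned in advance.
          // Anything that doesn't match becomes a runtime error at the parse boundary.
          #[derive(Deserialize)]
          struct User {
              name: String,
              age: u32,
          }

          fn main() {
              let input = r#"{"name": "sam", "age": 41}"#;

              let typed: User = serde_json::from_str(input).expect("unexpected shape");
              println!("{} is {}", typed.name, typed.age);

              // Option 2: the ultra-generic "JSON object" type. It accepts any valid
              // JSON, but every access is a runtime check -- effectively
              // dynamically-typed code living inside a statically-typed language.
              let generic: Value = serde_json::from_str(input).unwrap();
              if let Some(age) = generic.get("age").and_then(Value::as_u64) {
                  println!("age is {age}");
              }
          }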

          1.  

            Dynlangs are definitely better for data that isn’t as well structured.

            C#’s dynamic keyword feels like a perfect fit for this situation without having to give up static typing everywhere else. Hejlsberg is ahead of the curve, per usual.

            1.  

              for this among other reasons the JVM itself is sometimes referred to as the world’s most advanced dynamically-typed language runtime

              Aren’t runtimes always “dynamically typed”? What does it mean for a runtime to be “statically typed”?

              or parse into values of such ultra-generic “JSON object” types that the compiler and type system no longer are any help to you, and you’re effectively writing dynamically-typed code.

              It sounds like you’re arguing that the worst case for static type systems is equivalent to the best case for dynamic type systems, which doesn’t seem like a ringing endorsement for dynamic type systems. That said, I don’t even think this is true for this JSON-parsing example, because you could conceive of a generic JSON parser that has different unmarshaling strategies (strict, permissive, etc). Further, as static type systems are adopted more widely, this sort of poorly-structured data becomes rarer.

              1. 5

                Aren’t runtimes always “dynamically typed”?

                Some more so than others. Rust, for all its complexity as a language, is mostly shoving that complexity onto the compiler in hopes of keeping the runtime relatively simple and fast, because the runtime doesn’t have to do quite as much work when it trusts that there are classes of things the compiler simply prevents in advance (the runtime still does some work, of course, just not as much, which is the point).

                But a language like Java, with runtime code loading and code injection, runtime reflection and introspection, runtime creation of a wide variety of things, etc. etc. does not get to trust the compiler as much and has to spend some runtime cycles on type-checking to ensure no rules are being broken (and it’s not terribly hard to deliberately write Java programs that will crash with runtime type errors, if you want to).

                That said, I don’t even think this is true for this JSON-parsing example, because you could conceive of a generic JSON parser that has different unmarshaling strategies (strict, permissive, etc).

                If you want truly generic parsing, you’re stuck doing things that the compiler can’t really help you with. I’ve seen even people who are quite adept at Haskell give up and effectively build a little subset of the program where everything is of a single JSON type, which is close enough to being dynamically typed as makes no difference.

                Further, as static type systems are adopted more widely, this sort of poorly-structured data becomes rarer.

                My experience of having done backend web development across multiple decades is that poorly-structured data isn’t going away anytime soon, and any strategy which relies on wishing poorly-structured data out of existence is going to fail.

                1.  

                  Aren’t runtimes always “dynamically typed”?

                  If you eschew these needlessly binary categories of static vs dynamic and see everything on a scale of dynamism then I think you’ll agree that runtimes are scattered across that spectrum. Many even shift around on that spectrum over time. For example, if you look at the history of JSR 292 for adding invokedynamic to the JVM you’ll find a lot of cases where the JVM used to be a lot less dynamically typed than it is today.

                2. 2

                  There’s no reason you can’t parse a set of known JSON fields into static members and throw the rest into an ultra-generic JSON object.
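
                  For what it’s worth, serde makes that hybrid fairly painless in Rust; here is a rough sketch (the serde and serde_json crates are assumed, field names invented for illustration): the known fields get full compiler support, and everything else lands in a generic map.

                  use std::collections::HashMap;

                  use serde::Deserialize; // requires serde with the "derive" feature
                  use serde_json::Value;

                  #[derive(Deserialize)]
                  struct Event {
                      // Fields we know about and want the compiler's help with...
                      id: u64,
                      kind: String,
                      // ...and everything else, kept as generic JSON values.
                      #[serde(flatten)]
                      extra: HashMap<String, Value>,
                  }

                  fn main() {
                      let input = r#"{"id": 7, "kind": "click", "vendor_blob": [1, 2, 3]}"#;
                      let event: Event = serde_json::from_str(input).unwrap();
                      println!("{} {} ({} extra fields)", event.id, event.kind, event.extra.len());
                  }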

                  1.  

                    Those are the options I said are available, yes.

                    1.  

                      I mean, you can do both at once for the same value, getting the benefits of both.

              2. 7

                Fail is too harsh. Unless you’re writing some rocket navigation system, a project is not going to outright fail because of software defects. Run-time type errors merely add to other bugs that you will need to fix, and I argue that bugs caused by runtime type errors are less of a problem in small programs.

                I don’t know of any robust tooling for refactoring large JavaScript projects. Of course most languages have some type-system escape hatches, but I expect languages like JS to use hard-to-analyze type-erasing constructs much more often.

                I disagree that having callers beyond your comprehension is automatically a code smell. It’s a natural state of things for libraries, for example. Ideally libraries should have a stable API and never change it, but it’s not always that easy, especially for internal libraries and reusable core pieces of large projects that may need to evolve with the project.

                It’s not just about IDEs. Compilation will also track down all type errors for you, regardless of where and when these errors happen. When working with teams, it may be someone else working on some other component. In this case the types are a way to communicate and coordinate with others.

                You can make a mess in any language, but how easy it is to make a mess varies between languages. Languages that prevent more errors will resist the mess for longer.

                1. 2

                  I expect languages like JS to use hard-to-analyze type-erasing constructs much more often.

                  Why do you expect this?

                  I disagree that having callers beyond your comprehension is automatically a code smell.

                  Even if it’s an internal library, why don’t other internal codebases have a single clear integration point with it? And why does everything else need to have lots of knowledge of the library’s structure? This definitely is a code smell to me – the Law of Demeter, at least, is being violated somewhere, and probably other design principles too.

                  Languages that prevent more errors will resist the mess for longer.

                  This is veering off into another clichéd and well-trod argument (“static typing catches/prevents more bugs”). I’ll just point out that while proponents of static typing often seem to take it as a self-evident truth, actually demonstrating its truth empirically has turned out to be, at the very least, extremely difficult. Which is to say: nobody’s managed it, despite it being such an “obvious” fact, and everybody who’s tried has run into methodological problems, or failed to prove any sort of meaningful effect size, or both.

                  1.  

                    Why do you expect this?

                    Because the flexibility is a benefit of dynamic languages. If you try to write code as-if it was strongly statically typed, you’re missing out on the convenience of writing these things “informally”, and you’re not getting compiler help to consistently stick to the rigid form.

                    why don’t other internal codebases have a single clear integration point with it?

                    The comprehension problems I’m talking about that appear in large programs also have a curse of being hard to explain succinctly in a comment like this. This is very context-dependent, and for every small example it’s easy to say the problem is obvious, and a fix is easy. But in larger programs these problems are harder to spot, and changes required may be bigger. Maybe the code is a mess, maybe the tech debt was justified or maybe not. Maybe there are backwards-compat constraints, interoperability with something that you can’t change, legacy codebase nobody has time to refactor. Maybe a domain-specific problem that really needs to be handled in lots of places. Maybe code is weirdly-shaped for performance reasons.

                    The closest analogy I can think of is “Where’s Waldo?” game. If I show you a small Waldo picture, you’ll say the game is super easy, and obviously he’s right here. But the same problem in a large poster format is hard.

                    1.  

                      Because the flexibility is a benefit of dynamic languages. If you try to write code as-if it was strongly statically typed, you’re missing out on the convenience of writing these things “informally”, and you’re not getting compiler help to consistently stick to the rigid form.

                      You are once again assuming that statically-typed languages catch/prevent more errors, which I’ve already pointed out is a perilous assumption that nobody’s actually managed to prove rigorously (and not for lack of trying).

                      Also, the explanation you give still doesn’t really make sense. Go look at some typical Python code, for example – Python’s metaprogramming features are rarely used and their use tends to be discouraged, and easily >99% of all real-world Python code is just straightforward with no fancy dynamic tricks. People don’t choose dynamic typing because they intend to do those dynamic tricks all the time. They choose dynamic typing (in part) because having that tool in the toolbox, for the cases when you need it or it’s the quickest/most straightforward way to accomplish a task, is incredibly useful.

                      The comprehension problems I’m talking about that appear in large programs also have a curse of being hard to explain succinctly in a comment like this

                      Please assume that I’ve worked on large codebases maintained by many programmers, because I have.

                      And I’ve seen how they tend to grow into balls of spaghetti with strands of coupling running everywhere. Static typing certainly doesn’t prevent that, and I stand by my assertion that it’s a code smell when something is being called from so many disparate places that you struggle to keep track of them, because it is a code smell. And there are plenty of patterns for preventing it, none of which have to do with typing discipline, and which are well-known and well-understood (most commonly, wrapping an internal interface around a library and requiring all other consumers in the codebase to go through the wrapper, so that the consuming codebase controls the interface it sees and has only a single point to update if the library changes).

                      1.  

                        I’ve worked on large codebases maintained by many programmers, because I have. And I’ve seen how they tend to grow into balls of spaghetti with strands of coupling running everywhere. Static typing certainly doesn’t prevent that . . .

                        No, definitely not, agreed. But static typing definitely improves many/most dimensions of project maintainability, compared to dynamic typing. This isn’t really a controversial claim! Static typing simply moves a class of assertions out of the domain of unit tests and into the domain of the compiler. The question is only if the cost of those changes is greater or lesser than the benefits they provide. There’s an argument to be made for projects maintained by individuals, or projects with lifetimes of O(weeks) to O(months). But once you get to code that’s maintained by more than 1 person, over timespans of months or longer? The cost/benefit calculus just doesn’t leave any room for debate.

                        1.  

                          But static typing definitely improves many/most dimensions of project maintainability, compared to dynamic typing. This isn’t really a controversial claim!

                          On the contrary, it’s a very controversial claim.

                          Proponents of static typing like to just assert things like this without proof. But proof you must have, and thus far nobody has managed it – every attempt at a rigorous study to show the “obvious” benefits of static typing has failed. Typically, the ones that find the effect they wanted have methodological issues which invalidate their results, and the ones that have better methodology fail to find a significant effect.

                          The cost/benefit calculus just doesn’t leave any room for debate.

                          Again: prove it. With more than anecdata, because we both have anecdotes and that won’t settle anything.

                      2.  

                        Because the flexibility is a benefit of dynamic languages. If you try to write code as-if it was strongly statically typed, you’re missing out on the convenience of writing these things “informally”, and you’re not getting compiler help to consistently stick to the rigid form.

                        I see most typing errors as self-inflicted wounds at this point. Don’t have time or patience for things that can be prevented by the compiler happening at runtime.

                        Dynlangs + webdev together is my kryptonite. If I had to do that all day I’d probably start looking for a new career. Just can’t deal with it.

                  2.  

                    Large team/large codebase projects fail all the time regardless of typing discipline. Static typing doesn’t appear to have a better track record of success there.

                    Yes, projects can fail for lots of reasons; no one is claiming that static typing will make a shitty idea commercially successful, for example :) But I do think static types help a lot within their narrow scope–keeping code maintainable, reducing bugs, preserving development velocity, etc. Of course, there’s no good empirical research on this, so we’re just going off of our collective experiences. 🤷‍♂️

                    1.  

                      Large team/large codebase projects fail all the time regardless of typing discipline. Static typing doesn’t appear to have a better track record of success there.

                      I think it pretty much does, actually. Static typing moves an enormous class of invariants from opt-in runtime checks to mandatory compile-time checks. Statically typed languages in effect define and enforce a set of assertions that can be approximated by dynamically typed languages but never totally and equivalently guaranteed. There is a cost associated with this benefit, for sure, but that cost is basically line noise the moment your project spans more than a single developer, or extends beyond a non-trivial period of time.

                      1.  

                        I think it pretty much does, actually.

                        As I said to your other comment along these lines: prove it. The literature is infamously full of people absolutely failing to find effects from static typing that would justify the kinds of claims you’re making.

                    2. 13

                      I always laugh when I see ruby code where the start of the method is a bunch of “raise unless foo.is_a? String”. The poor man’s type checking all over the place really highlights how unsuitable these dynamic languages are for real world use.

                      1. 7

                        To be fair, any use of is_a? in ruby is a code smell

                        1. 12

                          Sure, it’s also a pattern I have seen in every Ruby codebase I have ever worked with, because knowing what types you are actually working with is somewhat important for code that works correctly.

                          1. 5

                            Yeah, the need for ruby devs is much larger than the supply of good ones or even ones good enough to train the others. I’ve seen whole large ruby codebases obviously written by Java and C++ devs who never got ruby mentoring. I expect this is an industry wide problem in many stacks

                        2. 5

                          You seem to just be trolling, but I’ll play along, I guess.

                          I’ve seen a bit of Ruby, and a lot of Python and JavaScript, and I’ve never seen this except for code written by people who were coming from statically-typed languages and thought that was how everyone does dynamic typing. They usually get straightened out pretty quickly.

                          Can you point to some examples of popular Ruby codebases which are written this way? Or any verifiable evidence for your claim that dynamic languages are “unsuitable… for real world use”?

                          1. 5

                            I’m not trolling at all. I’ve been a Rails dev for the last 7 years and seen the same thing at every company. I don’t work on any open source code so I can’t point you at anything.

                            I quite like Rails but I’m of the opinion that the lack of static type checking is a serious deficiency. Updating Rails itself is an absolute nightmare task where even the official upgrade guide admits the only way to proceed is to have unit tests on every single part of the codebase because there is no way you can properly verify you have seen everything that needs to change. I’ve spent a large chunk of time spanning this whole year working towards updating from Rails 5.1 to 5.2. No one else dared attempt it before I joined because it’s so extremely risky.

                            I love a lot of things about Rails and the everything included design but I don’t see a single benefit to lacking types. Personally I see TypeScript as taking over this space once the frameworks become a little more mature.

                            1. 3

                              You made a very specific assertion about how people write Ruby (lots of manual type-checking assertions). You should be able to back up that assertion with pointers to the public repositories of popular projects written in that style.

                              1. 7

                                I remembered hearing from my then-partner that Rails itself uses a lot of is_a?, and that seems true.

                                 if status.is_a?(Hash)
                                   raise ArgumentError, etc...
                                
                                1.  

                                  This is pretty misleading – a quick glance at some of the examples suggests that many of them aren’t really checking argument types, and when they are, they’re often cases where a method accepts any of multiple types, and there’s branching logic to handle the different options.

                                  Which is something you’d also see in a statically-typed language with sum types.

                                  The proposition that this is a common idiom used solely as a replacement for static checking is thus still unproven.
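
                                  For comparison, here is a hedged Rust sketch of that same “accepts any of several types, then branches” shape, just with the branching enforced by the compiler (the names are invented):

                                  use std::collections::HashMap;

                                  // The function accepts any of these variants, and the match below
                                  // must handle every one of them or the code won't compile.
                                  enum Status {
                                      Code(u16),
                                      Message(String),
                                      Fields(HashMap<String, String>),
                                  }

                                  fn describe(status: Status) -> String {
                                      match status {
                                          Status::Code(c) => format!("status code {c}"),
                                          Status::Message(m) => m,
                                          Status::Fields(f) => format!("{} fields", f.len()),
                                      }
                                  }

                                  fn main() {
                                      println!("{}", describe(Status::Code(404)));
                                  }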

                                  1.  

                                    Well yeah, and then there are those that raise errors, or return some failure-signaling value.

                                    I don’t know what projects to look at since I don’t use Ruby, but I found some more in ruby/ruby.

                                2. 6

                                  I’ll concur with GP: this is a fairly common pattern to see in ruby codebases.

                                  however, to be fair, it’s a pattern most often introduced after attending a talk by a static typing weenie…

                            2.  

                              Do you also laugh when you see “assert(x > 0);” in typed languages?

                              1. 6

                                I would, but it would be a sad laugh because I’m using a type system that can’t express a non-zero integer.

                                1.  

                                  I would love to see broader adoption of refinement types that let you statically guarantee properties like integer values being bounded between specific values.
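
                                  Absent real refinement types, the closest many languages get is a newtype with a checked constructor. A rough Rust sketch (the bound and the names are made up), which centralizes the check rather than proving it statically:

                                  /// An integer kept within 1..=100.
                                  /// Not a true refinement type: the bound is enforced at runtime by the
                                  /// constructor, but the compiler guarantees the constructor is the only
                                  /// way to obtain a Percentage.
                                  #[derive(Debug, Clone, Copy)]
                                  struct Percentage(u8);

                                  impl Percentage {
                                      fn new(value: u8) -> Option<Percentage> {
                                          (1..=100).contains(&value).then_some(Percentage(value))
                                      }

                                      fn get(self) -> u8 {
                                          self.0
                                      }
                                  }

                                  fn main() {
                                      assert!(Percentage::new(0).is_none());
                                      println!("{}", Percentage::new(42).unwrap().get());
                                  }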

                              2.  

                                I’m in the Type everything if it’s even kinda big camp now. There are too many things I need to think about during the day to remember the state and usage of every variable of every program I’ve ever written, used or inspected. Typings are rails for my logic. Typings are documentation. Types help my IDE help me. I will take every single shortcut I can when the timespan I or anyone else could be interacting with the code is longer than 10 minutes.

                                Retracing steps is just so tedious and frustrating when you had it all in your head before. It just sucks. I just wanna build stuff, not fill my head with crap my computer can do.

                                /rant

                                1.  

                                  I’m in the “I don’t make type errors, and if I do, I can shake them out with a few tests” camp for as long as I can comprehend what’s going on in the codebase.

                                  This is generally true for me, but writing tests or debugging stack traces makes for a slow iteration loop. A type error from a compiler usually contains better, more direct information, so resolving these type errors is a lot faster. So much so that I (a 15-year Pythonista) eventually began to prototype in Go.

                                  That said, the biggest advantage for me for a static type checker is that it penalizes a lot of the crappy dynamic code (even the stuff that is technically correct but impossible to maintain/extend over time). A static type system serves as “rails” for less scrupulous team members. Of course, a lot of these less-scrupulous developers perceive this friction as a problem with static type systems rather than a problem with the way they hacked together their code, but I think Mypy and TypeScript have persuaded many of these developers over time to the extent that static types are much less controversial in most dynamic language communities.

                                  Another big advantage is that your type documentation is always correct and precise (whereas docs in a dynamically typed language often go stale or simply describe something as “a file-like object” [does that mean it just has a read() method, or does it also need write(), close(), seek(), truncate(), etc?]). Further, because the type docs are precise, you can have things like https://pkg.go.dev complete with links to related types, even if those types are declared in another package, and you get all of this for free.
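
                                  A small Rust illustration of that precision point (standard-library traits only, nothing project-specific): the signature itself spells out what “file-like” means here, and the documentation generated from it cannot go stale.

                                  use std::io::{Read, Seek, SeekFrom};

                                  // "Takes a file-like object" -- but the bound says exactly which
                                  // capabilities that requires, and rustdoc links to the traits.
                                  fn first_byte<R: Read + Seek>(mut src: R) -> std::io::Result<u8> {
                                      src.seek(SeekFrom::Start(0))?;
                                      let mut buf = [0u8; 1];
                                      src.read_exact(&mut buf)?;
                                      Ok(buf[0])
                                  }

                                  fn main() -> std::io::Result<()> {
                                      // std::io::Cursor implements Read + Seek, so it can stand in for a file.
                                      let byte = first_byte(std::io::Cursor::new(vec![7u8, 8, 9]))?;
                                      println!("{byte}");
                                      Ok(())
                                  }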

                                1. 1

                                  Unless Twitter requires manual interventions to run (imagine some guys turning cranks all day long :)), why exactly would it go down?

                                  1. 15

                                    Eventually, they will have an incident and no one remaining on staff will know how to remediate it, so it will last for a long time until they figure it out. Hopefully it won’t last as long as Atlassian’s outage!

                                    1. 14

                                      Or everyone remaining on staff will know how to fix it but they will simply get behind the pace. 12-hour days are not sustainable, and eventually people will be ill more often and make poorer decisions due to fatigue. This post described the automation as clearing the way to spend most of their time on improvements, cost-savings, etc. If you only spent 26% of your time putting out fires and then lost 75% of your staff, well, now you’re 1% underwater indefinitely: the fires still need 26% of the old capacity but only 25% of it remains (and that completely ignores the mismatch between when people work best and when incidents occur).

                                      1. 6

                                        Even worse - things that would raise warnings and get addressed before they’re problems may not get addressed in time if the staffing cuts were too deep.

                                      2. 8

                                        That’s how all distributed systems work – you need people turning cranks all day long :) It gets automated over time, as the blog post describes, but it’s still there.

                                        That was my experience at Google. I haven’t read this book but I think it describes a lot of that: https://sre.google/sre-book/table-of-contents/

                                        That is, if such work didn’t exist, then Google wouldn’t have invented the job title “SRE” some time around 2003. Obviously people were doing similar work before Google existed, but that’s the term that Twitter and other companies now use (in the title of this blog post).

                                        (Fun fact: while I was there, SREs started to be compensated as much as or more than Software Engineers. That makes sense to me given the expertise/skills involved, but it was a cultural change. Although I think it shifted again once they split SRE into 2 kinds of roles – SRE-SWE and SRE-SysAdmin.)


                                        It would be great if we had strong abstractions that reduce the amount of manual work, but we don’t. We have ad hoc automation (which isn’t all bad).

                                        Actually Twitter/Google are better than most web sites. For example, my bank’s web site seems to go down on Saturday nights now and then. I think they are doing database work then, or maybe hardware upgrades.

                                        If there was nobody to do that maintenance, then eventually the site would go down permanently. User growth, hardware failures (common at scale), newly discovered security issues, and auth for external services (SSL certs) are some reasons for “entropy”. (Code changes are the biggest one, but let’s assume here that they froze the code, which isn’t quite true.)


                                        That’s not to say that Twitter/Google can’t run with a small fraction of the employees they have. There is for sure a lot of bloat in code and processes.

                                        However I will also note that SREs/operations became the most numerous type of employee at Google. I think there were something like 20K-40K employees under Hölzle/Treynor when I left 6+ years ago, could easily be double that now. They outnumbered software engineers. I think that points to a big problem with the way we build distributed systems, but that’s a different discussion.

                                        1. 7

                                          Yeah, ngl but the blog post rubbed me the wrong way. That tasks are running is step 1 of the operational ladder. Tasks running and spreading is step 2. But after that, there is so much work for SRE to do. Trivial example: there’s a zero day that your security team says is being actively exploited right now. Who is the person who knows how to get that patched? How many repos does it affect? Who knows how to override all deployment checks for all the production services that are being hit and push immediately? This isn’t hypothetical, there are plenty of state sponsored actors who would love to do this.

                                          I rather hope the author is a junior SRE.

                                          1. 3

                                            I thought it was a fine blog post – I don’t recall that he claimed any particular expertise, just saying what he did on the cache team

                                            Obviously there are other facets to keeping Twitter up

                                          2. 4

                                            For example, my bank’s web site seems to go down on Saturday nights now and then. I think they are doing database work then, or maybe hardware upgrades.

                                            IIUC, banks do periodic batch jobs to synchronize their ledgers with other banks. See https://en.wikipedia.org/wiki/Automated_clearing_house.

                                            1. 3

                                              I think it’s an engineering decision. Do you have people to throw at the gears? Then you can use a system that needs humans to occasionally jump in, and get better outcomes from it. Do you lack people? Then you’re going to need simpler systems that rarely need a human, and you won’t always get the best possible outcomes that way.

                                              1. 2

                                                This is sort of a tangent, but part of my complaint is actually around personal enjoyment … I just want to build things and have them be up reliably. I don’t want to beg people to maintain them for me

                                                As mentioned, SREs were always in demand (and I’m sure still are), and it was political to get those resources

                                                There are A LOT of things that can be simplified by not having production gatekeepers, especially for smaller services

                                                Basically I’d want something like App Engine / Heroku, but more flexible, but that didn’t exist at Google. (It’s a hard problem, beyond the state of the art at the time.)

                                                At Twitter/Google scale you’re always going to need SREs, but I’d claim that you don’t need 20K or 40K of them!

                                                1. 1

                                                  My personal infrastructure and approach around software is exactly this. I want, and have, some nice things. The ones I need to maintain the least are immutable – if they break I reboot or relaunch (and sometimes that’s automated) and we’re back in business.

                                                  I need to know basically what my infrastructure looks like. Most companies, if they don’t have engineers available, COULD have infrastructure that doesn’t require you to cast humans upon the gears of progress.

                                                  But in e.g. Google’s case, their engineering constraints include “We’ll always have as many bright people to throw on the gears as we want.”

                                                  1. 1

                                                    Basically I’d want something like App Engine / Heroku, but more flexible, but that didn’t exist at Google.

                                                    I think about this a lot. We run on EC2 at $work, but I often daydream about running on Heroku. Yes it’s far more constrained but that has benefits too - if we ran on Heroku we’d get autoscaling (our current project), a great deploy pipeline with fast reversion capabilities (also a recentish project), and all sorts of other stuff “for free”. Plus Heroku would help us with application-level stuff, like where we get our Python interpreter from and managing its security updates. On EC2, and really any AWS service, we have to build all this ourselves. Yes AWS gives us the managed services to do it with but fundamentally we’re still the ones wiring it up. I suspect there’s an inherent tradeoff between this level of optimization and the flexibility you seek.

                                                    Heroku is Ruby on Rails for infrastructure. Highly opinionated; convention over configuration over code.

                                                    At Twitter/Google scale you’re always going to need SREs, but I’d claim that you don’t need 20K or 40K of them!

                                                    Part of what I’m describing above is basically about economies of scale working better because more stuff is the same. I thought things like Borg and gRPC load balancing were supposed to help with this at Google though?

                                              2. 2
                                                1. Random failures that aren’t addressed
                                                2. Code and config changes (which are still happening, to some extent)

                                                It can coast for a long time! But eventually it will run into a rock because no one is there to course-correct. Or bills stop getting paid…

                                                1. 1

                                                  I don’t have a citation for this but the vast majority of outages I’ve personally had to deal with fit into two bins as far as root causes go:

                                                  • resource exhaustion (full disks, memory leak slowly eating all the RAM, etc)
                                                  • human-caused (eg a buggy deployment)

                                                    Because of the mass firing and exodus, as well as the alleged code freeze, the second category of downtime has likely been mostly eliminated in the short term, and the system is probably more stable than usual. Temporarily, of course, because of all of the implicit knowledge that walked out the doors recently. Once new code is being deployed by a small subset of people who know the quirks, I’d expect things to get rough for a while.

                                                  1. 2

                                                    You’re assuming that fewer people means fewer mistakes.

                                                      In my experience “bad” deployments are less about someone constantly pumping out code with the same number of bugs per deployment, and more about a deployment breaking how other systems interact with the changed system.

                                                    In addition fewer people under more stress, with fewer colleagues to put their heads together with, is likely to lead to more bugs per deployment.

                                                    1. 1

                                                      Not at all! More that… bad deployments are generally followed up with a fix shortly afterwards. Once you’ve got the system up and running in a good state, not touching it at all is generally going to be more stable than doing another deployment with new features that have potential for their own bugs. You might have deployed “point bugs” where some feature doesn’t work quite right, but they’re unlikely to be showstoppers (because the showstoppers would have been fixed immediately and redeployed)

                                                1. 9

                                                  No, this is stupid. Use the right interval for the job. I do have a strong bias for such half-open intervals, but they are obviously not always the right thing. In a recent project, I thought I was being clever by using [) intervals where most of the literature uses [] intervals; turns out that using exclusively either leads to breakage in certain edge-cases.

                                                  1. 4

                                                    It’s not obvious to me that the benefits from choosing a specific type of interval for each application outweigh the downsides from lack of consistency. Do you perhaps have any notational conventions that help alleviate these pitfalls?

                                                    1. 3

                                                      Consistency is not an excuse for giving incorrect answers.

                                                      Do you perhaps have any notational conventions that help alleviate these pitfalls?

                                                      I do not. Else-thread, it was suggested to use a name ending in ‘x’ to refer to an exclusive bound on a range.

                                                      1. 4

                                                        Consistency is not an excuse for giving incorrect answers.

                                                        I don’t follow. You can always rewrite a [] interval as a [) interval, can’t you? I don’t see why enforcing consistency has to lead to incorrect answers.

                                                        Here’s a different way to put this: can you give more details about why [] intervals were so much better in the project you’re discussing? (whereswaldon gives an example in another part of the comments, but I’m curious to hear yours.)

                                                        1. 3

                                                          You can’t naively choose the open bound for a closed bound because you need to know the implied precision. You can often infer it but not always. Especially tricky when you mix in floating point errors.

                                                          1. 2

                                                            You can’t convert an open bound to a closed one if your type has arbitrary precision.

                                                            Closed intervals are great and intuitive for all kinds of things. E.g. fetching the second to fifth elements in an array: A[2:5] in Julia (1-based indexing with inclusive ranges) vs A[1:5] in python (0-based with right-open interval).

                                                              Open intervals are good for stuff too, but I don’t think Dijkstra or this author give compelling reasons to always choose [) intervals. E.g., as for their argument about finding the length of an interval: if you have interval objects, you can just use length(2:5).
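
                                                              Rust happens to ship both flavors, which makes the “right interval for the job” point easy to illustrate (a minimal sketch):

                                                              fn main() {
                                                                  let a = [10, 20, 30, 40, 50, 60];

                                                                  // Half-open: 1..5 covers indices 1, 2, 3, 4 (like Python's a[1:5]).
                                                                  assert_eq!(&a[1..5], &[20, 30, 40, 50]);

                                                                  // Closed: 1..=4 covers the same elements (like Julia's A[2:5],
                                                                  // shifted to 0-based indexing).
                                                                  assert_eq!(&a[1..=4], &[20, 30, 40, 50]);

                                                                  // Either way the range object knows its own length, so
                                                                  // "end - start" vs "end - start + 1" stays out of user code.
                                                                  assert_eq!((1..5).count(), 4);
                                                                  assert_eq!((1..=4).count(), 4);
                                                              }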

                                                            1. 1

                                                              It’s somewhat involved, and I don’t think the details are too important; it does involve nonintegers, as the siblings say. A 2-D vector rendering algorithm will exhibit artifacts if the bounds on its primitives are always half-open.

                                                              The GP asked about notational conventions—I realised I do have something to say here: in this project, I use bitmaps to indicate which bounds are inclusive and which exclusive. As a final preprocessing step, purely for performance reasons, I regularise the bounds, pursuant to the particular floating/fixed-point representation in use.

                                                          2. 1

                                                            not obvious to me that the benefits from choosing a specific type of interval for each application outweigh the downsides from lack of consistency.

                                                            To be fair the benefits in the reverse direction are not clear to me. I think I agree with the “right interval for right job” perspective, but this may be down to personal preference.

                                                        1. 5

                                                          I don’t deny most of the frustrations people feel with package management but some of these examples look too hard at one side of the ledger.

                                                          re: Firefox on Debian, there’s no reason the Firefox packagers could not take on the responsibility of packaging the newer versions of the dependencies giving them trouble. The conflict isn’t between multiple versions of a library. The conflict is on the shortest, coolest, default name. Usually the shortest, default name like ‘libxyz’ goes to the newest version and outdated versions get a version tag in their name. In some prominent cases the default name is abandoned:

                                                            sudo aptitude search '~i libpng'
                                                            i A libimage-png-libpng-perl  - Perl interface to libpng
                                                            i   libpng-dev                - PNG library - development (version 1.6)
                                                            i A libpng-tools              - PNG library - tools (version 1.6)
                                                            i   libpng12-0                - PNG library - runtime
                                                            i   libpng12-0:i386           - PNG library - runtime
                                                            i A libpng16-16               - PNG library - runtime (version 1.6)
                                                            i A libpng16-16:i386          - PNG library - runtime (version 1.6)

                                                          When they become experienced, they stop seeing these flaws as actual problems and start seeing them as something more or less normal.

                                                          I don’t consider these pains normal but I do think distros have provided our industry with a very helpful forcing function. Many many many software packages have up-to-date dependencies because of the upgrade pressure provided by distros. This leads to a better overall security landscape and better compatibility between applications. The Firefox example is about when that effort worked in the other direction, pressuring applications to retain outdated dependencies.

                                                          containerization lets applications use their own dependencies

                                                          To my mind, the value of this statement depends on the balance distro packaging provides as an upgrading forcing function versus holding applications back from distributing against newer dependencies. I personally think web browsers, with their immense attack surface and consequent frequent updates, are a somewhat special case. Debian and other distros should loosen some of their dependency policy management to allow these special case programs to bundle private dependencies, similar to the way Firefox has already done it with their self-packaged, self-updating installation option.

                                                          1. 3

                                                            This conflict for “coolest name” causes practical pain for build systems, because you can’t just hardcode pkg-config libpng, or just specify the package in Ansible or whatever, because it’s sometimes libpng, sometimes libpng16, sometimes it’s some oddball package that requires running libpng-config instead of pkg-config, and all this will break if libpng ever releases 1.8.

                                                            Arbitrarily stuffing version number in the package name is just a crude workaround for package management systems that are missing the important multi-versioning feature. They should be able to find libpng under its canonical name, and let you specify which version you want. It’s silly to argue they don’t need the ability to have multiple versions of the same package on the same OS when they already do that, just poorly.

                                                            1. 2

                                                              They should be able to find libpng under its canonical name, and let you specify which version you want. It’s silly to argue they don’t need the ability to have multiple versions of the same package on the same OS when they already do that, just poorly.

                                                              This is not a Linux distro problem. It’s just the same problem that’s always existed with runtime dynamic linking, and has gone by many names (like “DLL hell”).

                                                              The only ways to “solve” it are:

                                                              • To isolate each application’s dependencies into a separate space that no other application will see or use. Which is a solution that gets attacked and laughed at and criticized as too complex when, say, a language package manager adopts it (see: node_modules, Python virtual environments), or
                                                              • To abolish runtime dynamic linking and force all applications to statically link all dependencies at compile time. This is the approach taken by Rust and Go, and most praise for how good/simple their “packaging” is really derives from the fact that neither language tries to solve the dynamic-linking problem.
                                                              1. 3

                                                                  I think this should be a bit more precise. Both Rust and Go still use dynamically loaded libraries for FFI, with all the usual issues described here. It’s not dynamic linking exactly, but in this context it works out much the same, so I wouldn’t go as far as “abolish”.

                                                                  The dependency-isolation point could also use a 1.1, with nixpkgs as the example. It mixes both worlds: dependencies are isolated, which allows multiple versions, yet globally shared, which prevents duplication and still has the effect of the ecosystem pushing applications to work with updated deps.

                                                                1. 1

                                                                  It is entirely a Linux distro problem, because all of these issues stem from the design they’ve chosen. Naming and selection of versions is entirely under their control. They have chosen to be only a thin layer on top of dumping files into predetermined file system paths and C-specific dynamic linking. This is very limiting, but these are their own limits they’ve chosen and are unwilling to redesign.

                                                                    BTW: Rust does support dynamic linking, even for Rust-native libraries. It doesn’t have an officially-sanctioned standard ABI, and macros, generics, and inline functions end up in the wrong binary. This is technically the same situation C++ is in, only more visible in Rust, because it uses dependencies more.

                                                                  1. 2

                                                                    This is technically the same situation C++ is in, only more visible in Rust, because it uses dependencies more.

                                                                    This is not quite right. C++, like C, does not mandate an ABI as part of the standard. This is an intentional choice to allow for a large space of possible implementations. Some platforms do not standardise an ABI at all. On embedded platforms, it’s common to recompile everything together, so no standard ABI is needed. On Windows, there was a strong desire for ABIs to be language agnostic, so things like COM were pushed for cross-library interfaces, allowing C++, VB, and .NET to share richer interfaces.

                                                                    In contrast, *NIX platforms standardised on the Itanium C++ ABI about 20 years ago. GCC, Clang, XLC, ICC, and probably other C++ compilers have used this ABI and therefore been able to generate libraries that work with different compilers and different versions of compilers for all of this time. This includes rules about inline functions and templates, as well as exceptions, RTTI, thread-safe static initialisers, name mangling, and so on.

                                                                    1. 1

                                                                        Existence of a de-facto ABI still does not make templates, macros, and inlining actually support dynamic linking. It just precisely defines how they don’t work. These features are fundamentally tied to the code on the user side, and end up at least partially in the user binary. Some libraries are just very careful about not using macros/inlines/templates near their ABI in ways that would expose that they don’t work correctly with dynamic linking (having a symbol for a particular template instantiation is not sufficient, because a change to the template’s source code could have changed what needs to be instantiated). Rust can do the same.

                                                                      Swift is the only language that has a real ABI for dynamic linking of generics that actually supports changing the generic code, but it comes at a significant runtime cost.

                                                                      1. 2

                                                                        Existence of a de-facto ABI still does not make templates, macros, and inlining actually support dynamic linking

                                                                        The Itanium ABI for C++ is not a de-facto ABI, it is a de-jure standard that has been specified as the platform ABI for pretty much every platform except Windows (Fuchsia has a few minor tweaks as do a couple of other platforms, but they are all documented standards). It is maintained by a group of folks working on compiler toolchains.

                                                                        It does support dynamic linking of templates. Macros have no run-time existence and so this does not matter, however you can declare a template specialisation in a header, define it in a DSO, and link it to get the instantiation. If you want it to work with a custom type, then you need to place it in the header, but then the ABI specifies the way that these are lowered to COMDATs so that if two DSOs provide the same specialisation the run-time linker will select a single version at load time.

                                                                        1. 1

                                                                          With the macro problem I mean a situation like a dependency exporting a macro:

                                                                          #define ADD(a,b) (a-b)
                                                                          

                                                                          And when the the dependency releases a new version with a bugfix:

                                                                          #define ADD(a,b) ((a)+(b))
                                                                          

                                                                          There’s no way to apply this bugfix to downstream users of this macro via dynamic linking. The bug is on the wrong side of the ABI boundary, and you have to recompile every binary that used the macro. Inline functions and templates are a variation of this problem.

                                                                          1. 1

                                                                            This is true but is intrinsic to macros and not solvable for any language that has an equivalent feature and ships binaries. You can address it in languages that distribute some kind of IR that has a final on-device compile and link step.

                                                                            You are conflating two concepts though:

                                                                            • Does the (implementation of the) language provide a stable ABI for dynamic linking?
                                                                            • Does a specific library provide a stable ABI for dynamic linking?

                                                                            For C and C++, the answer is yes to the first question, but there is no guarantee that it is yes to the second for any given library. A C library with macros, inline functions, or structure definitions in the header may break ABI compatibility across releases. Swift provides tools for keeping structure layouts mom-fragile across libraries but it’s still possible to add annotations that generate better code for library users and break your ABI if you change the structure.

In C++, you can declare a template in a header and use external definitions to specify that your library provides specific instantiations. The ABI defines how these are then linked. Or you can allow different instantiations in consumers and, if two provide the same definition, the ABI describes how these are resolved with COMDAT merging so that a single definition is used in a running program. If you change something that is inlined, then the language explicitly states that this is undefined behaviour (via the one definition rule). The normal workaround for this is to use versioned namespaces so that v1 and v2 of the same template can coexist.
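
A minimal sketch (hypothetical names) of the versioned-namespace workaround mentioned above:

    // mylib.h -- hypothetical example of versioned namespaces
    namespace mylib {
    inline namespace v2 {                   // unqualified mylib::transform now
        template <typename T>               // mangles as mylib::v2::transform
        T transform(T x) { return x + 1; }
    }
    namespace v1 {                          // the old template keeps its own,
        template <typename T>               // distinct symbols
        T transform(T x) { return x; }
    }
    }
    // Binaries built against the old header keep referencing mylib::v1::transform,
    // new ones get mylib::v2::transform, and the two can coexist in one process
    // without violating the one definition rule.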

                                                                            C++ provides a well-defined ABI on all non-Windows platforms and a set of tools that allow long-term stable ABIs. It also provides tools that let you sacrifice ABI stability for performance. It is up to the library author which of these are used.

                                                                            1. 1

This is what I meant by:

                                                                              Some libraries are just very careful about not using macros/inlines/templates near their ABI in ways that would expose that they don’t work correctly with dynamic linking

                                                                              1. 1

                                                                                But that’s where I disagree. All of those things do work correctly with dynamic linking. Macros trivially work, because they always end up at the call site. Inline functions are emitted as COMDATs in every compilation unit that uses them, one version is kept in the final binary at static link time and if multiple shared libraries use them then they will all point to the same canonical version. Templates can be explicitly instantiated in a shared library and used just like any other function from outside. If they are declared in the header then they follow the same rules as for inline functions.

All of these work for dynamic linking in ways that are well specified by the ABI. The thing that doesn’t work with dynamic (or static) linking is having two different, incompatible versions of the same function in the same linked program. That also doesn’t work within a single compilation unit: if you define the same function twice with different bodies then you get a compile error. If you define the same function twice with different implementations in different compilation units for static linking, then you get ODR violations and either linker errors or undefined behaviour at run time. Adding dynamic linking on top does not change this at all.
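
As a tiny illustration of that last point (made-up code, not from the comment above):

    // a.cpp
    inline int answer() { return 42; }
    int from_a() { return answer(); }

    // b.cpp -- same inline function, different body: an ODR violation
    inline int answer() { return 7; }
    int from_b() { return answer(); }

    // Linking a.o and b.o (statically, or from two DSOs) is ill-formed with no
    // diagnostic required: the linker keeps one COMDAT copy of answer(), so
    // from_a() and from_b() may both quietly return 42, or both return 7.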

                                                                                1. 1

                                                                                  I think we have different definitions of “work”. I don’t mean it in the trivial sense “ld returns a success code”. I mean it in the context Linux distros want it to work: if there’s a new version of the library, especially a security fix, they want to just drop a new .so file and not recompile any binaries using it.

                                                                                  This absolutely can’t work if the security fix was in an inlined function. The inlined function is not in the updated .so (at least not as anything more than dead code). It’s smeared all over many copies of downstream binaries that inlined it, in its old not-fixed version.

                                                                                  1. 1

                                                                                    I mean ‘work’ in the sense that the specification defines the behaviour in a way that is portable across implementations and allows reasoning about the behaviour at the source level.

                                                                                    Your definition of ‘works’ can only be implemented in languages that are distributed in source or high-level IR form. Swift, C, and so on do not provide these guarantees because the only way that you can provide them is by late binding everything that is exposed across a library boundary and that is too expensive (Swift defaults to this but library vendors typically opt out for some things to avoid the penalty).

                                                                                    This is why, in FreeBSD, we build packages as package sets and rebuild all dependencies if a single library changes. Since a single moderately fast machine can build all 30,000 packages in less than a day, there’s no reason not to do this.

                                                                2. 2

It sounds like you agree that traditional distro packaging is sustainable as a model but has fallen behind on tooling. Multi-versioning package management tools would allow packages to continue to depend on older packages indefinitely.

The point I was trying to get across was that the crudeness of the workarounds in the existing distros has acted as a forcing function to upgrade dependencies. That is partially a social thing, because you don’t want to be the last package forcing a crudely named libssl0.9.8 to stick around. It is partially a balance-of-effort question, because the easiest thing for a packager to do is often to push patches upstream. Since outdated packages provide a greater attack surface (as pointed out in the article) and other problems, what I’m hoping to hear from Flatpak proponents is: without distros, what will push maintainers to upgrade their dependencies? I get frustrated with conventional package managers, but I will also be very frustrated in a few years when I have 12 versions of org.freedesktop.Platform (or whatever other major dependency) installed by Flatpak because I have ~15 applications that just won’t re-package on a newer platform version.

                                                                  1. 1

Distros provide value in curation, security patching, and the hard battles with snowflake build systems required to make (mostly reproducible) binaries for every library. But the tooling used to get these benefits is IMHO past its breaking point. Just telling everyone to stop using any dependencies that aren’t old versions of dynamically-linked C libraries isn’t sustainable. Manually repackaging every npm/pypi/bundler/cargo dependency into a form devs don’t want to use isn’t sustainable. Lack of robust support for closed-source software is a problem too.

Getting the ecosystem to move together is a social problem, but I don’t think shaming and causing frustration is the right motivator to keep it moving. The last package that depends on a dead version of a library is usually like that because the author has quit maintaining it, so they won’t care either way. And if the tooling makes this painful, it only punishes other users who aren’t in a position to fix it.

                                                                    There are no silver bullets here, but Linux distro tooling has room to improve to make it easier. Instead of complaining that all these new semver-based multi-version-supporting package managers developed for programming languages make it too easy to add dependencies, they should look into making dependencies “too easy” in Linux too.

                                                              1. 3

                                                                Correct me if I’m wrong, but doesn’t git basically work in some way using linked lists to do certain things?

                                                                1. 6

                                                                  Architecturally, git is a key-value database of objects that represent an acyclic graph of commits and a tree of directories/files. A simple case of linear commits is a linked list indeed, but that’s not the programming-language-level linked list that the post is about.

                                                                  1. 2

                                                                    Okay that makes sense about commits. How did you learn about the inner-workings of git?

                                                                    1. 3

                                                                      I’ve found the official (free) book to be an excellent source.

                                                                      https://git-scm.com/book/en/v2

                                                                      Obviously not every part is relevant to you, skip what isn’t, but I found it generally well written and useful.

                                                                      1. 3

                                                                        This is another great resource for learning how git works internally: http://aosabook.org/en/git.html

                                                                        Implementing something with libgit2 is another good way to learn the finest of the details. It’s thrilling to make a program that can construct a branch, tree, commit, etc and then have git show it to you.
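
Not from the comment above, but as a flavour of what that looks like: a minimal libgit2 sketch (its C API; error handling omitted) that walks HEAD’s history by following each commit’s parent pointers through the object database:

    #include <git2.h>
    #include <stdio.h>

    int main(void) {
        git_libgit2_init();

        git_repository *repo = NULL;
        git_repository_open(&repo, ".");        // open the repo in the cwd

        git_revwalk *walk = NULL;
        git_revwalk_new(&walk, repo);
        git_revwalk_push_head(walk);            // start from HEAD, like `git log`

        git_oid oid;
        while (git_revwalk_next(&oid, walk) == 0) {
            git_commit *commit = NULL;
            git_commit_lookup(&commit, repo, &oid);
            printf("%s %s\n", git_oid_tostr_s(&oid), git_commit_summary(commit));
            git_commit_free(commit);
        }

        git_revwalk_free(walk);
        git_repository_free(repo);
        git_libgit2_shutdown();
        return 0;
    }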

                                                                      2. 3

                                                                        I’ve learned it the hard way, but these days there’s a bunch of tutorials about git inner workings. I highly recommend learning it, because it makes git make sense.

                                                                      3. 1

                                                                        but that’s not the programming-language-level linked list that the post is about.

                                                                        The only difference I see is that it’s implemented in the file system instead of in memory.

                                                                        1. 2

                                                                          The arguments against linked lists are about memory cache locality, lack of CPU SIMD autovectorization, CPU pipeline stalls from indirection, etc., so the memory vs file system difference is important.
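
A made-up micro-example of that point, comparing a contiguous std::vector with a node-per-element std::list:

    #include <chrono>
    #include <cstdio>
    #include <list>
    #include <numeric>
    #include <vector>

    // Both loops are O(n), but the vector walks contiguous memory that the
    // prefetcher and autovectorizer can exploit, while every list element is a
    // separate heap allocation reached through a pointer.
    template <typename Container>
    static void timed_sum(const char *label, const Container &c) {
        auto start = std::chrono::steady_clock::now();
        long long sum = std::accumulate(c.begin(), c.end(), 0LL);
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%s: sum=%lld in %lld us\n", label, sum, (long long)us);
    }

    int main() {
        std::vector<int> v(5'000'000, 1);
        std::list<int>   l(5'000'000, 1);
        timed_sum("vector", v);   // typically several times faster than the list
        timed_sum("list", l);
    }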

                                                                          Linked lists on a file system are problematic too. Disks (even SSD) prefer sequential access over random access, so high-performance databases usually use btrees rather than lists.

                                                                          git’s conceptually simple model is more complicated when you look at implementation details of pack files (e.g. recent commits may be packed together in one file to avoid performing truly random access over the whole database).

                                                                          1. 1

                                                                            Thanks for that context!

                                                                      4. 4

Yeah, Git is similar to a Merkle tree, which shares a lot in common with a singly linked list, in that from HEAD you can traverse backwards to the dawn of time. However, it differs because merge commits cause fork/join patterns that lists aren’t supposed to have.

                                                                        1. 1

Interesting. I was looking into how to reproduce merge commits (oids) from someone else’s working tree that pushes to the same bare repo (e.g. on Github). I was forced to calculate a sha256 to verify that the actual committed files are the same between two working trees. I know there must be a lighter, more efficient way. Probably would be a real nasty looking one-liner though.

                                                                      1. 2

Silicon Zeroes was fun. It isn’t quite programming. More like basic IC building. The scaffolding from early levels to later levels was really well planned out. Feels like an intro textbook but fun :)

                                                                        1. 1

                                                                          I’ve played this! It’s a great off-brand Zachtronics game :)

                                                                        1. 7

                                                                          Missing from this article but worth considering: if your read-replicas have any lag at all (very likely) how can you ensure users see their updated data after they perform a write?

                                                                          On a site like Lobste.rs for example it’s a nasty bug if a user posts or edits a comment and then can’t see their update on the subsequent page, because it was served from a read-replica that didn’t have the change yet.

                                                                          The simplest solution I’ve seen for this problem involves cookies or sessions. Any time a user performs a write action (posting a comment for example) you set a cookie with a very short expiry - maybe 5 seconds or so.

                                                                          Any user with that cookie is “pinned” to the read-write connection, which guarantees they will see the results of their write unaffected by replication lag - provided that lag stays below the 5s value you picked!

There’s a slightly more complex version of this which I’ve seen implemented for Wikipedia: maintain a global transaction counter that increments with every transaction. When a user performs a write, record that counter to their cookies or session. On subsequent reads, check the value of that counter in the read replicas and only serve that user from a replica that has caught up with that point in time - or route to the leader if no replica has caught up yet.
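
A minimal sketch of the simpler cookie-pinning variant (the Request/Response types and choose_connection are made-up stand-ins, not a real framework); the counter variant would store a replication position instead of relying on a fixed timeout:

    #include <cstdio>
    #include <map>
    #include <set>
    #include <string>
    #include <utility>

    struct Request {
        std::set<std::string> cookies;   // cookie names sent by the browser
    };

    struct Response {
        // name -> (value, max-age seconds) to be emitted as Set-Cookie headers
        std::map<std::string, std::pair<std::string, int>> cookies;
    };

    enum class Db { Primary, ReadReplica };

    // After any write, pin the user to the primary for longer than the worst
    // replication lag we expect (5 s here, matching the comment above).
    void after_write(Response &resp) {
        resp.cookies["recent_write"] = {"1", 5};
    }

    // On reads, honour the pin; otherwise any replica will do.
    Db choose_connection(const Request &req) {
        return req.cookies.count("recent_write") ? Db::Primary : Db::ReadReplica;
    }

    int main() {
        Request fresh;                          // no recent write
        Request pinned;
        pinned.cookies.insert("recent_write");
        std::printf("%d %d\n",
                    choose_connection(fresh)  == Db::ReadReplica,
                    choose_connection(pinned) == Db::Primary);   // prints "1 1"
    }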

                                                                          1. 6

That second approach sounds like a bespoke vector clock, enforcing a sort of causal consistency. Some database products have this concept built into their clustering approach. Each write operation returns the vector clock value; so long as you pass it back into the next query, the cluster will re-route the query or delay execution until the replica catches up. I’d like to see broader support for this built directly into Postgres libraries.

                                                                            1. 1

For context, we are using AlloyDB and lag is a non-issue. That being said, I realize how this might be an issue for others.

                                                                            1. 2

Are GUI builders really that useful? I did some GUI tools and a GUI app in the past and I’ve never understood what makes a GUI builder a better tool than just writing widget code directly in the editor. Assuming that the programmer creates a non-trivial application that contains a GUI that is actually based on some state (shown/hidden panes, splitters, animation), is it really an advantage to have some windows in the GUI builder, and some directly in the code?

New GUI-oriented frameworks actually deliberately skip having a GUI builder (e.g. Flutter), and design their tools around writing the GUI in the code and using the hot-reload pattern to instantly apply the code to a graphical output. So I’m wondering: are GUI builders a pattern that is worth investing in?

                                                                              1. 7

I used to feel the same way, but I’ve softened a bit. I’m now finding that putting the GUI purely in code has a bit of a bathtub curve to it. Like you, I find that GUIs that escape “Hello World” levels of complexity are often easier to handle in code than in a builder, since you hit a level of interactivity that escapes the model of the builder. However, as the GUI continues to grow, there’s often a growing amount of boilerplate code for defining the GUI. Code that would be better expressed in a DSL. The breakthrough moment for me was realising that the native file format of the GUI builder can be that DSL. Now, depending on the builder and its format (some of which are truly execrable), you can get quite a bit of complexity before you reach the point where the builder is worthwhile. However, in the best case, reading the diffs of the builder files while doing a code review can be quicker and clearer than reading the diffs in the code, simply because you’re using a language designed for the job.

                                                                                1. 6

One big advantage is iteration cycles, especially if you’re new to a GUI library. I’ve recently been writing a GUI app with GTK and rust. While I’m not new to GTK overall, I am new to GTK4 and some of the GTK3 components I’m relying on. Waiting for rust linking between each iteration has been painful enough that I have considered switching to Glade. Unfortunately Glade’s support for the components seems perpetually less than full coverage. Whenever I try it I can only make about 80% of the GUI using the builder, and then I have a mix of both approaches in my project, which can be confusing in its own way.

                                                                                  1. 4

                                                                                    I encountered the same slow linking as you, and fwiw I had great success leaning on mold as my linker: my change-rebuild cycle is less than a second.

My Windows builds are still slow (even with LLVM’s linker), but I’m tempted to try this technique posted by Robert Krahn. Maybe it’ll make your situation much better.
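
For anyone wanting to try the same thing, this is roughly the setup I believe mold documents for cargo (assuming a Linux x86_64 target and a clang new enough to accept -fuse-ld=mold; alternatively you can just run builds under mold -run cargo build):

    # .cargo/config.toml
    [target.x86_64-unknown-linux-gnu]
    linker = "clang"
    rustflags = ["-C", "link-arg=-fuse-ld=mold"]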

                                                                                1. 2

                                                                                  This approach has served me better than anything else over the past 20 years. I’ve phrased it a few different ways. I used to say “if you need more than one conjunction when reading the code then you need more than one line” but I got away from that as I learned just how few people actually re-read their code back to themselves.
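
A made-up before/after to illustrate the rule of thumb:

    struct Order {
        bool paid = false;
        int  items_in_stock = 0;
        bool address_known = false;
    };

    // Read aloud, this needs two "and"s on one line:
    bool ready_to_ship_terse(const Order &o) {
        return o.paid && o.items_in_stock > 0 && o.address_known;
    }

    // More than one conjunction, more than one line: each clause gets a name.
    bool ready_to_ship(const Order &o) {
        const bool paid      = o.paid;
        const bool stocked   = o.items_in_stock > 0;
        const bool addressed = o.address_known;
        return paid && stocked && addressed;
    }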

                                                                                  1. 49

                                                                                    I am not an X11 apologist. I think that Wayland and systemd both exist because the things that they’re replacing have a lot of flaws and aren’t the right solutions for modern use cases. I just don’t think that they are the right solutions to the (real) problems in the things that they’re trying to displace and I object to the framing of ‘if you don’t like the new flawed thing then you must be an advocate of the old flawed thing’.

                                                                                    1. 27

                                                                                      Congrats, as demonstrated already, by taking the reasonable course you have opened yourself up to annoyances from both extremes.

                                                                                      1. 18
                                                                                        1. 7

                                                                                          Well, X does have some problems. But I find most people don’t actually identify them. What’s most interesting when you start getting specific about what problems actually are is that solutions start to emerge at the same time. If you stop at “X sucks” without examining why, your solution is “discard X”, but if you say “X sucks because Y”, then it opens up another solution: fix X by changing Y.

Last year, I had a similar thing with my little gui library. I spent several years writing it and had grown to kinda hate it. Was tempted to rewrite it, but I knew from experience that I’d probably just waste years and end up in the same mess. So instead, I sat down and asked myself in detail just why I had grown to hate it… and that started leading to ideas to fix it. After a while, not only did I not rewrite it, I fixed most of my problems without even any significant breaking changes! (one of them did require a breaking change, but thanks to getting specific, i was able to also provide myself a smooth migration path)

                                                                                          and btw in the process of writing my gui libs, i’ve gotten quite a bit of experience working with X. Much of it is not terribly well documented at this level but once you get to know it, there is logic behind the design decisions, and some of it is actually really quite nicely designed.

                                                                                          1. 1

                                                                                            I don’t represent either extreme; I was merely curious.

                                                                                            1. 1

                                                                                              That’s reasonable, it was just a very entertaining set of responses. XD

                                                                                          2. 7

On the one hand, we have this decades-old software with flaws that many people have experienced. On the other, we have this newly invented piece that claims to fix the obvious flaws of the old and introduces new ones that early adopters still struggle to identify. Instead of ditching the old and embracing the new, I surrender to the flaws of the old. At least they are well understood. I would rather wait until enough time has passed, and people have faced most of the flaws of the new. Who knows, decades later, there will be new kids in town throwing out wayland and systemd, while inventing their own shining new toys. And there may still be people running X11 and init(8).

                                                                                            1. 6

                                                                                              What do you think is wrong with Wayland?

                                                                                              1. 8

                                                                                                As a user, the first thing they sacrificed was something I’ve relied on most weeks for the past 15 years: transparent networked clients. Having remote applications integrate into the desktop as normal windows is a huge usability improvement over remote desktop viewing. Critically though, I rely on running GUI applications without a full set of desktop-related software installed. I want to run these applications without the need to actually maintain a desktop on each of the machines.

As both a fan of open development models and a software engineering manager, I think Wayland essentially declared tech debt bankruptcy without solving the organizational challenges that led to a lot of the issues with X. The original proponents of Wayland say they can’t just stop supporting problematic X extensions, but they also don’t explain how Wayland protocols will be sunset. The best I can tell from reading their governance documentation is that protocol support is essentially a popularity contest and stable protocols will be sunset by removing projects from the membership, thus preventing them from objecting to pull requests. Their README says:

                                                                                                A protocol may be deprecated, if it has been replaced by some other protocol, or declared undesirable for some other reason. No more changes will be made to a deprecated protocol.

                                                                                                Elsewhere in the document it says:

                                                                                                … and deprecated protocols are placed in the deprecated/ directory.

Hilariously, that directory does not exist. For a project born of frustration with the previous technology’s stagnation and inability and/or unwillingness to evolve, I would expect more attention paid to how users will be expected to navigate an ecosystem of heterogeneous support for deprecated protocols. I’ve concluded that while Wayland does make some technological advances, it exists as a project mainly for the purpose of unseating incumbents who seemed reluctant to evolve X toward the simplified Wayland architecture.

                                                                                                1. 3

                                                                                                  As a user, the first thing they sacrificed was something I’ve relied on most weeks for the past 15 years: transparent networked clients. Having remote applications integrate into the desktop as normal windows is a huge usability improvement over remote desktop viewing. Critically though, I rely on running GUI applications without a full set of desktop-related software installed. I want to run these applications without the need to actually maintain a desktop on each of the machines.

                                                                                                  Wayland doesn’t make this impossible, it’s just not part of the core protocols. Waypipe can be used for that: https://gitlab.freedesktop.org/mstoeckl/waypipe

                                                                                                  1. 1

Yeah I’ve tried waypipe. Unfortunately the launching conventions are all jumbled now, so while it generally works for executing a program directly, any programs that program executes are much less reliable. I suspect it’s mainly due to the session type environment variable not being passed on. The net result is a more frustrating experience than X though. Anecdotally it feels slower too. That might be due to a lack of focus on optimization and destined to be fixed, but right now it is disappointing.

                                                                                                2. 5

It’s been a while since I looked at it in detail, but from what I remember (any or all of these may be wrong):

                                                                                                  They originally conflated copy and paste with drag and drop in a way that made low-latency drag almost impossible. I think this is fixed.

By putting network transparency out of scope, they make it hard for a client to use lossless compression for rendered widgets but stream a video in a rectangle and so on. I think Pipewire just compresses the whole window, so a video is decompressed to a texture, sent via a shared memory transport, recompressed, and then sent to the remote display, then decompressed. By moving GL and audio out of the server, they make it hard to remote these and make it very hard to control audio and video latency for synchronisation.

                                                                                                  A huge amount is out of scope of the protocol, so everything ends up depending on compositor-specific extensions (screen shots, on-screen keyboards, and so on).

                                                                                                  My main problem with Wayland is that it doesn’t really seem to solve any of the problems I have with X11, it just says that they’re out of scope and should be built externally.

                                                                                                3. 6

                                                                                                  What are the problems with X11 for modern use cases?

                                                                                                  1. 10

The big one is the (lack of) security model. There are some security extensions for X11, but enabling them breaks a lot of stuff. Things like on-screen keyboards rely on being able to inject key-press messages into any other X client. It’s been a few years since I did any low level X11 programming, but I seem to recall that it was impossible to attribute a window to a connection (xkill is fun here: it relies on untrusted metadata to choose the thing to kill), which makes it hard to enforce other policies. It would be possible to add a capability model to X11, but it would come with a lot of compatibility problems.
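
As an illustration of how low the bar is (not from the comment above): with nothing more than a display connection and the XTEST extension, any client can synthesise keystrokes that land in whatever window currently has focus:

    // Build with: -lX11 -lXtst
    #include <X11/Xlib.h>
    #include <X11/keysym.h>
    #include <X11/extensions/XTest.h>

    int main() {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;

        KeyCode a = XKeysymToKeycode(dpy, XK_a);
        XTestFakeKeyEvent(dpy, a, True,  CurrentTime);   // press 'a'
        XTestFakeKeyEvent(dpy, a, False, CurrentTime);   // release 'a'
        XFlush(dpy);                                     // no permission check anywhere

        XCloseDisplay(dpy);
        return 0;
    }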

                                                                                                    X11 has a lot of state, split between the client and the server. This makes it hard to restart the server without breaking clients (there are some proxy things that try this, but they don’t work well). It kind-of does network transparency, but not in a great way: MAS tried to add audio but it was never merged, so a typical X11 app doing remote display will show pictures on one computer and play audio on another. It doesn’t have anything in the protocol for streaming compressed video, so if you try watching a movie over remote X11, it’s decompressed on the client and streamed to the X server uncompressed. You might be able to use AIGLX to work around this if you use some non-standard vendor extensions to upload video to the remote GPU and decompress it there, but now you’re way outside of portable use of X11.

The state and the lack of client identification mean that you can’t do the sorts of tricks that Quartz does in the compositor, where the server takes ownership of a window when a client dies and allows the client to reconnect and resume ownership. This is why Macs appear to boot in a few seconds: most of the apps haven’t relaunched and will be launched in the order that you start interacting with the windows.

                                                                                                    Nothing in X11 is unfixable but the thing you’d end up with would probably not gain any benefits from being built on top of X.

                                                                                                    1. 9

Wayland is worse off in many other ways; the absent security model is just icing on an already moldy cake. I couldn’t come up with a better surreptitious API design for encouraging memory vulnerabilities even if I tried; it is a use-after-free factory by design and the clients get all the primitives they could wish for to control heap layout and object reuse. It reads like an overcomplicated CTF challenge.

Actually worse is the poorly conceived wire format that isn’t even capable of handling its own data model (see the dma-buffer modifier int64 pack/unpack shenanigans for instance) – and the object model (much bigger state explosion than X11, asynch-java-like OO for a C producer and consumer is a spectacularly bad idea) led to dispatch designs that are some of the worst around. At one point in the timeline, libinput artificially rate-limited “gaming mice” as the event storms from moving the cursor over a window would cause buffers to fill up, and a failing writev would just close() clients that then promptly crash from some NULL deref, because there is less meaningful error handling around than even if (!malloc…).

I spent quite some time investigating this, https://arcan-fe.com/2017/12/24/crash-resilient-wayland-compositing/ and took it much further later on, but https://gitlab.freedesktop.org/wayland/wayland/-/issues/159 remains forever in my heart. There is so much of this around. What did they get right? It’s not X. Instead, it’s what you get if you cheat and skip doing your homework by trying to copy MSCOM and SurfaceFlinger – acting like that still wouldn’t leave the FOSS desktop decades behind.

                                                                                                      Nothing in X11 is unfixable but the thing you’d end up with would probably not gain any benefits from being built on top of X.

So I alluded to one graceful path a while back, https://www.divergent-desktop.org/blog/2020/10/29/improving-x/ , fairly certain that would never happen. There are a few other options not that far away, one that I sat down and implemented most of a year or two ago; wrote the article and then decided not to publish. It turns out it works – there is a way to let Xorg remain Xorg until display density + display sizes overflow the hardcoded coordinate space (then it gets really hairy), get most of the benefits of the lower layer changes to the stack, and overlay much better security controls – it is not even that hard. The problem is that it still won’t undo the harm already caused by GTK and D-Bus.

                                                                                                      There are more interesting things to focus on. It is not particularly appealing to adopt the hellthread-writers wayland now absorbs (without any meaningful compensation to top it off). Come to think of it, Wayland (the project) has that over X, it is a good abuse filter.

                                                                                                      1. 3

                                                                                                        Thanks, that’s fascinating detail. Everything I’ve read about Arcan suggests that it is actually solving the right problems (any year now, I hope to have enough free time to properly play with the code and especially to see how it would play nicely with Capsicum and how well SHIM-IF works with CHERI). For some reason, the name ‘Arcan’ doesn’t stay stuck in my head - I meant to refer to it earlier but I couldn’t persuade a search engine to help me find the name of the project.

                                                                                                        I can quite easily see, for example, how an Arcan client could display a window containing a video stream where the widgets were rendered on the client and the video is decoded on the display server. I can kind-of see how X11 could evolve to support this, though not in a way that would work with any existing toolkits. I have absolutely no idea how Wayland could ever get there.

                                                                                                        1. 2

                                                                                                          So it can be solved in Wayland by defining another buffer-format, with either SHM or DMA Buf transport (there’s tons of them in there, just rarely implemented or used as everyone just went ARGB888 then realised they didn’t define a colourspace for the buffer to map. One could even, I don’t know.. add SVG in there ;)) and using wl_subsurface with the buffers attached.

In practice, the (hardware-implementation-defined) zwp-linux-dma-buf-v1 layering violation “protocol” basically does that. GPU buffers are compressed in the weirdest of formats for the strangest of reasons and very often look eerily similar to video. This is a long standing tradition: MPEG2 accelerators in the Hollywood+ style cards from the early DVD era had similar properties (I reversed drivers/framebuffer formats of that for fun and time wasting), basically MPEG2 I-Frames as a framebuffer format, so if you wanted to use it as part of a HTPC box or something, you needed to render to that.

It becomes much tougher to have graceful fallbacks when / if that allocation asynchronously fails, the video decoding crashes, has broken frames and so on. The delegation model in Arcan (“frameservers”, homage and evolution of Avery Lee’s virtualDub and MSWIN GraphEdit era) covers that though, and that is actually where much of the design evolved: ways of sandboxing mid-2000-era ffmpeg, as it was basically impossible to share a memory space with it without getting punished for it.

                                                                                                          What is much trickier for Wayland in this regard is to spawn of a child video decoder and have that process embed its video output as a subsurface of the parent without tons of things breaking. There are “attempts” ( https://wayland.app/protocols/xdg-foreign-unstable-v2 ) but it is again very fragile and complicated because their building blocks are fundamentally and irreparably flawed. Closely related, this is also why they are not capable of having a window management process synchronously managing synchronisation, display parameters and window stacking.

Among the things going on in Arcan right now is this effort at the network protocol level (A12), where we do try to have dynamic “local delegate transcodes media” that also deal with passthrough if the sink supports it, along with muxing (don’t want clipboard cut and paste of a 100G file to be line-blocking) and safe fallbacks.

                                                                                                          1. 2

                                                                                                            And now I really want to play with Arcan and Capsicum.

                                                                                                      2. 1

OK, yeah, those are pretty reasonable. A lot of people annoy me with the security thing, saying it isn’t there and is fundamentally impossible, both of which are false… but what’s there doesn’t actually deliver on being easy to use. It is like someone wrote the hooks then said “ok someone else figure it out from here”, but by then most of the people who were interested in figuring it out from there had already jumped ship to Wayland or whatever. Additionally, some of the conventions on X would be very difficult to use with stricter restrictions. Drag and drop comes to mind, since the client needs to ask which window is under the mouse cursor, which is a thing that might be restricted. With some kind of middle man proxy or just a permission popup I think you could make it work, but again, the U is missing from the UX here; nobody has bothered working out those details.

                                                                                                        Though…. this also is something that doesn’t really bother me, since I just don’t run applications that do evil things. (And I consider evil things to include something as simple as raising itself to the top of the window stack. I’ll hack that crap right out of the code, i can’t stand it). And if I do have to run something more iffy, I’ll put it in a Xephyr session to isolate it. Though another thing I would like to see is a nested server be better about resizing. Xephyr does an ok job but again the ux i think could really be improved with a little more polish, by putting a special window manager in there. If I had more time and/or it became a higher priority cuz of a required application or something I might look into it myself.

                                                                                                        The state concern is completely fair too. In my library, I make an effort to keep a copy of the server-side state in order to support display migration. If it disconnects, it can try to reconnect to a different X server and recreate its assets there and carry on. It actually sometimes works! … but not all the time, and it was a bit of a pain to code up (especially cuz xlib assumes a display connection is fatal. i kinda wanted to redo my own xcb based thing but again never got around to it so i hook the error handler and throw an exception to regain control. lol) When it works, I think this is a really cool feature - giving guis one of the big benefits of terminal applications with gnu screen and friends. Yes, yes, I know there’s RDP and VNC and whatever, but I really hate using those (even though I have a lot of respect for RDP’s achievements. Once upon a time, 2010ish, I was running xming on a Windows box on my lan and RDPing to it remotely because the X app performed soooo much better on that network link through Windows than it did directly. and then you get some sound, printers, files, etc too!). I often have a mix of local and remote programs and want them to actually integrate the window stack order etc., so I prefer X’s model to RDP’s. And you know, if the XResource promise worked, the application could even retheme itself when migrating servers to blend in locally! But I never even actually did a proof of concept for that. Pretty sure it is doable though.
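
A rough sketch of the “hook the error handler and throw” trick described above (names are made up; Xlib doesn’t officially bless throwing from the handler, it merely expects the handler never to return, since it would call exit() otherwise):

    // Build with: -lX11
    #include <X11/Xlib.h>
    #include <stdexcept>

    struct XConnectionLost : std::runtime_error {
        XConnectionLost() : std::runtime_error("lost connection to X server") {}
    };

    static int on_io_error(Display *) {
        throw XConnectionLost();   // regain control instead of letting Xlib exit()
    }

    int main() {
        XSetIOErrorHandler(on_io_error);
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;
        try {
            // ... event loop using dpy ...
        } catch (const XConnectionLost &) {
            // reconnect to another server here and recreate windows/pixmaps from
            // the client-side copy of the state
        }
        return 0;
    }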

                                                                                                        And once more, I agree it would be nice if the image transfer formats had more options. Should be easy as an X extension. Even just png and jpeg would be really nice to have for some cases, and some kind of video support too might eliminate some cases where I run local clients in addition to remote. Though this is something I can live without since my internet speed is 25x faster now than it used to be and I don’t pay by the kilobyte anymore so the economic pressure has passed. But yeah, still would definitely be nice.


                                                                                                        As for what building this on top of X would bring instead of doing your own thing: a lot of it is just compatibility with all the existing programs. There’s a lot to be said about things that have just worked for many years continuing to just work. But also there’s a lot of little corner cases that have been figured out that are good to keep working. I’d try to make a list, but that’s kinda the point: what would I forget to put on that list until someone at some point brings it up as a bug report? As a project matures and has more users, that list of corner cases gets bigger and bigger and if you throw it out, you’ll inevitably just recreate it eventually anyway, and in the mean time, things will be broken and users annoyed.

                                                                                                        1. 2

                                                                                                          As for what building this on top of X would bring instead of doing your own thing: a lot of it is just compatibility with all the existing programs

                                                                                                          This is the place where I kind-of agree with the Wayland folks. Existing software doesn’t just expect something that speaks the X11 protocol, it expects something that speaks the X11 protocol and does specific things with it. If you break some of those things, you’re breaking compatibility, even if you still speak X11. You can run an XServer with a Wayland or Arcan (or Windows, or Quartz) back end and the Windows one in particular demonstrated that you can have things like copy-and-paste and integration with a foreign windowing system work very well. Arcan and Haiku have shown that you can put the X emulation layer much closer to the client (even in process with it in the case of Haiku).

                                                                                                    2. 2

I have the same pov except with the addition that I think both systemd and wayland are not software that you can improve to “fix them” - they have totally wrong assumptions to begin with - so I am against moving the mindshare in that direction, which makes me somewhat of an advocate of the old flawed thing.

                                                                                                      1. 1

                                                                                                        I would love to hear your thoughts on systemd, given your work on Casper/CHERI.

                                                                                                      1. 54

There was no free speech outrage when Cloudflare kicked out Switter. SESTA/FOSTA is a real example of overreaching legislation having a chilling effect. That’s when you should have been outraged about an actual threat to free speech.

                                                                                                        This case is a private company choosing not to do business any more with a toxic customer (and it’s a shame they did it so late and so begrudgingly).

Never forget that Cloudflare is only cosplaying being a utility. No matter what they blog, they are a private company. Until they are regulated as an actual utility with real obligations, they can kick anybody at any time (and absolutely should use that right to kick out terrorists).

                                                                                                        1. 5

                                                                                                          Not just a private company but they have been publicly traded since 2019 which means shareholder profits generally come before anything else.

                                                                                                          1. 4

                                                                                                            That’s… not really how public companies work. Obviously shareholder value is a large consideration, but the companies are still run by people democratically elected by the shareholders and can represent them in ways other than money. This is an example of that

                                                                                                            1. 4

That is part of the story told about corporate governance in general, but the structure of Cloudflare’s shareholder arrangements is far from democratic. Similar to Facebook, there is no practical shareholder control or board oversight. The company is run by the founders as a practical matter. The linked blog post specifically says they are doing this only due to their own legal exposure, heavily implying they would incur significant financial risks in terms of litigation.

                                                                                                              1. 3

A tangible example of this is that one thing used to pressure Cloudflare was the Supplier Codes of Conduct in place at a lot of those companies, which make sure company ethics are upheld even in their supply chain.

Those are often public companies and, certainly, a lot of those SCOCs are there for legal reasons (child labour, tax evasion, etc.), but having received training on a few of them, many also speak to company values.

                                                                                                            2. 3

Neither the country where I was born nor the one where I currently live has utilities, as a matter of law. Each has them in practice, of course, which suggests that the legal category isn’t strictly necessary. And that, in turn, suggests that the class may not have a simple definition.

                                                                                                              IMO the banks are one here, even if each bank isn’t. Being able to have a bank account is more or less required for an adult, even if a customer relationship with any particular bank isn’t. Having a bank account in other countries may be less important, of course.

The food shops are one. Not being able to get a phone would also be such a problem (in the unnamed place where I live) that IMO the telcos form a utility (in this place) even if each telco is a regular company.

This is a programming/tech site, so my purpose isn’t to ramble about Carrefour or Nestle. I want to suggest that tech companies have a duty to consider in which forms they have such public duties. I see from the blog post that Cloudflare ① thinks that security providers shouldn’t ditch anyone and ② hosting providers should not have that same policy. This is a considered stance, and I applaud it for being considered.

FWIW, I once worked for a small company formed by 100% conscientious objectors. We didn’t want to sell to the arms industry, and considered it, and decided that we’d serve any customer who sought us out on their own, but make no attempt at all to sell to the arms industry, or upsell if we got any such customer. A considered stance.

                                                                                                              1. 6

The free speech talk is usually about a hypothetical future US government turning against its own people, and it’s easy to have a principled stand against something that only exists in people’s heads. Meanwhile, in the physical world, operating in Russia and China already makes corporations subject to real anti-free-speech laws. There are tyrannical governments that are actually violently silencing their dissidents today, and the continued presence of corporations in these countries means they are complying.

                                                                                                                1. 1

The free speech talk? Does Cloudflare talk about free speech? I missed that part (admittedly I haven’t followed it closely). Googling now, I find a commitment to a free and open web, but as far as I understand, that doesn’t refer to free speech, but rather to open specifications and a practical ability to host web sites wherever and however you want. Or more practically, to host web pages outside the FAANG server farms.

                                                                                                                  Regarding free speech, Cloudflare appears to think that any web site operator should be able to get DDoS protection and DDoS protection providers shouldn’t drop sites. But I don’t see any commitment to hosting anything in particular. Cloudflare takes a strong stance against limiting free speech by means of e.g. DDoSes, but I don’t see any commitment at all against legal means. Am I overlooking anything? Feel free to provide a link.

                                                                                                                  (I suppose a shitstorm counts as a kind of level-8 DDoS.)

                                                                                                            1. 8

                                                                                                              Once I figure out how (and do some more checking), I will try to submit a pull request. I wish I understood git better, but in spite of your help, I still don’t have a proper understanding, so this may take a while.

                                                                                                              If Brian Kernighan feels this way, we can all feel better about headaches with git.

                                                                                                              1. 11

                                                                                                                If Brian Kernighan feels this way, we can all feel better about headaches with git.

                                                                                                                But is he talking about git, or about Github? The term “pull request” as Github uses it confuses me to this day, while I generally don’t have problems with git itself.

                                                                                                                1. 2

                                                                                                                  But is he talking about git, or about Github? The term “pull request” as Github uses it confuses me to this day, while I generally don’t have problems with git itself.

                                                                                                                  You may be right. The blurred edges between git and GitHub sometimes confuse me too.

                                                                                                                  E.g., I only recently realized that if you are in the middle of a pull request on GitHub, and you want to include further changes in the same pull request (e.g., you forgot to update docs and the README), the best thing to do is to use git commit --amend and then force push to the branch on your repo where the pull request originates. If you do that, then GitHub will automatically update the original pull request with the further changes, and everything goes smoothly. On the other hand, if you make the changes as a new commit and then push that to the branch, the new commit is not reflected in the pull request. I suppose this makes sense in one way: the pull request was made starting from a certain commit, and it only runs up to that commit. On the other hand, force pushing in the middle of a pull request initially feels dangerous and potentially destructive.
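
                                                                                                                  Concretely, the workflow I’m describing is roughly the following (branch name is just an example; --force-with-lease is the safer force-push variant):

                                                                                                                    git checkout my-feature                         # the branch the PR was opened from
                                                                                                                    git add README.md docs/
                                                                                                                    git commit --amend --no-edit                    # fold the fix into the previous commit
                                                                                                                    git push --force-with-lease origin my-feature   # GitHub updates the open PR in place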

                                                                                                                  1. 12

                                                                                                                    if you make the changes as a new commit and then push that to the branch, the new commit is not reflected in the pull request.

                                                                                                                    I use both approaches regularly and each leads to my changes being incorporated into the pull request.

                                                                                                                    1. 1

                                                                                                                      I use both approaches regularly and each leads to my changes being incorporated into the pull request.

                                                                                                                      Thanks for the note. I would have sworn the new commit method didn’t work for me recently. But I may have done something else wrong, or maybe I misunderstood what was happening to the PR. In any case, I’ll keep that in mind for next time.

                                                                                                                      1. 2

                                                                                                                        I have had problems in the past where I thought I was pushing to one remote branch, but actually I was pushing somewhere else. This is a pretty central UX failure of git: the whole concept of remotes and locals is needlessly confused.

                                                                                                                        If I were in charge of git, branches would have relative and absolute addresses. local/main would be the absolute address of your local main vs. origin/main. As it is, in git today there’s origin/main, but no way of saying “local/main”. Then there’s the fact that origin/main is only sort of remote. It should just be that origin/main is absolutely remote, and any time you try to interact with it, git will behind the scenes try to fetch and, only if that fails, use the cache with a warning that it couldn’t fetch. Instead, git will sometimes talk to origin and sometimes not, and you just have to know which commands do or do not cause a fetch.

                                                                                                                        Then the whole concept of branches and upstreams is needlessly baroque. There’s a lot of implicit state in git (when you commit, does the commit update a branch or just make a detached commit? where does it rebase from or push to if you don’t specify?) that comes with a lot of command-specific terminology. It should just be that there’s your current default branch and your current default upstream, and then you can change either on an ad hoc basis for any command by passing --branch or --upstream, but instead you just have to memorize how each command works. It’s an awful UX all around.

                                                                                                                2. 1

                                                                                                                  Has there ever been an explanation to git’s interface? I feel getting historical context would help in creating the right frame of mind for using it for a lot of people. I know how to use git well but I would still love to read this.

                                                                                                                  More on topic, let me try to start a discussion: what’s the most involved awk programs people have come across? I feel the language is heavily under utilized.

                                                                                                                  1. 5

                                                                                                                    More on topic, let me try to start a discussion: what’s the most involved awk programs people have come across? I feel the language is heavily under utilized.

                                                                                                                    Not especially involved, but I always thought that Ward Cunningham’s expense calculator was very clever.

                                                                                                                      1. 3

                                                                                                                        More on topic, let me try to start a discussion: what’s the most involved awk programs people have come across? I feel the language is heavily under utilized.

                                                                                                                        Here’s a 3D game written in gawk.

                                                                                                                        1. 1

                                                                                                                          More on topic, let me try to start a discussion: what’s the most involved awk programs people have come across? I feel the language is heavily under utilized.

                                                                                                                          At my first software engineering job in the digital mapping world we had a data quality check tool written by someone who only knew shell/awk. This thing was a roughly 2,000-line gawk and ksh mish-mash. It worked and it was quite well structured, but I still feel it would have been a bit easier to maintain in another language. This was in the early 2000s.

                                                                                                                      1. 1

                                                                                                                        Ignoring the user experience as envisioned here, I think it would be tough for any organization to rely on incoming email as their primary point of entry. It’s had a reputation for being unreliable, and when something goes wrong (e.g. being mistakenly blocked) you have very little direct control. Similarly, while there are methods of authenticating incoming email, the alternative contact mechanisms (e.g. cell phone) don’t have a similar level of authentication.

                                                                                                                        I’d propose a middle ground. Instead of signing up with a bunch of details and then verifying that email address, I think it would be an improvement just to ask for account details after verifying the email address. That would eliminate the biggest issue with the verification step: it lets you down after you’ve already emotionally committed to using a service.

                                                                                                                        1. 18

                                                                                                                          I wish someone who debugged this would give more of an explanation for the crash. Given the era, I presume this meant the dreaded blue screen. But why was the system unable to deal with a faulty hard drive? Did a timed-out read in a VMM path cause a blue screen? I’ve blasted Linux for running for months with a faulty drive that was spewing IO errors into the logs without me noticing. Maybe the drive had a second resonant frequency that caused a perpetual delay such that even retries failed? I would love a concrete description from the Windows OS dev perspective.

                                                                                                                          1. 8

                                                                                                                            It depends on where the failure happens. One key difference between the NT kernel and most other kernels is that almost all kernel memory in NT is pageable. If you hit a timeout trying to bring back a kernel page that’s needed on an interrupt-handling path or to release a lock, then you may well crash. I suspect that, if it’s caused by resonance, the drive might report a read error because the head jittered during the read, rather than reporting a recoverable error. In this case, you’d be unable to satisfy the page fault and have no option but to die.

                                                                                                                          1. 2

                                                                                                                            For a more serious and comprehensive survey of IPv6 adoption motivations and obstacles, folks may be interested in this old IPJ issue: https://ipj.dreamhosters.com/wp-content/uploads/issues/2011/ipj14-1.pdf

                                                                                                                            1. 6

                                                                                                                              Filippo Valsorda’s response to this, or via the original Twitter link.

                                                                                                                              1. 6

                                                                                                                                “Don’t ask any questions about the intentions of the known-malicious entity which has recommended secretly known-weak cryptography multiple times in the past at the behest of the NSA. Don’t use your legal rights to scrutinize the government. Trust that the NSA and NIST have your best interests at heart, citizen.”

                                                                                                                                Yeah, no. It’s NIST’s responsibility to prove themselves no longer malicious. So far, they haven’t.

                                                                                                                                1. 4

                                                                                                                                  This seems disingenuous. Bernstein doesn’t accuse anyone of bribing researchers, he accuses the NSA of hiring them which makes bribing them unnecessary. I think that’s just a matter of public record.

                                                                                                                                  1. 9

                                                                                                                                    The underlying things here are that A) a FOIA suit is a pretty standard thing and is not evidence of malice or evidence that the claims advanced about the contest are true (lots of agencies mess up FOIA, for reasons which often are banal, and get sued over it), and B) the documents obtained from it are almost certainly not going to provide any evidence for the claims, either.

                                                                                                                                    There are basically the following possibilities, in what I think is decreasing order of probability:

                                                                                                                                    • He wins the FOIA suit and receives the full set of requested documents and they don’t contain any references to nefarious NSA behavior, in which case he can say that he’s being stonewalled and the real documents would vindicate his claims.
                                                                                                                                    • He doesn’t win the FOIA suit and doesn’t get any documents, in which case he can say that he’s being stonewalled and the documents would vindicate his claims.
                                                                                                                                    • He wins the FOIA suit and receives a partial or null set of documents with no further explanation, in which case he can say that he’s being stonewalled and the full set would vindicate his claims.
                                                                                                                                    • He wins the FOIA suit and receives a partial or null set with some sort of Glomar response or similar for why it wasn’t the full set, in which case he can say that he’s being stonewalled and the full set would vindicate his claims.

                                                                                                                                    Notice how in every possible outcome of the FOIA suit, the result is: “he can say that he’s being stonewalled and the full/real set of documents would vindicate his claims”. That’s an incredibly strong indicator that this FOIA suit cannot return any documents that would support the claims he’s making. Which means – in my opinion, at least – the suit itself is being presented disingenuously. If he wants to go FOIA stuff, by all means FOIA stuff. But it’s not going to provide any evidence for his claims, and in fact we can pre-write the likely followup regardless of the outcome of the FOIA.

                                                                                                                                    1. 3

                                                                                                                                      I was really only talking about the bit harping on about the “bribery” accusation (which I think was just really badly written hyperbole)

                                                                                                                                      1. 9

                                                                                                                                        Well, you’re right that technically Bernstein doesn’t ever come out and say the exact literal words “I accuse the NSA of bribing researchers”. But the point – and I think this is part of what Filippo gets at – is that Bernstein is employing dishonest rhetorical tactics in order to maintain a future claim of plausible deniability when it comes to explicit accusations, despite everyone being able to clearly read the implicit claims he wants us all to notice and take away from what he wrote.

                                                                                                                                        1. 2

                                                                                                                                          Yeah, that’s reasonable.

                                                                                                                                      2. 1

                                                                                                                                        Notice how in every possible outcome of the FOIA suit, the result is: “he can say that he’s being stonewalled and the full/real set of documents would vindicate his claims”

                                                                                                                                        What I’m noticing more is that you haven’t listed every possible outcome. You’ve only listed scenarios that assume bad faith. You don’t even have to assume good faith on his part to get to additional possible outcomes, though. E.g., there’s another possibility where he wins in court, gets documents that show internal deliberations, and claims that the evaluation has not all been public, contrary to what NIST claims. Even his detractors hang the value of the competition on the public nature of the evaluation. If everyone agrees that is a critical component, then verifying it could be in good faith, even if his beliefs extend into the shadow that would be cast over the results.

                                                                                                                                        1. 2

                                                                                                                                          Wait, does NIST seriously claim that “all evaluation has been public”? That seems plainly impossible to be true. As a first counterexample, people not on the review board. Do you have anywhere this is actually stated?

                                                                                                                                          1. 1

                                                                                                                                            The language is certainly up for interpretation, but Ctrl-F for “Transparency for NISTPQC” to read about his transparency motivations for the FOIA suit.

                                                                                                                                  1. 16

                                                                                                                                    This article was long enough they could have packed in some links to educational content for how to learn these dark arts.

                                                                                                                                    I recently ran into this problem again while working on a daemon that needed to clean up its unix socket file when terminating. I used to handle this fairly well in my single-threaded C Linux programs back in the day (circa 1995-2005) just by adjusting the signal handlers. My recent experience echoes the article, though: in the intervening years the ubiquity of threads has changed the game, and in today’s open source ecosystem, where dependencies can spin up threads without you knowing it, this is particularly difficult.

                                                                                                                                    For my project I found the ctrlc rust crate. My first read through the code was “what is all of this garbage?” but as I re-read it I realized that (ignoring the cross-platform stuff) it is implementing the self-pipe pattern, which does indeed seem to be a reliable and minimal solution to the problem (and other related problems).
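
                                                                                                                                    For illustration, here is a minimal sketch of the self-pipe pattern in Rust using the libc crate (this is not ctrlc’s actual code, which also handles other platforms, errors, and EINTR):

                                                                                                                                      use std::sync::atomic::{AtomicI32, Ordering};

                                                                                                                                      // Write end of the self-pipe, published so the signal handler can reach it.
                                                                                                                                      static PIPE_WRITE_FD: AtomicI32 = AtomicI32::new(-1);

                                                                                                                                      extern "C" fn on_sigint(_sig: libc::c_int) {
                                                                                                                                          // Only async-signal-safe work in here: write one byte to wake the main loop.
                                                                                                                                          let fd = PIPE_WRITE_FD.load(Ordering::Relaxed);
                                                                                                                                          let byte = 1u8;
                                                                                                                                          unsafe {
                                                                                                                                              libc::write(fd, &byte as *const u8 as *const libc::c_void, 1);
                                                                                                                                          }
                                                                                                                                      }

                                                                                                                                      fn main() {
                                                                                                                                          // Create the pipe and install the handler.
                                                                                                                                          let mut fds = [0 as libc::c_int; 2];
                                                                                                                                          unsafe {
                                                                                                                                              libc::pipe(fds.as_mut_ptr());
                                                                                                                                          }
                                                                                                                                          PIPE_WRITE_FD.store(fds[1], Ordering::Relaxed);

                                                                                                                                          let handler: extern "C" fn(libc::c_int) = on_sigint;
                                                                                                                                          unsafe {
                                                                                                                                              libc::signal(libc::SIGINT, handler as libc::sighandler_t);
                                                                                                                                          }

                                                                                                                                          // Block on the read end; a successful read means a signal arrived.
                                                                                                                                          let mut buf = [0u8; 1];
                                                                                                                                          unsafe {
                                                                                                                                              libc::read(fds[0], buf.as_mut_ptr() as *mut libc::c_void, 1);
                                                                                                                                          }

                                                                                                                                          // Normal (non-reentrant) cleanup is safe here, e.g. removing the unix socket file.
                                                                                                                                          println!("got SIGINT, shutting down");
                                                                                                                                      }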

                                                                                                                                    1. 6

                                                                                                                                      If you’ve generally followed his work you can skip the first ~75%.

                                                                                                                                      1. 3

                                                                                                                                        BSD make is great for small projects which don’t have a lot of files and don’t have any compile-time options. For larger projects in which you want to enable/disable options at compile time, you might have to use a more complete build system.

                                                                                                                                        Here’s the problem: Every large project was once a small project. The FreeBSD build system, which is built on top of bmake, is an absolute nightmare to use. It is slow, impossible to modify, and when it breaks it’s completely incomprehensible trying to find out why.

                                                                                                                                        For small projects, a CMake build system is typically 4-5 lines of CMake, so bmake isn’t really a win here, but CMake can grow a lot bigger before it becomes an unmaintainable mess and it’s improving all of the time. Oh, and it can also generate the compile_commands.json that your LSP implementation (clangd or whatever) uses to do syntax highlighting. I have never managed to make this work with bmake (@MaskRay published a script to do it but it never worked for me).
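
                                                                                                                                        For reference, a small project’s CMakeLists.txt along those lines can look roughly like this (file and target names made up):

                                                                                                                                          cmake_minimum_required(VERSION 3.16)
                                                                                                                                          project(myprog C CXX)
                                                                                                                                          # Writes compile_commands.json into the build directory for clangd and other LSP tools.
                                                                                                                                          set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
                                                                                                                                          add_executable(myprog src1.c src2.cc)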

                                                                                                                                        1. 18

                                                                                                                                          The problem is that cmake is actually literal hell to use. I would much rather use even the shittiest makefile than cmake.

                                                                                                                                          Some of the “modern” cmake stuff is slightly less horrible. Maybe if the cmake community had moved on to using targets, things would’ve been a little better. But most of the time, you’re still stuck with ${FOO_INCLUDE_DIRS} and ${FOO_LIBRARIES}. And the absolutely terrible syntax and stringly typed nature won’t ever change.

                                                                                                                                          Give me literally any build system – including an ad-hoc shell script – over cmake.

                                                                                                                                          1. 6

                                                                                                                                            Agreed. Personally, I also detest meson/ninja in the same way. The only things that I can tolerate writing AND using are BSD makefiles, POSIX makefiles, and plan9’s mkfiles.

                                                                                                                                            1. 2

                                                                                                                                              You are going to have a very fun time dealing with portability. Shared libraries, anyone?

                                                                                                                                              1. 2

                                                                                                                                                Not really a problem, pkg-config tells your makefile what cflags and ldflags/ldlibs to add.
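
                                                                                                                                                For example (library name arbitrary), a fragment along these lines works in both bmake and GNU make 4.0+:

                                                                                                                                                  # Ask pkg-config for the flags at parse time; the != assignment works in bmake and GNU make 4.0+.
                                                                                                                                                  ZSTD_CFLAGS != pkg-config --cflags libzstd
                                                                                                                                                  ZSTD_LIBS != pkg-config --libs libzstd

                                                                                                                                                  CFLAGS += ${ZSTD_CFLAGS}
                                                                                                                                                  LDLIBS += ${ZSTD_LIBS}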

                                                                                                                                                1. 2

                                                                                                                                                  Using it is less of a problem - creating shared libraries is much harder. Every linker is weird and special, even with ccld. As someone dealing with AIX at my day job…

                                                                                                                                            2. 5

                                                                                                                                              The problem is that cmake is actually literal hell to use. I would much rather use even the shittiest makefile than cmake.

                                                                                                                                              Yes. The last time I seriously used cmake for cross compiles (trying to build third-party non-android code to integrate into an Android app) I ended up knee deep in strace to figure out which of the hundreds of thousands of lines of cmake scripts were being included from the system cmake directory, and then using gdb on a debug build of cmake to try to figure out where it was constructing the incorrect strings, because I had given up on actually being able to understand the cmake scripts themselves, and why they were double concatenating the path prefix.

                                                                                                                                              Using make for the cross compile was merely quite unpleasant.

                                                                                                                                              Can we improve on make? Absolutely. But cmake is not that improvement.

                                                                                                                                              1. 2

                                                                                                                                                What were you trying to build? I have cross-compiled hundreds of CMake things and I don’t think I’ve ever needed to do anything other than give it a cross-compile toolchain file on the command line. Oh, and that was cross-compiling for an experimental CPU, so no off-the-shelf support from anything, yet all CMake required was a 10-line text file passed on the command line.

                                                                                                                                                1. 2

                                                                                                                                                  This was in 2019-ish, so I don’t remember which of the ported packages it was. It may have been some differential equation packages, opencv, or some other packages. There was some odd interaction between their cmake files and the android toolchain’s cmake helpers that led to duplicated build directory prefixes like:

                                                                                                                                                   /home/ori/android/ndk//home/ori/android/ndk/$filepath
                                                                                                                                                  

                                                                                                                                                  which was nearly impossible to debug. The fix was easy once I found the mis-expanded variable, but tracking it down was insanely painful. The happy path with cmake isn’t great but the sad path is bad enough that I’m not touching it in any new software I write.

                                                                                                                                                  1. 2

                                                                                                                                                    The happy path with cmake isn’t great but the sad path is bad enough that I’m not touching it in any new software I write.

                                                                                                                                                    The sad path with bmake is far sadder. I spent half a day trying to convince a bmake-based build system to compile the output from yacc as C++ instead of C before giving up. There was some magic somewhere but I have no idea where and a non-trivial bmake build system spans dozens of include files with syntax that looks like line noise. I’ll take add_target_option over ${M:asdfasdfgkjnerihna} any day.

                                                                                                                                                    1. 3

                                                                                                                                                      You’re describing the happy path.

                                                                                                                                                      Cmake ships with just over 112,000 lines of modules, and it seems any non trivial project gets between hundreds and thousands of lines of additional cmake customizations and copy-pasted modules on top of that. And if anything goes wrong in there, you need to get in and debug that code. In my experience, it often does.

                                                                                                                                                      With make, it’s usually easier to debug because there just isn’t as much crap pulled in. And even when there is, I can hack around it with a specific, ad-hoc target. With cmake, if something goes wrong deep inside it, I expect to spend a week getting it to work. And because I only touch cmake if I have to, I usually don’t have the choice of giving up – I just have to deal with it.

                                                                                                                                                      I’m very happy that these last couple years, I spend much of my paid time writing Go, and not dealing with other people’s broken build systems.

                                                                                                                                                      1. 1

                                                                                                                                                        Cmake ships with just over 112,000 lines of modules, and it seems any non trivial project gets between hundreds and thousands of lines of additional cmake customizations and copy-pasted modules on top of that.

                                                                                                                                                        The core bmake files are over 10KLoC, which doesn’t include the built-in rules, and they do far less than the CMake standard library (which includes cross compilation, finding dependencies using various tools, and so on). They are not namespaced, because bmake does not have any notion of scopes for variables, and so any one of them may define some variable that another consumes.

                                                                                                                                                        With make, its usually easier to debug because there just isn’t as much crap pulled in.

                                                                                                                                                        That is not my experience with any large project that I’ve worked on with a bmake or GNU make build system. They build some half-arsed analogue of a load of the CMake modules and, because there’s no notion of variable scope in these systems, everything depends on some variable that is set somewhere in a file that’s included at three levels of indirection by the thing that includes the Makefile for the component that you’re currently looking at. Everything is spooky action at a distance. You can’t find the thing that’s setting the variable, because it’s constructing the variable name by applying some complex pattern to the string. When I do find it, instead of functions with human-readable names, I discover that it’s a line like _LDADD_FROM_DPADD= ${DPADD:R:T:C;^lib(.*)$;-l\1;g} (actual line from a bmake project, far from the worst I’ve seen, just the first one that jumped out opening a random .mk file), which is far less readable than anything I’ve ever read in any non-Perl language.

                                                                                                                                                        In contrast, modern CMake has properties on targets and the core modules work with this kind of abstraction. There are a few places where some global variables still apply, but these are easy to find with grep. Everything else is scoped. If a target is doing something wrong, then I need to look at how that target is constructed. It may be as a result of some included modules, but finding the relevant part is usually easy.

                                                                                                                                                        The largest project that I’ve worked on with a CMake build system is LLVM, which has about 7KLoC of custom CMake modules. It’s not wonderful, but it’s far easier to modify the build system than I’ve found for make-based projects a tenth the size. The total time that I’ve wasted on CMake hacking for it over the last 15 years is less than a day. The time I’ve wasted failing to get Make-based (GNU Make or bmake) projects to do what I want is weeks over the same period.

                                                                                                                                              2. 3

                                                                                                                                                Modern CMake is a lot better, and it’s being aggressively pushed because things like vcpkg require modern CMake, or require you to wrap your crufty CMake in something with proper exported targets for importing external dependencies.

                                                                                                                                                I’ve worked on projects with large CMake infrastructure, large GNU make infrastructure, and large bmake infrastructure. I have endured vastly less suffering as a result of the CMake infrastructure than the other two. I have spent entire days trying to change things in make-based build systems and given up, whereas CMake I’ve just complained about how ugly the macro language is.

                                                                                                                                                1. 2

                                                                                                                                                  Would you be interested to try build2? I am willing to do some hand-holding (e.g., answer “How do I ..?” questions, etc) if that helps.

                                                                                                                                                  To give a few points of comparison based on topics brought up in other comments:

                                                                                                                                                  1. The simple executable buildfile would be a one-liner like this:

                                                                                                                                                    exe{my-prog}: c{src1} cxx{src2}
                                                                                                                                                    

                                                                                                                                                    With the libzstd dependency:

                                                                                                                                                    import libs = libzstd%lib{zstd}
                                                                                                                                                    
                                                                                                                                                    exe{my-prog}: c{src1} cxx{src2} $libs
                                                                                                                                                    
                                                                                                                                                  2. Here is a buildfile from a library (Linux Kconfig configuration system) that uses lex/yacc: https://github.com/build2-packaging/kconfig/blob/master/liblkc/liblkc/buildfile

                                                                                                                                                  3. We have a separate section in the manual on the available build debugging mechanisms: https://build2.org/build2/doc/build2-build-system-manual.xhtml#intro-diag-debug

                                                                                                                                                  4. We have a collection of HOWTOs that may be of interest: https://github.com/build2/HOWTO/#readme

                                                                                                                                                  1. 3

                                                                                                                                                    I like the idea of build2. I was hoping for a long time that Jon Anderson would finish Fabrique, which had some very nice properties (merging of objects for inheriting flags, a file type in the language that was distinct from a string and could be mapped to a path or a file descriptor on invocation).

                                                                                                                                                    exe{my-prog}: c{src1} cxx{src2}

                                                                                                                                                    Perhaps it’s just me, but I really don’t find that to be great syntax. Software in general (totally plausible rule of thumb that I was told and believe) is read around 10 times more than it is written. For build systems, that’s probably closer to 100, so terse syntax scares me.

                                                                                                                                                    The problem I have now is ecosystem lock-in. 90% of the things that I want to depend on provide a CMake exported project. I can use vcpkg to grab thousands of libraries to statically link against and everything just works. From this example:

                                                                                                                                                    With the libzstd dependency:

                                                                                                                                                    import libs = libzstd%lib{zstd}

                                                                                                                                                    How does it find zstd? Does it rely on an export target that zstd exposed, a built-in package, or some other mechanism?

                                                                                                                                                    CMake isn’t what I want, but I can see a fairly clear path to evolving it to be what I want. I don’t see that path for replacing it with something new and for the new thing to be worth replacing CMake it would need to be an order of magnitude better for my projects and able to consume CMake exported targets from other projects (not pkg-config, which can’t even provide flags for compiler invocations for Objective-C, let alone handle any of the difficult configuration cases). If it can consume CMake exported targets, then my incentive for libraries is to use CMake because then I can export a target that both it and CMake can consume.

                                                                                                                                                    1. 2

                                                                                                                                                      Perhaps it’s just me, but I really don’t find that to be great syntax. Software in general (totally plausible rule of thumb that I was told and believe) is read around 10 times more than it is written. For build systems, that’s probably closer to 100, so terse syntax scares me.

                                                                                                                                                      No, it’s not just you, this is a fairly common complaint from people who first see it but interestingly not from people who used build2 for some time (we ran a survey). I believe the terse syntax is beneficial for common constructs (and what I’ve shown is definitely one of the most common) because it doesn’t get in the way when trying to understand more complex buildfiles. At least this has been my experience.

                                                                                                                                                      How does it find zstd? Does it rely on an export target that zstd exposed, a built-in package, or some other mechanism?

                                                                                                                                                      That depends on whether you are using just the build system or the build system and the package manager stack. If just the build system, then you can either specify the development build to import explicitly (e.g., config.import.libzstd=/tmp/libzstd), bundle it with your project (in which case it gets found automatically) or, failing all of the above, build2 will try to find the installed version (and extract additional options/libraries from pkg-config files, if any).

                                                                                                                                                      If you are using the package manager, then by default it will download and build libzstd from the package (but you can also instruct the package manager to use the system-installed version if you prefer). We happen to have the libzstd package sitting in the submission queue: https://queue.cppget.org/libzstd

                                                                                                                                                      But that’s a pretty vanilla case that most tools can handle these days. The more interesting one is lex/yacc from the buildfile I linked. It uses the same import mechanism to find the tools:

                                                                                                                                                      import! [metadata] yacc = byacc%exe{byacc}
                                                                                                                                                      import! [metadata] flex = reflex%exe{reflex}
                                                                                                                                                      

                                                                                                                                                      And we have them packaged: https://cppget.org/reflex and https://cppget.org/byacc. And the package manager will download and build them for you. And it’s smart enough to know to do it in a separate host configuration so that they can still be executed during the build even if you are cross-compiling. This works auto-magically, even on Windows. (Another handy tool that can be used like that is xxd: https://cppget.org/xxd).

                                                                                                                                                      CMake isn’t what I want, but I can see a fairly clear path to evolving it to be what I want. I don’t see that path for replacing it with something new and for the new thing to be worth replacing CMake it would need to be an order of magnitude better for my projects.

                                                                                                                                                      I am clearly biased but I think it’s actually not that difficult to be an order of magnitude better than CMake, it’s just really difficult to see if all you’ve experienced is CMake (and maybe some make-based projects).

                                                                                                                                                      Firstly, CMake is a meta build system which closes the door on quite a few things (for an example, check how CMake plans to support C++20 modules; in short it’s a “let’s pre-scan the world” approach). Then, on one side of this meta build system sandwich you have a really primitive build model with the famous CMake macro language. On the other you have the lowest common denominator problem of the underlying build systems. Even arguably the best of them (ninja) is quite a basic tool. The result is that every new piece of functionality, say support for a new source code generator, has to be implemented in this dreaded macro language with an eye on the underlying build tools. In build2, in contrast, you can implement your own build system module in C++ and the toolchain will fetch, build, and load it for you automatically (pretty much the same as the lex/yacc tools above). Here is a demo I’ve made of a fairly elaborate source code generator setup for a user (reportedly it took a lot of hacking around to support in CMake and was the motivation for them to switch to build2): https://github.com/build2/build2-dynamic-target-group-demo/

                                                                                                                                                      1. 3

                                                                                                                                                        No, it’s not just you, this is a fairly common complaint from people who first see it but interestingly not from people who used build2 for some time (we ran a survey)

                                                                                                                                                        That’s a great distinction to make. Terse syntax is fine for operations that I will read every time I look in the file, but it’s awful for things that I’ll see once every few months. I don’t know enough about build2 to comment on where it falls on this spectrum.

                                                                                                                                                        For me, the litmus test of a build system is one that is very hard to apply to new ones: If I want to modify a build system for a large project that has aggregated for 10-20 years, how easy is it for me to understand their custom parts? CMake is not wonderful here, but generally the functions and macros are easy to find and to read once I’ve found them. bmake is awful because its line-noise syntax is impossible to search for (how do you find what the M modifier in an expression does in the documentation? “M” as a search string gives a lot of false positives!).

                                                                                                                                                        That depends on whether you are using just the build system or the build system and the package manager stack. If just the build system, then you can either specify the development build to import explicitly (e.g., config.import.libzstd=/tmp/libzstd), bundle it with your project (in which case it gets found automatically) or, failing all of the above, build2 will try to find the installed version (and extract additional options/libraries from pkg-config files, if any).

                                                                                                                                                        My experience with pkg-config is not very positive. It just about works for trivial options but is not sufficiently expressive for even simple things like different flags for debug and release builds, let alone anything with custom configuration options.

                                                                                                                                                        If you are using the package manager, then by default it will download and build libzstd from the package (but you can also instruct the package manager to use the system-installed version if you prefer). We happen to have the libzstd package sitting in the submission queue: https://queue.cppget.org/libzstd

                                                                                                                                                        That looks a lot more promising, especially being able to use the system-installed version. Do you provide some ontology that allows systems to map build2 package names to installed packages, so that someone packaging a project that I build with build2 doesn’t have to do this translation for everything that they package?

                                                                                                                                                        And we have them packaged: https://cppget.org/reflex and https://cppget.org/byacc. And the package manager will download and build them for you. And it’s smart enough to know to do it in a separate host configuration so that they can still be executed during the build even if you are cross-compiling. This works auto-magically, even on Windows. (Another handy tool that can be used like that is xxd: https://cppget.org/xxd).

                                                                                                                                                        This is a very nice property, though one that I already get from vcpkg + CMake.

                                                                                                                                                        Firstly, CMake is a meta build system which closes the door on quite a few things (for an example, check how CMake plans to support C++20 modules; in short it’s a “let’s pre-scan the world” approach). Then, on one side of this meta build system sandwich you have a really primitive build model with the famous CMake macro language.

                                                                                                                                                        The language is pretty awful, but the underlying object model doesn’t seem so bad and is probably something that could be exposed to another language with some refactoring (that’s probably the first thing that I’d want to do if I seriously spent time trying to improve CMake).

                                                                                                                                                        In build2, in contrast, you can implement your own build system module in C++ and the toolchain will fetch, build, and load it for you automatically (pretty much the same as the lex/yacc tools above). Here is a demo I’ve made of a fairly elaborate source code generator setup for a user (reportedly it took a lot of hacking around to support in CMake and was the motivation for them to switch to build2):

                                                                                                                                                        That’s very interesting and might be a good reason to switch for a project that I’m currently working on.

                                                                                                                                                        I have struggled in the past with generated header files with CMake, because the tools can build the dependency edges during the build, but I need a coarse-grained rule for the initial build that says ‘do the step that generates these headers before trying to build this target’ and there isn’t a great way of expressing that this is a fudge and so I can break that arc for incremental builds. Does build2 have a nice model for this kind of thing?

                                                                                                                                                        1. 2

                                                                                                                                                          If I want to modify a build system for a large project that has aggregated for 10-20 years, how easy is it for me to understand their custom parts?

                                                                                                                                                          In build2, there are two ways to do custom things: you can write ad hoc pattern rules in a shell-like language (similar to make pattern rules, but portable and higher-level) and everything else (more elaborate rules, functions, configuration, etc) is written in C++(14). Granted C++ can be made an inscrutable mess, but at least it’s a known quantity and we try hard to keep things sane (you can get a taste of what that looks like from the build2-dynamic-target-group-demo/libbuild2-compiler module I linked to earlier).

                                                                                                                                                          My experience with pkg-config is not very positive. It just about works for trivial options but is not sufficiently expressive for even simple things like different flags for debug and release builds, let alone anything with custom configuration options.

                                                                                                                                                          pkg-config has its issues, I agree, plus most build systems don’t (or can’t) use it correctly. For example, you wouldn’t try to cram both debug and release builds into a single library binary (e.g., .a or .so; well, unless you are Apple, perhaps) so why try to cram both debug and release (or static/shared for that matter) options into the same .pc file?

                                                                                                                                                          Plus, besides the built-in values (Cflags, etc), pkg-config allows for free-form variables. So you can extend the format how you see fit. For example, in build2 we use the bin.whole variable to signal that the library should be linked in the “whole archive” mode (which we then translate into the appropriate linker options). Similarly, we’ve used pkg-config variables to convey C++20 modules information and it also panned out quite well. And we now convey custom C/C++ library metadata this way.

                                                                                                                                                          So the question is do we subsume all the existing/simple cases and continue with pkg-config by extending its format for more advanced cases or do we invent a completely new format (which is what WG21’s SG15 is currently trying to do)?

Do you provide some ontology that allows systems to map build2 package names to installed packages, so that someone packaging a project that I build with build2 doesn’t have to do this translation for everything that they package?

Not yet, but we have had ideas along these lines, though in a different direction: we were thinking of each build2 package also providing a mapping to the system package names for the commonly used distributions (e.g., libzstd-dev for Debian/Ubuntu, libzstd-devel for Fedora/etc) so that the build2 package manager can query the installed package’s version (e.g., to make sure the version constraints are satisfied) or invoke the system package manager to install the system package. If we had such a mapping, it would also allow us to achieve what you are describing.

                                                                                                                                                          This is a very nice property, though one that I already get from vcpkg + CMake.

Interesting. So you could ask vcpkg to build you a library without even knowing it has build-time dependencies on some tools, and vcpkg will automatically create a suitable host configuration, build those tools there, and pass them to the library’s build so that it can execute them during its build?

If so, that’s quite impressive. For us, the “create a suitable host configuration” part turned into a particularly deep rabbit hole. What is “suitable”? In our case we’ve decided to use the same compiler/options as what was used to build build2 itself. But what if the PATH environment variable has changed and now clang++ resolves to something else? So we had to invent a notion of hermetic build configurations where we save all the environment variables that affect every tool involved in the build (like CPATH and friends). One nice offshoot of this work is that in non-hermetic build configurations (which are the default), we now detect changes to the environment variables besides everything else (sources, options, compiler versions, etc).

                                                                                                                                                          I have struggled in the past with generated header files with CMake, because the tools can build the dependency edges during the build, but I need a coarse-grained rule for the initial build that says ‘do the step that generates these headers before trying to build this target’ and there isn’t a great way of expressing that this is a fudge and so I can break that arc for incremental builds. Does build2 have a nice model for this kind of thing?

                                                                                                                                                          Yes, in build2 you normally don’t need any fudging, the C/C++ compile rules are prepared to deal with generated headers (via -MG or similar). There are use-cases where it’s impossible to handle the generated headers fully dynamically (for example, because the compiler may pick up a wrong/outdated header from another search path) but this is also taken care of. See this article for the gory details: https://github.com/build2/HOWTO/blob/master/entries/handle-auto-generated-headers.md

                                                                                                                                                          That’s very interesting and might be a good reason to switch for a project that I’m currently working on.

As I mentioned earlier, I would be happy to do some hand-holding if you want to give it a try. Also, build2 is not exactly simple and has a very different mental model compared to CMake. In particular, CMake is a “mono-repo first” build system while build2 is decidedly “multi-repo first”. As a result, some things that are often taken as gospel by CMake users (like the output being a subdirectory of the source directory) are blasphemy in build2. So there might be some culture shock.

                                                                                                                                                          BTW, in your earlier post you’ve mentioned Fabrique by Jon Anderson but I can’t seem to find any traces of it. Do you have any links?

                                                                                                                                                          1. 2

                                                                                                                                                            Granted C++ can be made an inscrutable mess, but at least it’s a known quantity and we try hard to keep things sane (you can get a taste of what that looks like from the build2-dynamic-target-group-demo/libbuild2-compiler module I linked to earlier).

                                                                                                                                                            This makes me a bit nervous because it seems very easy for non-portable things to creep in with this. To give a concrete example, if my build environment is a cloud service then I may not have a local filesystem and anything using the standard library for file I/O will be annoying to port. Similarly, if I want to use something like Capsicum to sandbox my build then I need to ensure that descriptors for files read by these modules are provided externally.

                                                                                                                                                            It looks as if the abstractions there are fairly clean, but I wonder if there’s any way of linting this. It would be quite nice if this could use WASI as the host interface (even if compiling to native code) so that you had something that at least can be made to run anywhere.

                                                                                                                                                            pkg-config has its issues, I agree,

My bias against pkg-config originates from trying to use it with Objective-C. I gave up trying to add --objc-flags and --objcxx-flags options because the structure of the code made this kind of extension too hard. Objective-C is built with the same compiler as C/C++ and takes mostly the same options, yet it wasn’t possible to support. This left me with little confidence that the system could adapt to any changes in requirements from C/C++, and no chance of it providing information for any other language. This was about 15 years ago, so it may have improved since then.

Not yet, but we have had ideas along these lines, though in a different direction: we were thinking of each build2 package also providing a mapping to the system package names for the commonly used distributions

That feels back to front, because you’re traversing the graph in the opposite direction to the edge that must exist. Someone packaging libFoo for their distribution must know where libFoo comes from, and so is in a position to maintain this mapping (we could fairly trivially automate it from the FreeBSD ports system for any package that we build from a cppget source, for example). In contrast, the author of a package doesn’t always know where it ends up being packaged. I’ve looked on repology at some of my code and discovered that I hadn’t even heard of a load of the distributions that package it, so expecting me to maintain a list of those (and keep it up to date with version information) sounds incredibly hard and likely to lead to a two-tier system (implicit in your use of the phrase ‘commonly used distributions’) where building on Ubuntu and Fedora is easy and building on less-popular targets is harder.

Interesting. So you could ask vcpkg to build you a library without even knowing it has build-time dependencies on some tools, and vcpkg will automatically create a suitable host configuration, build those tools there, and pass them to the library’s build so that it can execute them during its build?

Yes, but there’s a catch: vcpkg runs its builds as part of the configure stage, not as part of the build stage. This means that running cmake may take several minutes, while then running ninja completes in a second or two. If you modify vcpkg.json then this will force CMake to re-run, and that will cause the packages to be rebuilt. vcpkg packages have a notion of host tools, which are built with the triplet for your host configuration and are then exposed to the rest of the build. There are some known issues with it, so they might be starting down the same rabbit hole that you ended up in.

                                                                                                                                                            Yes, in build2 you normally don’t need any fudging, the C/C++ compile rules are prepared to deal with generated headers (via -MG or similar).

It’s the updating that I’m particularly interested in. Imagine that I have a make-headers build step that has sub-targets that generate foo.h and bar.h, and then a step for compiling prog.cc, which includes foo.h. On the first (non-incremental) build, I want the compile step that consumes prog.cc to depend on make-headers (big hammer, so that I don’t have to track which generated headers my prog.cc depends on). But after that I want the compiler to update the rule for prog.cc so that it depends only on foo.h. I’ve managed to produce some hacks that do this in CMake but they’re ugly and fragile. I’d love to have some explicit support for over-approximate dependencies that will be fixed during the first build. bmake’s meta mode does this by using a kernel module to watch the files that the compiler process reads and dynamically updating the build rules to depend on those. This has some nice side effects, such as causing a complete rebuild if you upgrade your compiler or a shared library that the compiler depends on.

                                                                                                                                                            Negative dependencies are a separate (and more painful problem).

As I mentioned earlier, I would be happy to do some hand-holding if you want to give it a try. Also, build2 is not exactly simple and has a very different mental model compared to CMake. In particular, CMake is a “mono-repo first” build system while build2 is decidedly “multi-repo first”. As a result, some things that are often taken as gospel by CMake users (like the output being a subdirectory of the source directory) are blasphemy in build2. So there might be some culture shock.

                                                                                                                                                            All of my builds are done from a separate ZFS dataset that has sync turned off, so out-of-tree builds are normal for me, but I’ve not had any problems with that in CMake. One of the projects that I’m currently working on looks quite a lot like a cross-compile SDK and so build2 might be a good fit (we provide some build tools and components and want consumers to pick up our build system components). I’ll do some reading and see how hard it would be to port it over to build2. It’s currently only about a hundred lines of CMake, so not so big that a complete rewrite would be painful.

                                                                                                                                                            1. 1

                                                                                                                                                              This makes me a bit nervous because it seems very easy for non-portable things to creep in with this.

These are interesting points that admittedly we haven’t thought much about yet. But there are plans to support distributed compilation and caching which, I am sure, will force us to think this through.

                                                                                                                                                              One thing that I have been thinking about lately is how much logic should we allow one to put in a rule (since, being written in C++, there is not much that cannot be done). In other words, should rules be purely glue between the build system and the tools that do the actual work (e.g., generate some source code) or should we allow the rules to do the work themselves without any tools? To give a concrete example, it would be trivial in build2 to implement a rule that provides the xxd functionality without any external tools.

                                                                                                                                                              Either way I think the bulk of the rules will still be the glue type simply because nobody will want to re-implement protoc or moc directly in the rule. Which means the problem is actually more difficult: it’s not just the rules that you need to worry about, it’s also the tools. I don’t think you will easily convince many of them to work without a local filesystem.

                                                                                                                                                              That feels back to front because you’re traversing the graph in the opposite direction to the edge that must exist. Someone packaging libFoo for their distribution must know where libFoo comes from and so is in a position to maintain this mapping […]

From this point of view, yes. But consider also this scenario: whoever is packaging libFoo for, say, Debian is not using build2 (because libFoo upstream, say, still uses CMake) and so has no interest in maintaining this mapping.

Perhaps this should just be a separate registry where any party (build2 package author, distribution package author, or an unrelated third party) can contribute the mapping. This would work fairly well for archive-based package repositories, where we can easily merge this information into the repository metadata, but not so well for git-based ones, where things are decentralized.

Imagine that I have a make-headers build step that has sub-targets that generate foo.h and bar.h, and then a step for compiling prog.cc, which includes foo.h. On the first (non-incremental) build, I want the compile step that consumes prog.cc to depend on make-headers (big hammer, so that I don’t have to track which generated headers my prog.cc depends on). But after that I want the compiler to update the rule for prog.cc so that it depends only on foo.h.

You don’t need such “big hammer” aggregate steps in build2 (unless you must, for example, because the tool can only produce all the headers at once). Here is a concrete example:

                                                                                                                                                              hxx{*}: extension = h
                                                                                                                                                              
                                                                                                                                                              cxx.poptions += "-I$out_base" "-I$src_base"
                                                                                                                                                              
                                                                                                                                                              gen = foo.h bar.h
                                                                                                                                                              
                                                                                                                                                              ./: exe{prog1}: cxx{prog1.cc} hxx{$gen}
                                                                                                                                                              ./: exe{prog2}: cxx{prog2.cc} hxx{$gen}
                                                                                                                                                              
                                                                                                                                                              hxx{foo.h}:
                                                                                                                                                              {{
                                                                                                                                                                echo '#define FOO 1' >$path($>)
                                                                                                                                                              }}
                                                                                                                                                              
                                                                                                                                                              hxx{bar.h}:
                                                                                                                                                              {{
                                                                                                                                                                echo '#define BAR 1' >$path($>)
                                                                                                                                                              }}
                                                                                                                                                              

                                                                                                                                                              Where prog1.cc looks like this (in prog2.cc substitute foo with bar):

                                                                                                                                                              #include "foo.h"
                                                                                                                                                              
                                                                                                                                                              int main ()
                                                                                                                                                              {
                                                                                                                                                                return FOO;
                                                                                                                                                              }
                                                                                                                                                              

While this might look a bit impure (why does exe{prog1} depend on bar.h even though none of its sources use it), this works as expected. In particular, given a fully up-to-date build, if you remove foo.h, only exe{prog1} will be rebuilt. The mental model here is that the headers you list as prerequisites of an executable or library are a “pool” from which its sources can “pick” what they need.

                                                                                                                                                              I’ll do some reading and see how hard it would be to port it over to build2. It’s currently only about a hundred lines of CMake, so not so big that a complete rewrite would be painful.

                                                                                                                                                              Sounds good. If this is public (or I can be granted access), I could even help.

                                                                                                                                                              1. 1

                                                                                                                                                                Either way I think the bulk of the rules will still be the glue type simply because nobody will want to re-implement protoc or moc directly in the rule. Which means the problem is actually more difficult: it’s not just the rules that you need to worry about, it’s also the tools. I don’t think you will easily convince many of them to work without a local filesystem.

That’s increasingly a problem. There was a post here a few months back where someone had built clang as an AWS Lambda. I expect a lot of tools in the future will end up becoming things that can be deployed on FaaS platforms, and then you really want the build system to understand how to translate between two namespaces (for example, to provide a compiler with a JSON dictionary of name-to-hash mappings for a content-addressable filesystem).

I forgot to provide you with a link to Fabrique last time. I worked a bit on the design but never had time to do much implementation, and Jon got distracted by other projects. We wanted to be able to run tools in Capsicum sandboxes (WASI picked up the Capsicum model, so the same requirements would apply to a WebAssembly/WASI FaaS service): the environment is responsible for opening files and providing descriptors into the tool’s world. This also has the nice property for a build system that the dependencies are, by construction, accurate: anything for which you didn’t pass in a file descriptor cannot be accessed by the task (though you can pass in directory descriptors for include directories as a coarse over-approximation).
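
To sketch what that model looks like from the tool’s side (the fd numbers and the pre-opened-descriptor convention here are purely illustrative, not anything Fabrique actually specified), a sandboxed filter might do something like:

    /* Minimal Capsicum sketch: the parent (the build system) opens input and
     * output and passes the descriptors; the tool enters capability mode
     * before doing any work, so it cannot open anything it was not given. */
    #include <sys/capsicum.h>
    #include <err.h>
    #include <unistd.h>

    int main(void)
    {
        int in = 3, out = 4;  /* assumed fd numbers, chosen for illustration */

        cap_rights_t rights;
        if (cap_rights_limit(in, cap_rights_init(&rights, CAP_READ)) < 0 ||
            cap_rights_limit(out, cap_rights_init(&rights, CAP_WRITE)) < 0)
            err(1, "cap_rights_limit");

        if (cap_enter() < 0)  /* from here on, no new files can be opened */
            err(1, "cap_enter");

        /* ... the actual tool logic; here it just copies input to output ... */
        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, (size_t)n) != n)
                err(1, "write");
        return 0;
    }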

From this point of view, yes. But consider also this scenario: whoever is packaging libFoo for, say, Debian is not using build2 (because libFoo upstream, say, still uses CMake) and so has no interest in maintaining this mapping.

I don’t think that person has to care; the person packaging something using libFoo needs to care, and that creates an incentive for anyone packaging C/C++ libraries to keep the mapping up to date. I’d imagine that each repo would maintain this mapping. That’s really the only place where I can imagine it living without getting stale.

I’m more familiar with the FreeBSD packaging setup than Debian, so there may be some key differences. FreeBSD builds a new package set from the top of the package tree every few days. There’s a short lag (typically 1-3 days) between pushing a version bump to a port and users seeing the package version. Some users stay on the quarterly branch, which is updated less frequently. If I create a port for libFoo v1.0, then it will appear in the latest package set in a couple of days and, if I time it right, in the quarterly one soon after. Upstream libFoo notices and updates their map to say ‘FreeBSD has version 1.0 and it’s called libfoo’. Now I update the port to v1.1. Instantly, the upstream mapping is wrong for anyone who is building package sets themselves. A couple of days later, it’s wrong for anyone installing packages from the latest branch. A few weeks later, it’s wrong for anyone on the quarterly branch. There is no point at which the libFoo repo can hold a map that is correct for everyone unless they have three entries for FreeBSD, and even then they need to actively watch the status of the builders to get it right.

In contrast, if I add a BUILD2_PACKAGE_NAME= and BUILD2_VERSION= line to my port (the second of which can default to the port version, so only needs setting in a few corner cases), then it’s fairly easy to add some generic infrastructure to the ports system that builds a complete map for every single packaged library when you build a package set. This will then always be 100% up to date, because anyone changing a package will implicitly update it. I presume that the Debian package builders could do something similar with something in the source package manifest.
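
To make that concrete, the kind of fragment I have in mind in a port Makefile would be roughly the following (BUILD2_PACKAGE_NAME and BUILD2_VERSION are hypothetical; nothing in the ports framework defines them today):

    # hypothetical additions to a libfoo port Makefile
    PORTNAME=            foo
    DISTVERSION=         1.1
    BUILD2_PACKAGE_NAME= libfoo
    # defaults to the port version; only overridden when the two diverge
    BUILD2_VERSION=      ${DISTVERSION}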

                                                                                                                                                                Note that the mapping needs to contain versions as well as names because the version in the package often doesn’t directly correspond to the upstream version. This gets especially tricky when the packaged version carries patches that are not yet upstreamed.

                                                                                                                                                                Oh, and options get more fun here. A lot of FreeBSD ports can build different flavours depending on the options that are set when building the package set. This needs to be part of the mapping. Again, this is fairly easy to drive from the port description but an immense amount of pain for anyone to try to generate from anywhere else. My company might be building a local package set that disables (or enables) an option that is the default upstream, so when I build something that uses build2 I may need to statically link a version of some library rather than using the system one, even though the default for a normal FreeBSD user would be to just depend on the package.

While this might look a bit impure (why does exe{prog1} depend on bar.h even though none of its sources use it), this works as expected. In particular, given a fully up-to-date build, if you remove foo.h, only exe{prog1} will be rebuilt. The mental model here is that the headers you list as prerequisites of an executable or library are a “pool” from which its sources can “pick” what they need.

                                                                                                                                                                That is exactly what I want, nice! It feels like a basic thing for a C/C++ build system, yet it’s something I’ve not seen well supported anywhere else.

                                                                                                                                                                Sounds good. If this is public (or I can be granted access), I could even help.

                                                                                                                                                                It isn’t yet, hopefully later in the year…

                                                                                                                                                                Of course, the thing I’d really like to do (if I ever find myself with a few months of nothing to do) is replace the awful FreeBSD build system with something tolerable and it looks like build2 would be expressive enough for that. It has some fun things like needing to build the compiler that it then uses for later build steps, but it sounds as if build2 was designed with that kind of thing in mind.

                                                                                                                                              3. 2

                                                                                                                                                Not all small projects will necessarily grow into a large project. The trick is recognizing when or if the project will outgrow its infrastructure. Makefiles have a much lower conceptual burden, because Makefiles very concretely describe how you want your build system to run; but they suffer when you try to add abstractions to them, to support things like different toolchains, or creating the compilation database (I assume you’ve seen bear?). If you need your build described more abstractly (like, if you need to do different things with the dependency tree than simply build), then a different build tool will work better for you. But it can be hard to understand what the build tool is actually doing, and how it decided to do it. There’s no global answer.

                                                                                                                                                1. 4

                                                                                                                                                  This is the CMake file that you need for a trivial C/C++ project:

cmake_minimum_required(VERSION 3.20)
project(my-prog C CXX)
add_executable(my-prog src1.c src2.cc)
                                                                                                                                                  

                                                                                                                                                  That’s it. That gives you targets to make my-prog, to clean the build, and will work on Windows, *NIX, or any other system that has a vaguely GCC or MSVC-like toolchain, supports debug and release builds, and generates a compile_commands.json for my editor to consume. If I want to add a dependency, let’s say on zstd, then it becomes:

cmake_minimum_required(VERSION 3.20)
project(my-prog C CXX)
find_package(zstd CONFIG REQUIRED)
add_executable(my-prog src1.c src2.cc)
target_link_libraries(my-prog PRIVATE zstd::libzstd_static)
                                                                                                                                                  

This will work with system packages, or with something like vcpkg installing a local copy of a specific version for reproducible builds.
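
With vcpkg in manifest mode, the dependency itself would be declared in a vcpkg.json next to the CMakeLists.txt; a minimal manifest looks roughly like this (version pinning via baselines/overrides omitted):

    {
      "name": "my-prog",
      "version": "0.1.0",
      "dependencies": [ "zstd" ]
    }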

                                                                                                                                                  Even for a simple project, the equivalent bmake file is about as complex and won’t let you target something like AIX or Windows without a lot more work, doesn’t support cross-compilation without some extra hoop jumping, and so on.

                                                                                                                                                  1. 1

                                                                                                                                                    The common Makefile for this use case will be more lines of code (I never use bsd.prog.mk, etc., unless I’m actually working on the OS), but I think the word “complex” here obscures something important: that a Makefile can be considered simpler due to a very simple execution model, or a CMakeLists.txt can be considered simpler since it describes the compilation process more abstractly, allowing it to do a lot more with less.

For an example of why I think Makefiles are conceptually simpler, it is just as easy to use a Makefile with custom build tools as it is to compile C code. It’s much easier to understand:

%.c : %.precursor
	python my_tool.py $< -o $@
                                                                                                                                                    

than it is to figure out how to use https://cmake.org/cmake/help/latest/command/add_custom_command.html to similar effect; or to try to act like a first-class citizen and make add_executable work with .precursor files.
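
For comparison, a rough single-file CMake counterpart might look something like this (my_tool.py and the .precursor file are carried over from the example above; a sketch for one file, not a generic pattern rule):

    # generate foo.c from foo.precursor with the custom tool
    add_custom_command(
      OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/foo.c
      COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/my_tool.py
              ${CMAKE_CURRENT_SOURCE_DIR}/foo.precursor
              -o ${CMAKE_CURRENT_BINARY_DIR}/foo.c
      DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/my_tool.py
              ${CMAKE_CURRENT_SOURCE_DIR}/foo.precursor
      VERBATIM)

    # the generated source then participates in the build like any other
    add_executable(my-prog ${CMAKE_CURRENT_BINARY_DIR}/foo.c)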

                                                                                                                                                2. 2

CMake gets a lot of criticism, but I think a fair share of its problems is just that people haven’t stopped to learn the tool. It’s a second-class language for some people, just like CSS.

                                                                                                                                                  1. 2

There’s an association issue here too. Compiling C++ sucks. It is significantly trickier than in many other languages. The dependency ecosystem is far less automated too. Many dependencies are incorporated into a conglomerate project, and the build needs of those dependencies come along for the ride. The problems with all of these constituents then surface as symptoms of the top-level build utility for the parent project. If CMake had made its first inroads with another language, it would likely have a more nuanced reputation. Not that it doesn’t bring its own problems too, but it surely takes the blame for a lot of C++’s problems.

                                                                                                                                                1. 4

I’m hearing about Hanami for the first time. I’ve been browsing the website for a bit to figure out what it is about, but I’m having a really hard time. A lot of the content/text on both the website and in the introduction basically says nothing.

                                                                                                                                                  It’s evidently a web framework.

There are claims like “Full-featured, but lightweight”, where it says it uses less memory than other frameworks, with no information about which frameworks, why, how, or under what circumstances.

There’s also a claim of “Simple and productive”, describing how you can just start writing code, and yet Getting Started opens with “this learning process can be hard” in bold.

The “Download, Develop, Deploy in 5 minutes.” section, under “Develop”, just tells me how to add a dependency and how to commit all contents of a directory.

                                                                                                                                                  Overall, given that Ruby has no shortage of web frameworks I didn’t find any information that sets it apart.

                                                                                                                                                  1. 7

                                                                                                                                                    Hanami 2.0 is significant because it is basically a complete rewrite that uses dry-rb as the foundations of the framework.

                                                                                                                                                    At a very high level, think of this as a web framework heavily inspired by FP rather than Smalltalk or Java. It emphasizes immutable data structures and function composition.

                                                                                                                                                    The most important, fundamental abstraction is dry-types. Instead of ad-hoc, procedural type checking, you instead write type objects with an expressive DSL that produces composable functions. These functions are understood by the rest of the system as constraints.

                                                                                                                                                    Building on types, dry-validation is an expressive DSL for validating complex Hash structures like JSON data. Schemas contain keys and type values, and can be composed together. Validations encapsulate schemas with more complex rule logic. Keys and values are coerced into a standardized format as a byproduct of validation.

                                                                                                                                                    Validated data is represented as immutable struct objects. Structs don’t get instantiated until they are validated, so you never represent struct data in an invalid form.

Hanami uses ROM as the persistence layer, which follows all of these principles as well: your database structure is not tightly coupled to your domain objects and there are abstractions in place to make moving between them easy. Your domain objects are immutable, dumb values and your persistence logic lives in an entirely different place. You can write Repository objects to be as simple or complex as you need. Querying is done via a Relation object separate from everything else but easily accessible within a Repository.

                                                                                                                                                    Business logic is expressed as functional Operation objects that share a result type via dry-monads. Operations have a very convenient “do notation” syntax that unwraps monads with yield, flattening out results from other functions into a top-to-bottom execution that halts on failure without dealing with the downsides of exceptions. Monads are very convenient wrappers to pattern-match against.

                                                                                                                                                    All dependencies in the system are addressable via dry-container as a string key, and are injected as arguments using dry-auto_inject. This means that your Operation classes act like IoC containers, and the instances they produce act as functions. The container system understands how to construct your function objects, so you never have to do it manually.

                                                                                                                                                    Your containers can be split up according to business domain as Slices. These slices are testable in isolation from one another, providing a clean separation of concerns without dealing with the overhead of e.g. Rails Engines.

The project structure itself is provided by dry-system, so you have complete control over every aspect of how your code gets loaded and where it lives. Maybe one slice is more convenient to develop with code reloading, but another slice can just be required once. That is under your control.

                                                                                                                                                    Environment settings are strictly type-checked with the same dry-types objects you use everywhere else.

                                                                                                                                                    Providers are kind of like Rails initializers, except they can be loaded from gems and they describe a dependency graph between them. This allows you to cleanly instantiate your system dependencies without doing dumb stuff like prefixing initializer files with numbers to get them to load in a particular order.

                                                                                                                                                    1. 3

                                                                                                                                                      Thank you! :)

It would be great to make that information available on the website. I know it’s a bit out of fashion to have meaningful, informative websites rather than generic headings, seemingly random claims, fancy words and “who is using” lists, but I think I might not be the only one who’d prefer actual information upfront.

In the best case, like good man pages, they would even list shortcomings, or at least non-goals.

                                                                                                                                                      Sadly it seems that these things don’t sell.

                                                                                                                                                      Anyways, thanks again for writing a summary.

                                                                                                                                                      1. 2

I wouldn’t say Hanami lacks inspiration from Java or Smalltalk; e.g. the IoC bits are very similar. In general I’d say that Hanami relies on different paradigms for each layer rather than pretending one paradigm can properly address every problem. Disclaimer: I’ve only built one small project with Hanami.