1. 79
    1. 23

      Not sure that satire is the right label here? Humour maybe. But it’s sage advice.

    2. 13

      I found https://reidjs.github.io/grug-dev-translation/ to be much easier to read.

    3. 8

      Surprisingly good post, once I got over the Grug speak. (Funnily enough, what bothered me the most was not the Grug speak itself, but its sometimes inconsistent use. But I suspect writing good Grug speak is actually fairly hard.)

      however, one thing grug come to believe: not factor your application too early!

      Yup. Casey Muratori spoke of semantic compression: start stupid, and factor only when patterns emerge. That way you only build abstractions you actually need.
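
      Something like this, in spirit (a toy example of my own, not from the article): write the concrete thing twice, and only pull out the shared shape once it has actually appeared.

          // Two renderers written straight-line first; the duplication is visible but cheap.
          function renderUserCard(user: { name: string; email: string }): string {
            return `<div class="card"><h2>${user.name}</h2><p>${user.email}</p></div>`;
          }

          function renderProductCard(product: { title: string; price: number }): string {
            return `<div class="card"><h2>${product.title}</h2><p>$${product.price}</p></div>`;
          }

          // Only after the pattern has shown up twice do we compress it into the helper
          // the code was already asking for; no speculative CardFactory up front.
          function renderCard(heading: string, body: string): string {
            return `<div class="card"><h2>${heading}</h2><p>${body}</p></div>`;
          }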

      good cut point has narrow interface with rest of system: small number of functions or abstractions that hide complexity demon internally, like trapped in crystal

      Yes. Yes! Possibly the most powerful programming advice I know of. And go read Ousterhout’s A Philosophy of Software Design, or at least watch one of his talks.
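
      A tiny sketch of what a good cut point looks like in code (the cache and its names are my own invention, not from the book): two public functions, with everything scary hidden behind them.

          // The whole interface: two functions. Eviction policy, recency bookkeeping and
          // anything else scary stays trapped behind it and can change without touching callers.
          interface Cache<V> {
            get(key: string): V | undefined;
            set(key: string, value: V): void;
          }

          function makeLruCache<V>(capacity: number): Cache<V> {
            const entries = new Map<string, V>(); // Map preserves insertion order
            return {
              get(key) {
                const value = entries.get(key);
                if (value !== undefined) {  // refresh recency on a hit
                  entries.delete(key);
                  entries.set(key, value);
                }
                return value;
              },
              set(key, value) {
                if (entries.has(key)) entries.delete(key);
                entries.set(key, value);
                if (entries.size > capacity) {
                  entries.delete(entries.keys().next().value!); // evict least recently used
                }
              },
            };
          }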

      [parsing]

      I used to search for the one true parsing paradigm. I still think a good parser generator can be very usable… if it is ultimately made to resemble recursive descent or PEG. But since I’d rather design mostly LL(1) grammars to begin with, what do I even need my fancy generator for?
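
      For what it’s worth, hand-rolled recursive descent for a toy LL(1) grammar is about this much code (my own sketch; sums and parentheses only, no whitespace handling):

          // expr -> term ('+' term)*
          // term -> NUMBER | '(' expr ')'
          function parseExpr(input: string): number {
            let pos = 0;
            const peek = () => input[pos];
            const eat = (ch: string) => {
              if (input[pos] !== ch) throw new Error(`expected '${ch}' at ${pos}`);
              pos++;
            };

            function expr(): number {
              let value = term();
              while (peek() === "+") { eat("+"); value += term(); }
              return value;
            }

            function term(): number {
              if (peek() === "(") { eat("("); const v = expr(); eat(")"); return v; }
              let digits = "";
              while (peek() >= "0" && peek() <= "9") { digits += input[pos]; pos++; }
              if (digits === "") throw new Error(`expected a number at ${pos}`);
              return Number(digits);
            }

            const result = expr();
            if (pos !== input.length) throw new Error(`trailing input at ${pos}`);
            return result;
          }

          // parseExpr("(1+2)+40") === 43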

    4. 7

      This seems to me to be entirely correct. Especially the section on recursive descent and visitors.

      Perhaps underappreciated here is the power of immutable domain types and pure functions. Minimize state transitions at nearly any cost.
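
      Roughly this style, sketched (domain and names are made up for the example): model the domain with readonly types, and express transitions as pure functions that return new values instead of mutating anything.

          // Immutable domain type: every field is readonly, there is no in-place mutation.
          type Order = {
            readonly id: string;
            readonly lines: readonly { readonly sku: string; readonly quantity: number }[];
            readonly status: "open" | "paid";
          };

          // Pure functions: same inputs, same outputs, nothing else touched.
          const orderTotal = (order: Order, priceOf: (sku: string) => number): number =>
            order.lines.reduce((sum, line) => sum + priceOf(line.sku) * line.quantity, 0);

          // A "state transition" is just a new value; the old order remains valid.
          const markPaid = (order: Order): Order => ({ ...order, status: "paid" });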

    5. 5

      I think there are some good points here, but it’s disappointing to see how much time is spent talking about the dangers of complexity while the section on type systems ignores the way that types can greatly simplify code.

      some type big brain think in type systems and talk in lemmas, potential danger!

      danger abstraction too high, big brain type system code become astral projection of platonic generic turing model of computation into code base. grug confused and agree some level very elegant but also very hard do anything like record number of club inventory for Grug Inc. task at hand

      What I think this ignores is that the ROI of understanding some of the more “big brain” type system things is that they are broadly applicable and reduce the cognitive effort of working at higher levels of abstraction.

      I get the point that this stuff can be a bit hard to learn, but not everything that’s hard to learn is without value. I think the value of types is really underrepresented here.

      1. 4

        One thing Grug almost understood here is that the major benefit of type systems is how they tighten the feedback loop. Auto completion is an extremely tight feedback loop, which causes you to write the right method name before you even have to think about it. Static typing catches errors at compile time, well before you run any test. Sum types allow you to make many erroneous states unrepresentable, so again the compiler catches stuff before you run anything. Even generics (the OCaml/Haskell kind) can make sure your generic code doesn’t try to inspect the internals of the types it handles — again many errors caught at compile time, close to the source of the error.
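
        To make the “unrepresentable states” point concrete, here is a tiny sketch using a discriminated union, TypeScript’s flavour of sum types (the example is mine):

            // A request is exactly one of these three shapes; "loaded but also carrying an
            // error" simply cannot be written down.
            type Fetch<T> =
              | { state: "loading" }
              | { state: "loaded"; data: T }
              | { state: "failed"; error: string };

            function describe<T>(f: Fetch<T>): string {
              switch (f.state) {
                case "loading": return "still working…";
                case "loaded":  return `got ${JSON.stringify(f.data)}`;
                case "failed":  return `oops: ${f.error}`;
                default: {
                  const unreachable: never = f; // compile error if a new state is added and forgotten here
                  return unreachable;
                }
              }
            }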

        Where Grug is right to see danger however is when we use type systems in a way that doesn’t tighten the feedback loop. Without that heuristic, it’s easy to fall into the trap of serving the type system instead of making sure it serves us.

    6. 5

      I like this piece a lot, and it seems widely misunderstood.

      Complexity is a big problem.

      Big problems are big and there are many of them. We should try to keep problems small and to have fewer problems.

      Making programs do many things makes them big and complicated. It is better to have many small simple programs.

      Operating systems are programs.

      Programs go wrong.

      Smaller simpler programs are easier to fix than big complicated ones.

      So, it is better to have many small programs than fewer big ones.

      That means it is better to have many smaller simpler operating systems, than fewer bigger more complicated ones.

      It sounds like it should be easier to have one big general program so people only need to know that one, but that is not true.

      So it was appealing to have one big general operating system (you choose: Unix in general, or FOSS Unix, or Linux in particular) but that’s made life more difficult because it’s now very big and complicated.

      We were better off with more smaller simpler programs.

      OSes are programs. So, we were better off with more smaller simpler OSes.

      We could still go back.

      We should.

      1. 4

        We were better off with more smaller simpler programs.

        In general, sure. But if you have too many programs that are too small, when combining them the complexity just moves into the “glue”. This may be even harder to debug (pipefail etc).

      2. 4

        We could still go back.

        Not on our own.

        I see two big causes of complexity: letting it accrete over time, and the ludicrous diversity of hardware interfaces. Can’t make a simple OS when there are so many different devices out there that need to be driven. So the first order of business should be to standardise hardware interfaces so one driver per device type is enough. One graphics card ISA, one web cam protocol, one printer interface… you get the idea.

        Once that’s done, writing a new OS, free of all this legacy, will be possible. Right now though, the best we can do is cut down hardware support, so much so that we cannot hope to gain much traction in practice. And I’m not even talking about legacy software. Though I guess it’s less of a problem if we keep it simple — our goal all along.

        1. 2

          And that dream of one interface to rule them all will practically mean “you’ll have a blob on the device that does the actual conversion from your API to what the hardware uses.” Anything else will immediately run afoul of XKCD Standards.

          1. 1

            Considering hardware already has similar blobs right now, I’d consider this a win.

            I understand hardware is complex and quirky. Just please, hardware vendors, please carry the burden of that complexity yourselves, and give me a nice standard ISA (or wire protocol) to work with. Bonus points if you can make it simple as well. Do that, and I will not even care about your inscrutable proprietary binary firmware blob.

        2. 1

          Does it, though?

          OpenVMS only supports a single machine, but a handful of the leading hypervisors. That’s a very pragmatic start.

          If someone wrote a modern Novell Netware, say – a blisteringly quick file server that did nothing else – would it need drivers for anything except a few disk interfaces and a handful of NICs?

      3. 1

        That means it is better to have many smaller simpler operating systems, than fewer bigger more complicated ones.

        Sounds like microkernels like MirageOS

        1. 1

          Sounds like microkernels like MirageOS

          TBH, not to me it doesn’t, no.

          AIUI, these are tools to produce standalone applications intended to run in virtual machines. I am talking about full function complete stand-alone operating systems which may if desired run on the bare metal.

          I am not aware of any persuasive reason why, for example, it would be good design or good design practice to have the same operating system driving a hypervisor, a desktop computer, a pocket computer, a mobile phone, a watch, or an embedded device such as a router.

          And yet, this is the accepted wisdom today.

          1. 1

            I am talking about full function complete stand-alone operating systems which may if desired run on the bare metal.

            Operating systems that can run on bare metal have a lot of complexity because they are interacting with hardware. That’s essential complexity. You don’t want your kernel to suddenly panic and hard crash your system because it sees an interrupt it doesn’t know how to handle.

            1. 1

              Is it, though?

              Why not just target VMs? Then you have a known platform. All you need are virtual devices: virtual network connections, virtual file systems.

              If the role your OS performs does not need entire classes of hardware, then don’t initialise that kit. Don’t even attempt to drive it.

              1. 2

                Now you are talking about a completely different class of software. Initially, you said:

                I am talking about full function complete stand-alone operating systems which may if desired run on the bare metal.

                Now, you are saying:

                Why not just target VMs? Then you have a known platform.

                Which one do you mean?

                1. 2

                  (insert “why not both?” meme)

                  Why not? Start with one; if it succeeds, add support for specific hardware.

                  If it doesn’t need it, why support hardware? It’s a dozen years since my last tech-support role, but I discovered that it was standard practice by then to just bung a copy of the free version of VMware ESXi onto all new servers, then install the OS on top of that. Standard virtual hardware, standard drivers, easier backup and restore, easy transfer onto dissimilar hardware.

                  This is how big IBM iron works: z Series mainframes have the PR/SM hypervisor right in their firmware. POWER servers have an equivalent: a firmware hypervisor providing LPARs by default.

                  It’s close to impossible to run on the bare metal without some level of virtualisation on some kit. Why not on COTS PC kit?

                  1. 1

                    If it doesn’t need it, why support hardware?

                    This is the exact question that MirageOS answers, by saying ‘dump all the extra stuff and package the entire operating system and the application into a single binary that can run on top of a virtualization toolkit’.

                    1. 1

                      While I can see that as a useful tool for a microservice, what if I want a web browser, a text editor, and a few other things, all at once?

                      1. 1

                        Because different tools target different use-cases. Mirage and other microkernels offer a solution to the problem of all-inclusive, self-sufficient backend services which need to run on top of virtualized hardware. They don’t claim to target every other use case.

                        1. 1

                          Right? So maybe there is plenty of room for more. Maybe unikernels are just one answer, for a certain narrow category of apps such as web services.

                          And maybe we can learn from this… there is a standard hardware platform out there, and it’s VM-emulated hardware. So all we need is a FOSS super-thin hypervisor that could go in the firmware, like a smaller, simpler FOSS VMware ESXi. And then the other OSes can go on top and run in a VM, with the option of bypassing it completely if needed… but otherwise, they can run on anything with the hypervisor, which is itself just another specialist niche OS.

    7. 5

      I think this post could benefit significantly from a definition of the word “complexity”…

      1. 15

        Is spirit demon that enter codebase if grug not take care.

    8. 3

      I really enjoyed this post! I wish I had better tools to convey how evil complexity is! One of the things that I’ve noticed is that complexity can be introduced by trying to avoid complexity. This simple thing, we can just use yet another npm library; that other thing, there’s a tool that can generate all those files; there’s a SaaS that does X so all we need to do is integrate it… and just like that we have 200+ dependencies (plus 30 medium and 5 high vulnerabilities, as NPM now so graciously informs us), we’ve added 3k generated LOC to our codebase that nobody understands, and put ourselves at the mercy of some SaaS provider not making any breaking changes in their API (not counting availability, rate limiting, cost…)

      And all of it was well intentioned to make something “simpler”…

      While there might be legitimate uses for the examples I’ve given, way too many times we incur exorbitant costs by trying to avoid building something that could have been built in 1 week or less…

      But it’s so hard to explain this to other devs or PMs, at least I find it hard, because it seems so intangible :(

    9. 3

      grug note lots of fads in development, especially front end development today

      I think with frontend development, most “inventions” in the last five years have been largely superficial.

      1. 7

        Yeah, it’s unfair that frontend development keeps getting criticized for being fad oriented. I do think it’s accurate. But I don’t think it’s a skill issue or worthy of shame. The improvements are superficial and there’s a lot of churn simply because the problems haven’t been solved yet. In comparison to many backend tasks:

        • Browser and device/resolution compatibility is too complex to be addressed simply. Big strides have been made, but it’s hard to see tooling as anything but frustrating complexity when you haven’t struggled with the problem it solves. Most people’s personal projects don’t reach the point where they can’t ignore those problems anymore.
        • Human factors like accessibility, design, ux, language/internationalization, and “the-user-is-a-drunken-toddler” are omnipresent. Not that they don’t exist elsewhere too, but they aren’t an implicit constraint in everything. The elegant solution you have always falls over when there are that many competing constraints.
        • The realm is still largely driven by enthusiasts compared to backend web dev. This means fashions are natural as we experiment with new solutions. It also means many projects are on more equal footing. Open source tooling for backend may start non-commercial, but because large frameworks almost always solve the problems of scale or enterprises they inevitably become commercial. This means there’s a vested interest in slowing down experimentation (and therefore fashion) for stability or to keep the money flowing.
        • Being sandboxed into a browser, being reliant on backend apis, and having non-opt-in security means the list of “default” features you need is massive. We don’t even think to call them out anymore…they’re included in bundles or defaults in generated boilerplates. You can’t really roll your own for everything or opt out of certain things like you can when you control your system’s universe/environment. You can’t really even isolate things in the same way, though it’s worth trying.
        • People are largely unsatisfied with it. Open problems lead to more attempts at a solution. Not that there aren’t open problems on the backend, but the everyday humdrum widely-applicable problems at least have a solution that generally works.

        That’s not to say things shouldn’t improve. The writer of The Grug Brained Developer is also the developer of htmx, which is an important attempt at improvement. But I hate that it’s always framed from the viewpoint of “we’re not like those other solutions” when they exactly are: it’s an experiment to tame complexity and make frontend dev either easier or produce actually functional software. I support this fashion because I like it, but there’s nothing more legitimate about this particular fashion.

        1. 2

          This was quite insightful. So, with frontend programming, there isn’t much in the way of massive problems, and there are only limited areas to innovate in.

          Are there any other fields like that in programming? I suppose business-software programming-language developers (VBA, for example) don’t have much room to innovate because most of the development ecosystem is out of bounds for them.

          It sounds pretty weird that frontend development is so widely popular, yet it has some odd artificial restrictions imposed by browser design and “good SEO practices”. So, quite oddly, innovation here is usually just about personal preferences and not about universal utility. Which is literally what fashion means.

        2. 1

          “we’re not like those other solutions” when they exactly are: it’s an experiment to tame complexity and make frontend dev either easier or produce actually functional software.

          That’s a bit reductive, no? htmx’s purported uniqueness is in its technical approach: instead of taking inspiration from native GUI toolkits and having client-side JavaScript build the UI (with SSR added on top), htmx builds on the plain “links-and-forms” model of the web and has the server drive the app by sending HTML responses that encode the possible interactions.

          We’re all trying to make web dev better. What else would we do? Make it deliberately worse?

          1. 3

            I don’t think so? But the question makes me think I was unclear. I fully support and even prefer htmx’s approach. I like the drive towards simplicity. Moving functionality to an area with fewer limitations and centralizing logic to minimize moving pieces is a great way to do that. I have 2 live projects using it and it does what it promises.

            My point wasn’t that htmx isn’t valid. It’s that it’s equally valid as an experiment in how to build frontend software. Since there hasn’t been a clear pattern that has emerged as the “best” way to tackle things, I don’t like the chastising of other approaches when they are simply making different tradeoffs. Many problems haven’t been solved yet. It’s reductive to those problems that people are trying to solve. We can talk about the pros and cons of approaches and make tradeoffs without shaming the industry, and we can encourage more experimentation.

            We’re all trying to make web dev better. We should be deliberately experimenting. Not writing off entire ideas as “fad based” in an attempt to frame one as separate from that social constraint. My gripe isn’t towards htmx at all, it’s at the general treatment of frontend development as broken rather than “in progress.”

            1. 1

              I understand now, and I agree with your view. Thanks for clarifying.

      2. 1

        most “inventions” in the last five years have been largely superficial.

        If you were to extend that to the last 30 years I would agree with you.

        Reducing things down to, for example, desktop OS design, I will submit there have been only two (2) important innovations in this time:

        • Using a 3D accelerator to composite together desktop windows for display.
        • Global system fulltext search.

        Both originated within Apple’s Mac OS X project.

    10. 3

      As much as I like this post, I wouldn’t divide the developers world between “grug” and “big brain”.

      I think that a big part of the useless complexity we face (especially in web development) is due to developers’ craving for self-importance and career/salary improvement. All this complexity created job opportunities and salary increases. And companies of all kinds/sizes just have to deal with that because it’s “modern” (and that’s where Fear Of Looking Dumb plays its role).

      I think what we need to accept is:

      • No, all dev jobs won’t be as technically challenging as being a data engineer at Facebook or front-end engineer on Google Docs; most companies just need a proper website/webapp shipped in no time with nice extensibility and good ability to change according to user feedback
      • This whole deal is not about us: UX > DX
      • This whole deal is really not about us: this complexity and its costs implications killed many startups and many cool projects that deserved better
      1. 2

        I believe the Grug vs Big Brain dichotomy is more about self perception and programming philosophies than it is about actual abilities. Mostly, Grug is aware of his own limitations, and Big Brain is not. Often, Big Brain is smarter than Grug, but their defining characteristic is their distinct lack of wisdom when it comes to actually aiming their (sometimes) superior intellect.

        I met one such Big Brain once. His cognitive power, as far as I can tell, exceeded my own by a fair margin. He was able to work with much more complexity than I could. On the flip side, he was blind to simplicity. I once had to work on his code, and it was a mess. I simplified it while adding a feature, and asked for his help with a bug (which was eventually easily fixed). His immediate reaction was that of course my code was too small, it was obviously missing stuff, and that’s why I had my bug (nope). He had a simple alternative to his code staring at him, and he did not believe it.

        This attitude, I believe, is a major source of complexity. People who make complex things because they just don’t know any better, and then argue from ignorance that it has to be that complex. Fools, the lot of them.

      2. 2

        I’m not so sure it’s developers adding complexity for their own benefit. I don’t think most developers like this junk either; I think it’s driven by a perception that you’re doing it wrong unless you have ten layers of abstraction everywhere.

        1. 2

          I have seen such cynicism with my own eyes. People who ended a React v. htmx debate with arguments like: “I don’t like server-side templates DX”, “I would be willing to switch if you’re willing to rewrite your entire Django back-end with Javascript”.

          I have also seen people fear becoming useless because of htmx, because their jobs wouldn’t be needed anymore. Absolutely no sense of “the goal of our jobs is to solve users’ problems and, by that, make our business prosper”, no sense of “I am a developer or a software engineer, not just a single-technology programmer”.

    11. 1

      I read it again every time it’s posted here or on HN.

      It never gets old.

    12. 1

      instead grug try to limit damage of big brain developer early in project by giving them thing like UML diagram (not hurt code, probably throw away anyway) or by demanding working demo tomorrow

      :chef-kiss:

    13. 1

      Hmm… I don’t know how best to answer this, but all things are true within their contexts - this doesn’t escape that.

      And of course there are many interpretations of this, as it’s a general calling card to all grumpy devs worldwide, so anyone can say this means anything.

      Complexity is half the battle.

      Our software lacks crucial features like systems for real (and safe!) end-user customization to get real work done.

      We need to be bold & imagine systems that are not only simpler, but also more featureful than what we have now.

      It’s often the case that complex systems are ad-hoc solutions that yearn to be use cases of a much simpler & more general system.

      1. 1

        Our software lacks crucial features like systems for real (and safe!) end-user customization to get real work done.

        We need to be bold & imagine systems that are not only simpler, but also more featureful than what we have now.

        Sometimes what it takes to be featureful is the right kind of simplicity. Compare TPM 2.0 with the Tillitis key for instance. While the TPM is a complexity monster with 300 pages’ worth of high-level specs, the Tillitis key specifies only 3 things: a RISC-V ISA to run arbitrary programs, a protocol to load said program, and a way to have a secret unique to the (key, program) couple (HASH(key_secret, program)). One brings the kitchen sink with it, while the other lets users make whatever part of the kitchen sink they need.
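
        If it helps, the shape of that derivation is roughly the following (a conceptual sketch only; the real Tillitis key uses its own hash construction and inputs, and these names are mine):

            import { createHash } from "node:crypto";

            // Per-(device, program) secret: the same program on another device, or a modified
            // program on the same device, derives a completely different secret.
            function deriveAppSecret(deviceSecret: Buffer, program: Buffer): Buffer {
              const programDigest = createHash("sha256").update(program).digest();
              return createHash("sha256").update(deviceSecret).update(programDigest).digest();
            }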

        One might argue that the Tillitis key cheated by pushing complexity to the users, but even so, user-level complexity remains less than what I had to suffer when working with a TPM. User customization in the service of simplicity, isn’t that beautiful?

        1. 1

          That’s a great example of the kind of stuff I’m talking about. I just hate when people use “complexity” as an excuse to avoid all improvements in software.

    14. 1

      We are building software for humans. It will always be complex. You can’t tackle complex problems with simple thinking.

      1. 4

        I disagree. Complex problems are best approached with simple thinking, allowing things to become only just as complex as the situation requires.

        Unix command composition with pipes is my favourite example of this. It’s such a simple approach, and it has allowed people to build sufficiently complex systems for decades. It worked back when people were using Unix on PDPs for text processing; it works today when people are using it to compose AWS commands.

        This pattern crops up again, and again, in life: build from the simplest foundations possible. Don’t try to bake the complexity of your target systems into your fundamental components. Let the complexity emerge in the final use cases, allowing people to focus on their domain (not yours, the framework author’s!) and limiting the blast radius of accidental complication to a single project.

        1. 2

          Strange, but I think we are on the same page :)

          If something seems simple, I’d immediately think that the complexity is hidden in one of the underlying layers. Think about how many switches curl, grep, or ffmpeg have. If you manage to assemble some “simple” one-liner, there is a tremendous amount of work done beforehand to make that result possible.
