1. 25
  1. 40
    1. The dependencies of a system are part of that system and cannot be treated as a black box. The maintenance of the entire system is your responsibility.

    …and then the first item on the list is “package manager”. As I understand it, the whole point of a package manager is to treat dependencies as a black box, so I can say “install GIMP” (or whatever) and all the required dependencies magically appear without my having to understand them.

    Simplicity is definitely a noble goal, but I’ve seen a bunch of simplicity manifestos over the years, and they usually boil down to “all the things I’m comfortable with and none of the things I don’t care about”, and that’s not really simple, that’s just convenient. Real simplicity hurts to use, like Forth, or the Lambda Calculus, which is why we invented all these complex insulating layers like package managers and interactive shells and alphanumeric keyboards to begin with.
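
    To make that last point concrete, here's a toy sketch (Python, purely illustrative, not from the article): Church numerals, the lambda-calculus trick of encoding numbers as bare functions. The kernel is about as simple as computation gets, and using it hurts exactly as promised.

    ```python
    # Church numerals: the number n is "apply f n times". Nothing but
    # functions -- a minimal core that makes even small arithmetic painful.
    zero = lambda f: lambda x: x                      # f applied 0 times
    succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    one = succ(zero)
    two = succ(one)

    # Escape hatch back to ordinary ints: count applications of (+1).
    to_int = lambda n: n(lambda k: k + 1)(0)
    print(to_int(add(two)(two)))  # 4
    ```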

    1. 4

      I’ve seen a bunch of simplicity manifestos over the years, and they usually boil down to “all the things I’m comfortable with and none of the things I don’t care about”

      That’s a good point, and spot-on. I was interested to note that the author starts off saying that simple software breaks in simple ways; complex software breaks in complex ways.

      I think that might be getting closer to the truth: it’s not about the tools, it’s about your interaction with the technology over time. If you could create a solution gluing wires on a breadboard, as long as you never came back to it, would it matter what tools you had used?

      We may be looking at the wrong end of the stick here. We keep looking at languages, platforms, and tools in terms of features and attributes. Their real value (or not) lies in how they interact with humans over the full development lifecycle.

      1. 4

        Forth hurts to use

        And we love the pain.

        1. 3

          It doesn’t say the installation of your system is your responsibility, but the maintenance of your system. That means that, when a package breaks, you need to be prepared to find out why and fix it. The package manager can resolve and manage dependencies for you, so long as you’re aware of what those dependencies are and accept that they come along.

        2. 11

          http://suckless.org/ might be a useful point of comparison and contrast.

          1. 14

            Simple software gets a lot less simple when it meets the complex reality of the real world, or else it just passes that complexity on to the user. Many of these minimal tools are the Unix equivalent of spherical cows, IMHO.

            1. 8

              libc

              STATUS: NOT YET

              Uh, musl?

              1. 9

                There are many items that could be added here. For example, for the base system there’s sbase from suckless; I have no idea why Drew is re-implementing his own.

                • For init system there’s runit, as well as some others like OpenRC (I have no experience with the latter though).
                • C compiler: tcc
                • GUI: fltk
                • Package manager: xbps, pacman
                • Shell: dash, mksh
                • High-level programming: lua is probably the best/simplest here.

                And probably some more … this is just off the top of my head.

                1. 6
                  • cproc is nicer than tcc in my opinion.
                  • janet is nicer than lua in my opinion.

                  But overall I agree.

                  1. 5

                    As the current maintainer of sbase, I’m also a bit confused about the re-implementation of UNIX tools. As far as I can tell, the main design goal differences are:

                    • To not reuse code between utilities, which I think is a mistake. Many tools use the same flags for symlink behavior for recursive operations, have to deal with mode strings in the same way, etc.
                    • To behave randomly when behavior is implementation-defined. It is already hard enough to get the rest of the world to work with plain POSIX tools. I’ve spent quite a while making sure sbase works well with various scripts out there, for example building the Linux kernel, and I don’t see the point of making this harder than it needs to be.

                    However, I think the addition of a test suite is great. It’d be neat to see if it could be extracted into a standalone project, similar to how libc-test can be used for multiple libcs.

                    1. 2

                      The first goal, to not reuse code, is definitely a mistake, but I think that will become clear to them as they write more tools.

                2. 12

                  Is this a joke?

                  1. 7

                    Constraining the user to simple tools encourages creativity and allows the system to be more quickly understood. This supports the removal of complicated features.

                    “Encourages creativity” is the wrong goal. People use their computer because they want to Get Shit Done™, not as some sort of creative outlet, or to understand their system.

                    1. 10

                      Yeah, sorry, we don’t support WiFi, but you can use IrDA creatively via a bunch of mirrors to get wireless communication. It is a lot easier to understand how to encode it than the complex WiFi schemes.

                    2. 11

                      I feel like this is based on a really broken definition of simple. A better definition of simple is doing all the work up front so that your end users can do powerful things without a lot of deep work or understanding, and without the gruesome bugs that come from making non-Daniel-Bernstein types use overly low-level languages.

                      Progress is more people doing interesting things; it has nothing to do with minimizing CPU cycles or lines of code.

                      Inspiration:

                      • Browsers, Microfiche readers
                      • SQL, Excel, Azure Data Explorer
                      • R, Julia, Spark
                      • Processing, D3.js
                      • AWS Lambda/Fargate
                      1. 8

                        The thing you are describing is called being easy, not simple. That is a fundamental difference.

                      2. 6

                        Unix is practical but I don’t think it’s particularly simple. I can see the value in rewriting most of the stack that we depend on to be “simpler” (in terms of software complexity), but the underlying complexity of the model won’t disappear. If you’re going to rewrite everything anyway, I think it’s worth moving away from the Unix model and experimenting with “better” alternatives.

                        For instance, nearly everything is headed in the direction of dataflow based processing these days (from distributed data processing pipelines to TensorFlow graphs to Reactive user applications). Unix’s model doesn’t go to great lengths to make this kind of processing simple[1]. With discipline, it’s possible to do a certain amount of lazy (only-recompute-the-part-of-the-graph-you-need) computation with make, but it’s a brittle workflow that’s prone to mistakes. If dataflow processing were built in, think of what other parts of the system could be built simply. Everything from the network stack to the GUI could be built on efficient, lazy graph computations. An OS kernel and most user applications are just reactive applications with some persistent data storage, so this could greatly simplify the overall system.
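
                        For a flavor of what “only recompute the part of the graph you need” could look like as a first-class primitive, here’s a minimal pull-based sketch (Python; the Node class and its API are invented for illustration, not taken from any particular library):

                        ```python
                        # Minimal pull-based dataflow: each node caches its value and
                        # recomputes only when something upstream has been invalidated,
                        # the way `make` rebuilds only stale targets.
                        class Node:
                            def __init__(self, compute=None, inputs=()):
                                self.compute = compute      # function of the input values
                                self.inputs = list(inputs)  # upstream nodes
                                self.dependents = []
                                self.dirty = True           # needs recomputation?
                                self.value = None
                                for node in self.inputs:
                                    node.dependents.append(self)

                            def invalidate(self):
                                # Mark this node and everything downstream as stale.
                                if not self.dirty:
                                    self.dirty = True
                                    for d in self.dependents:
                                        d.invalidate()

                            def set(self, value):
                                # For source nodes: setting a value dirties dependents.
                                self.value, self.dirty = value, False
                                for d in self.dependents:
                                    d.invalidate()

                            def get(self):
                                # Pull: recompute only if stale, then cache.
                                if self.dirty:
                                    self.value = self.compute(*(n.get() for n in self.inputs))
                                    self.dirty = False
                                return self.value

                        a, b = Node(), Node()
                        a.set(2); b.set(3)
                        total = Node(lambda x, y: x + y, inputs=(a, b))
                        print(total.get())  # 5 (computed)
                        print(total.get())  # 5 (cached; nothing recomputed)
                        a.set(10)
                        print(total.get())  # 13 (recomputed because a changed)
                        ```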

                        Also, C is not the simplest language. Sure, it’s simpler than C++ (which is only the most complicated language in existence), but there are still plenty of unintuitive foot-guns that add to cognitive load. Yes, programmers should care about performance, but if the focus is on simplicity (and I’m assuming correctness), performance ought to take a backseat.

                        [1] This wasn’t the intended purpose of Unix, obviously, but it still hasn’t grown particularly well into this model. Just look at the organizational problems in data science applications for examples of some of the problems.

                        1. 2

                          I happen to agree with you regarding Unix’s drawbacks in general.

                          But I’m a bit confused by this:

                          For instance, nearly everything is headed in the direction of dataflow based processing these days (from distributed data processing pipelines to TensorFlow graphs to Reactive user applications).

                          First off, is “Reactive” a brand name, or a type of user application? Googling doesn’t really help, because I can’t add a qualifier to “reactive”.

                          Next, what basis is there in saying that dataflow based processing is the inevitable way of the future? I really don’t know much about this space, so I’d be happy to hear about concrete examples.

                          1. 4

                            Reactive

                            Whoops, I didn’t mean to capitalize that. I was referring to reactive programming (see the link from @cos). Specifically React.js.

                            Next, what basis is there in saying that dataflow based processing is the inevitable way of the future

                            Pretty much anything that is done as a graph computation can be represented cleanly in a dataflow language. The most popular example is the spreadsheet (powerful, flexible, and user-friendly, albeit difficult to maintain). Spreadsheets’ success is indicative of some merit in the underlying model, so they warrant some attention. Another prevalent place for graph computation (i.e. dataflow) is build scripts, since they try to avoid rebuilding parts of the graph that weren’t affected by a change to the codebase. There are lots of others too (GUIs, node editors used for procedural graphics and audio, dependency managers, etc.).
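
                            Spreadsheets are the push-based flavor of this: editing a cell eagerly recomputes everything that references it, rather than lazily on demand. A minimal sketch (Python; the Cell class and the A1/B1/C1 names are invented for illustration):

                            ```python
                            # Toy spreadsheet: editing a cell eagerly pushes the change to
                            # every formula that references it, like a recalculation pass.
                            class Cell:
                                def __init__(self, value=None, formula=None, inputs=()):
                                    self.formula, self.inputs = formula, list(inputs)
                                    self.listeners = []
                                    for cell in self.inputs:
                                        cell.listeners.append(self)
                                    self.value = value if formula is None else self.recalc()

                                def recalc(self):
                                    self.value = self.formula(*(c.value for c in self.inputs))
                                    for cell in self.listeners:  # push downstream, eagerly
                                        cell.recalc()
                                    return self.value

                                def set(self, value):
                                    # Edit a plain (non-formula) cell.
                                    self.value = value
                                    for cell in self.listeners:
                                        cell.recalc()

                            a1, b1 = Cell(2), Cell(3)          # A1 = 2, B1 = 3
                            c1 = Cell(formula=lambda x, y: x * y, inputs=(a1, b1))  # C1 = A1*B1
                            print(c1.value)  # 6
                            a1.set(7)        # editing A1 recalculates C1 automatically
                            print(c1.value)  # 21
                            ```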

                            Dataflow languages aren’t a new idea (see [Lucid](https://en.wikipedia.org/wiki/Lucid_(programming_language)), Lustre, and functional languages that encourage lazy computation, like Haskell); it’s just that the paradigm is better understood now by more programmers. With the popularity of functional front-end JavaScript frameworks, like React, and a recent influx of functional programming in other areas as well (e.g. Scala and Clojure), I think there’s an opportunity to move more code into a dataflow paradigm.

                            I’m not sure that it’s the ultimate solution, but it would help if there were a unified language/platform for doing dataflow in a generic manner.

                            Also related to this, I’m of the opinion that 95% of the business logic for most applications could be rewritten as a couple of relational database queries. (The other 5% of code, which handles the interactive/mutation-y/side-effect-y parts of the application, can probably be written as a dataflow program.) Again, not a new idea (Out of the Tar Pit covers it in great detail), but I think relational DBs solve more problems than people give them credit for. We basically keep trying to solve the same problems that databases had worked out in the 90s:

                            • In-memory dataframes have become popular, but are difficult to stream/chunk when they use more memory than is available. This problem is solved (for free) by databases.
                            • Applications applying data-oriented design (popular in e.g. game engine development) work really hard to design their C++ data structures in the form of normalized database schemas for performance/ease of serialization/better debuggability. Databases are already structured in a data-oriented manner (assuming the schema is laid out sensibly).
                            • Web APIs that utilize REST/GraphQL/OData provide CRUD operations and query languages that all pale in comparison to the power/performance/flexibility/stability of a regular ol’ relational database. Also, JSON and XML are pretty inefficient data formats, and HTTP is overly bloated for data transfer.
                            • ps, ls, ifconfig, cat are all ad-hoc select statements from different “tables.” grep and awk are ad-hoc where clauses. uniq is a select distinct, and uniq -c is a count with a group by. There’s even join (which I would use more often if command outputs were more structured). If the OS had a standard database interface layered on top, I wouldn’t have to remember these arcane commands, each with its own syntax[1] (see the sketch below).
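
                            To make that mapping concrete, here’s a small sketch using Python’s built-in sqlite3 (the processes table and its rows are made up for illustration):

                            ```python
                            import sqlite3

                            # A hypothetical "processes" table standing in for `ps` output.
                            db = sqlite3.connect(":memory:")
                            db.execute("CREATE TABLE processes (pid INTEGER, user TEXT, cmd TEXT)")
                            db.executemany("INSERT INTO processes VALUES (?, ?, ?)",
                                           [(1, "root", "init"), (42, "alice", "vim"),
                                            (43, "alice", "vim"), (44, "bob", "sh")])

                            # ps | grep vim  ->  an ad-hoc select with a where clause
                            for row in db.execute(
                                    "SELECT pid, user FROM processes WHERE cmd = 'vim'"):
                                print(row)  # (42, 'alice') then (43, 'alice')

                            # sort | uniq -c  ->  COUNT(*) with GROUP BY
                            for row in db.execute(
                                    "SELECT cmd, COUNT(*) FROM processes GROUP BY cmd"):
                                print(row)  # e.g. ('init', 1), ('sh', 1), ('vim', 2)
                            ```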

                            Some other benefits include easy reflection/metaprogramming, more conventional structure than other paradigms, cache/disk-friendly access patterns, and straightforward performance tuning, but this comment is getting long as it is :P

                            Databases aren’t drop-in replacements for all of the problems listed above. You can’t just remove the REST API your company provides and give everyone on the outside a connection string to the production DB that backs it. However, if more of a focus were put on using databases as a foundation, I think things could be a lot simpler.

                            So actually I think the way of the future is relational databases[2] minimally wrapped in dataflow shells :)

                            [1] I still love Unix. I can get approximate answers with it faster than anything else. But I would prefer a little more consistency and structure.

                            [2] To be clear, I think SQL sucks, PL/SQL and TSQL even more so, but the underlying relational model is extremely practical and IMHO easy to reason about.

                            1. 2

                              Thanks for the extensive reply, and thanks too to @cos for the link.

                              (Incidentally, I found this link to be useful as well: https://lobste.rs/s/92nd7g)

                              I’m not convinced, however, that Unix not being built from the ground up to support this kind of paradigm is a huge dealbreaker.

                              What I do find interesting is that a certain subsection of Unix/Linux users and programmers are dead-set against moving the platform further from its 1970s roots, but that’s a discussion for another time.

                            2. 2

                              First off, is “Reactive” a brand name, or a type of user application? Googling doesn’t really help, because I can’t add a qualifier to “reactive”.

                              I think this is what nc is talking about.

                          2. 5

                            High-level programming - STATUS: NOT YET

                            Interesting. This list is all par for the course for the “I will refuse to program in anything but ANSI C” crowd, but this guy’s a bit more tolerant, considering newer versions of C and a high-level programming language at all. Normally this spot would be filled by shell scripting and awk.

                            I’m curious to see what such a programming language would look like, and under what criteria it would be judged to be “simple”.

                            1. 13

                              What surprises me about all such manifestos: if you are reimplementing things from scratch and intentionally omitting dynamic linking, why use C rather than a language with unambiguous semantics, like Modula-2 or Oberon?

                              1. 13

                                I think these manifesti are mostly about paying lip service to an aesthetic of simplicity rather than being driven by any practical concerns. C is just the cool language to use in those circles.

                              2. 5

                                I linked before, but I think a simple modern lisp is a good candidate for a high-level language.

                                Janet is a good example.

                                1. 5

                                  I’m curious to see what such a programming language would look like, and under what criteria it would be judged to be “simple”.

                                  Neither C, shell, nor awk seems to go that way. I could imagine a “simple” language being some subset of SML or RnRS (with n = 4 or 5, maybe?) Scheme; that is, languages often considered esoteric by the ANSI C crowd, though they are designed to be simple.

                                  I think it would make an interesting blog post: what constitutes “simple” in a language? I think many people conflate “simple” with “I am proficient in it”. C is by no means simple, and languages like JavaScript are anything but; coming from other languages, you’re baffled by what complex beasts they are.

                                  1. 4

                                    Yeah, why not Lua? Lua is 100% ANSI C, which seems to fit the aesthetic.

                                    Personally I like writing code in Python more than Lua, but Lua seems to fit here. Janet too as mentioned.

                                  2. 5

                                    A bit of an odd list, really. If kernel and GUI are categories, why not the whole graphical desktop stack in between? And why is git mentioned, and marked as not simple, when there are plenty of non-simple projects that could fit the other categories?

                                    “Utilities” is a bit of an odd category as well. I had to look at the source tree of the example to see what it meant. Following the same logic, and considering principle 4, you could also lump compiler, assembler, binutils, libc and build system together into one item. That way, the high-level language doesn’t have to be C, in the same way the shell doesn’t have to be anything in particular.

                                    The contents of this list form a non-simple system. It’s clear by now (and from other comments) that this is one definition of “simple”, for the author specifically. Otherwise there would be no C and POSIX and you’d probably be able to simplify the list down to one item (Smalltalk, perhaps).

                                    1. 3

                                      I love this idea of “simplicity”, or rather – minimalism. I even practice it when I can. However, your average user really couldn’t care less about basically all points listed on the page. They want their system to JustWork™.

                                      The maintenance of the entire system is your responsibility.

                                      See, that’s the thing: not everyone sees their system as something they want to maintain, even some who are technically inclined and could do so if they wished. They need their machine to get out of the way and let them do their job, not make them tinker with stuff to get basic functionality working.

                                      1. 1

                                        UTF-8 is the only permissible text encoding. Software may assume all text is UTF-8 and waste no effort on other encodings.

                                        I hope you don’t want to use old fonts that might only have strings in the Mac or Windows encodings used at the time.
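
                                        For what it’s worth, the failure mode is easy to demonstrate (a Python sketch; the string is contrived):

                                        ```python
                                        # A string stored in Mac Roman, as in some older font files.
                                        raw = "café".encode("mac_roman")   # b'caf\x8e'

                                        print(raw.decode("mac_roman"))     # 'café' with the right table
                                        print(raw.decode("utf-8", errors="replace"))  # 'caf\ufffd'

                                        try:
                                            raw.decode("utf-8")            # assuming UTF-8 blows up here
                                        except UnicodeDecodeError as e:
                                            print("not UTF-8:", e)
                                        ```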