1. 11

    I kinda wish I hadn’t learned Vim — just so I could learn a more modern modal editor, Kakoune. Relearning the commands pretty much in reverse order (“word delete” instead of “delete word”) sounds like hell. Now, I did learn to type on Colemak after QWERTY… but the Vim Muscle Memory™ seems much stronger than the QWERTY one was.

    faking multiple-cursor support by coloring parts of the buffer to look like cursors and then repeating the actual cursor’s actions at those regions

    Oh that’s how it’s actually implemented!! Dang, that’s pretty clever.

    1. 7

      After using Vim pretty much daily for nearly two decades, a month or so of light Kakoune usage was enough to wean me off Vim entirely. Vim’s a big program with a lot of features, so maybe I wasn’t using the really addictive ones, but I’ve seen similar comments from other people in #kakoune, so I’m not alone.

      1. 2
        1. 2

          The key-chording thing is definitely an issue, but one that can be worked around to some extent with mappings. I don’t think there are any two-modifier keystrokes I use regularly, so it takes me at least as long to remember them as it does to type them.

          I was actually pretty impressed with how string-quoting works in Kakoune. Although normal quoting works as you’d expect, the nestable quoting syntax %{} is, well, nestable, which means it almost never needs escaping and so deeply-nested string quoting is just about as natural as {} blocks in C.
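
          For anyone who hasn’t seen Kakoune’s syntax, a rough sketch of what that nesting looks like (the command name and message are made up; the point is just that balanced %{} blocks nest without any escaping):

          define-command -docstring %{say hello} hello %{
              evaluate-commands %{
                  echo %{hello from a doubly-nested block}
              }
          }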

          That said, Kakoune’s “scripting” is definitely unusual… it’s very much in the vein of basic Unix and Plan9 tools, where it’s blissfully easy to hack together a solution that solves a specific problem, but trying to generalise that solution is nearly impossible. I do find that frustrating, but would I be more productive with an editor scripted in, say, Haskell? Probably not.

          I can navigate by paragraph with [p and ]p. I don’t think I’ve ever deliberately used , in Vim or Kakoune, but for any kind of repetition I generally hold down ‘X’ to select a bunch of lines and then use the s command to create a selection for each thing I want to change. If I hit wi, that starts inserting before the word I just moved over; wa starts inserting after the word I just moved over (including the trailing whitespace). If you want to insert after the word but before the whitespace, then yes, you’d need w;i… or just ea. I remember when I started using Kakoune, moving the selection around to the place I wanted it to be felt a bit like a sliding block puzzle, or a game of Snake, where I had to think about where my tail was going to end up as well as getting my head in the right place. I got used to it pretty quickly, though.

          In Kakoune as well as in Vim, I tend to open an editor session in the root directory of my project and open files with relative paths from there (unless I can get there with gf, or I have a handy “find” command that autocompletes all the filenames in my project so I can just type a few letters of the file I want to open). I can see the appeal of having a current directory per buffer, but I guess I’ve learned to think of my project’s structure from a top-down perspective rather than bottom up.

          At the end of the day, though, Kakoune’s got multiple cursors, and I don’t think I’ll ever be able to give that up.

      2. 1

        Why would you want to switch from modal vim to Kakoune? After browsing the top features of Kakoune, I didn’t see anything there that either isn’t supported natively in vim or couldn’t be enabled by a plugin or two… So I’m genuinely curious what the motivation is behind the desire to switch.

        1. 2

          It’s not about features, it’s about this:

           Faster as in less keystrokes

          https://kakoune.org/why-kakoune/why-kakoune.html#_improving_on_the_editing_model

          1. 1

            Thanks, wow, that is an incredible concept. I guess I should have spent more time clicking around. Seems like they’d want to call that out on the front page!

            1. 1

              “Faster as in less keystrokes” is what’s on the front page. Yeah, I guess they could’ve linked that text to the why page…

              1. 1

                Sure, but it literally does not describe wtf that means. I had assumed they were just referring to macro support…

        1. 4

          Did anybody click on the “Programmer-archeologist” link and find that the subject of the referenced thread is a broken link? Archive link.

          A subject close to my heart; a few years ago I transcribed that passage from “A Deepness in the Sky”.

          1. 8

            This is fantastic. Just watching the screencast is getting my creative juices flowing. What would a shell that always had this look like? We’d need some way to handle destructive commands like rm. Or commands that don’t just take time to run but also are expensive to rerun.

            1. 9

              <3 :)

              Actually, after I imagined it, I found it kinda hard to believe nobody seems to have done it earlier?… I think the pieces were mostly all there since the dawn of Unix, or before still.

              For me, Luna is the logical next step. But I got the idea before I learnt about them, and Luna is still a bit too early to be usable for this use case, so I couldn’t bring myself to wait any longer. But I would totally love to concede to them in the long term, and I’m looking forward to a future where Luna reigns…

              As to rm… when I first had the idea, I was kinda scared that it would be dangerous… but then, it’s hard to accidentally type rm, and you need to give it some params anyway… So, in the end, I couldn’t think of any actual commands that would be more-than-typically risky here. That said, I have an idea for a way to protect against some of the simplest accidents. But, uh, I have really a lot of ideas for up, and I really wanted to share it as soon as possible, in some useful enough form… I myself find it hard to live without it at my hand’s reach, now that I know it exists.

              edit: Personally, I’m kinda stoked about one more idea I mentioned in the readme: of capturing stdout of already running processes… like if you do something, then be surprised it takes longer than expected… not wanting to kill it now and restart with a pipe… just run a capture command after the fact, maybe plug it in to up, and go on like nothing happened… :) this could be a totally separate tool. In fact, AFAIK there are already some like this, but I think they may need some refreshing…
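
              (A concrete sketch, in case it helps picture it: the up readme demos the normal flow with something like lsof | up, i.e. you commit to the pipe up front; the after-the-fact capture would be an extra, currently hypothetical step before that.)

              # today: decide to pipe into up when you start the command
              lsof | up

              # the idea above, with a purely hypothetical capture tool that grabs
              # the stdout of an already-running process by PID
              # capture "$(pgrep -f some_slow_command)" | up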

              edit 2: as to rm… uh, oh, now I think of it, a tool is not really Unixy if you can’t potentially hurt yourself with it, amirite? ;P

              1. 7

                Actually, after I imagined it, I found it kinda hard to believe nobody seems to have done it earlier?… I think the pieces were mostly all there since the dawn of Unix, or before still.

                It was “done” before: Pipecut, which seems never to have been released. The author wanted to “clean up some code first” and there you go. :( Perfection is the enemy of something or other.

                Glad someone actually finished something like this.

                1. 3

                  I remember watching the video for that (with slides) thinking it was a great idea. Not sure what happened to it, but I’m glad somebody came up with the good idea and is moving forward with it.

                  Note to @akavel, please go through pipecut’s ideas (even if you don’t use them). One of the things I remember is that he had thought out a lot of the design, so you can figure out some of the design decisions even faster.

                  1. 1

                    Oh, awesome, thanks a ton! I didn’t notice the links to slides and video when glancing through their website, and your kind and thoughtful recommendation makes me really want to check them. Thanks!

                  2. 1

                    At this time, the author wants to clean up some portions of the code … The code will be published at code.google.com/p/pipecut

                    Wow — haven’t seen a Google Code link in a while 😄

                    1. 1

                      Oh, interesting, thanks! I’ll add it as “prior art” in the readme then.

                    2. 2

                      I’m kinda stoked about one more idea I mentioned in the readme: of capturing stdout of already running processes… like if you do something, then be surprised it takes longer than expected… not wanting to kill it now and restart with a pipe… just run a capture command after the fact, maybe plug it in to up, and go on like nothing happened… :)

                      https://github.com/nelhage/reptyr
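
                      For reference, reptyr’s basic usage is just pointing it at a PID to pull an already-running process into your current terminal (the PID below is made up; depending on the system you may also need ptrace permissions):

                      # re-attach the running process with PID 12345 to this terminal
                      reptyr 12345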

                      1. 2

                        Yes :) Thanks! Also potentially: neercs, injcode. I just haven’t found time to research them enough yet to learn how to make them cooperate best…

                  1. 3

                    What’s the portability story across these differently customized processors, anybody know?

                    1. 1

                      Wouldn’t that be like SSE or AltiVec for instance, where some new instructions are available and detectable at runtime? I’m personally really curious about performance vs ARM or even x86 architectures.

                      1. 4

                        re: performance

                        https://www.sifive.com/products/risc-v-core-ip/u5/u54-mc/ (tl;dr ARM v8-ish). Note that most ASICs to date (there are only a few) are at the 32-bit MCU scale (i.e. M0/M4 class); the U54 is the only silicon (so far) that is Linux capable.

                        These guys are doing some interesting things performance-wise (https://www.esperanto.ai/), but they’re a few years away at least.

                    1. 4

                      @akkartik what are your thoughts on having many little languages floating around?

                      1. 9

                        I see right through your little ploy to get me to say publicly what I’ve been arguing privately to you :) Ok, I’ll lay it out.

                        Thanks for showing me this paper! I’d somehow never encountered it before. It’s a very clear exposition of a certain worldview and way of organizing systems. Arguably this worldview is as core to Unix as “do one thing and do it well”. But I feel this approach of constantly creating small languages at the drop of a hat has not aged well:

                        • Things have gotten totally insane when it comes to the number of languages projects end up using. A line of Awk here, a line of Sed there, makefiles, config files, m4 files, Perl, the list goes on and on. A newcomer may want to poke at any of these, and now (s)he may have to sit with a lengthy manpage for a single line of code. (Hello man perl with your 80+ parts.) I’m trying to find this egregious example in my notes, but I noticed a year or two ago that some core Ruby project has a build dependency on Python. Or vice versa? Something like that. The “sprawl” in the number of languages on a modern computer has gotten completely nuts.

                        • I think vulnerabilities like Shellshock are catalyzing a growing awareness that every language you depend on is a potential security risk. A regular tool is fairly straightforward: you just have to make sure it doesn’t segfault, doesn’t clobber memory out of bounds, doesn’t email too many people, etc. Non-trivial but relatively narrow potential for harm. Introduce a new language, though, and suddenly it’s like you’ve added a wormhole into a whole new universe. You have to guard against problems with every possible combination of language features. That requires knowing about every possible language feature. So of course we don’t bother. We just throw up our arms and hope nothing bad happens. Which makes sense. I mean, do you want to learn about every bone-headed thing somebody threw into GNU make?!

                        Languages for drawing pictures or filling out forms are totally fine. But that’s a narrower idea: “little languages to improve the lives of non-programmers”. When it comes to “little languages for programmers” the inmates are running the asylum.

                        We’ve somehow decided that building a new language for programmers is something noble. Maybe quixotic, but high art. I think that’s exactly wrong. It’s low-brow. Building a language on top of a platform is the easy expedient way out, a way to avoid learning about what already exists on your platform. If existing languages on your platform make something hard, hack the existing languages to support it. That is the principled approach.

                        1. 4

                          I think the value of little languages comes not from what they let you do, but rather what they won’t let you do. That is, have they stayed little? Your examples such as Perl, Make etc. are languages that did not stay little, and hence are no longer as helpful (because one has to look at 80+ pages to understand the supposedly little language). I would argue that those that have stayed little are still very much useful and do not contribute to the problem you mentioned (e.g. grep, sed, troff, dc – although even these have been affected by feature creep in the GNU world).

                          Languages for drawing pictures or filling out forms are totally fine. But that’s a narrower idea: “little languages to improve the lives of non-programmers”. When it comes to “little languages for programmers” the inmates are running the asylum.

                          This I agree with. The little languages have little to do with non-programmers; as far as I am concerned, their utility is in the discipline they impose.

                          1. 3

                            On HN a counterpoint paper was posted. It argues that using embedded domain specific languages is more powerful, because you can then compose them as needed, or use the full power of the host language if appropriate.

                            Both are valid approaches, however I think that if we subdivide the Little Languages the distinction becomes clearer:

                            • languages for describing something (e.g. regular expression, format strings, graph .dot format, LaTeX math equations, etc.) that are usable both from standalone UNIX tools, and from inside programming languages
                            • languages with a dedicated tool (awk, etc.) that are not widely available embedded inside other programming languages. Usually these languages allow you to perform some actions / transformations

                            The former is accepted as “good” by both papers; in fact, the re-implementation of awk in Scheme from the 2nd paper uses regular expressions.
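
                            To make that distinction concrete, here is the same “describing” little language (a regular expression) used once from a standalone UNIX tool and once embedded in a host language (the file name and pattern are just examples):

                            # standalone tool
                            grep -E '^[0-9]+$' numbers.txt

                            # embedded in a host language
                            python3 -c 'import re; print(bool(re.fullmatch(r"[0-9]+", "42")))'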

                            The latter is limited in expressiveness once you start using them for more than just ad-hoc transformations. However they do have an important property that contributes to their usefulness: you can easily combine them with pipes with programs written in any other language, albeit only as streams of raw data, not in a type-safe way.

                            With the little language embedded inside a host language you get more powerful composition, however if the host language doesn’t match that of the rest of your project, then using it is more difficult.

                            1. 3

                              First, a bit of critique on Olin Shivers’ paper!

                              • He attacks the little languages as ugly, idiosyncratic, and limited in expressiveness. While the first two are subjective, I think he misses the point when he says they are limited in expressiveness. That is sort of the point.
                              • Second, he criticizes that a programmer has to implement an entire language including loops, conditionals, variables, and subroutines, and that these can lead to suboptimal design. Here again, in a little language, each of these structures such as variables, conditionals, and loops should not be included unless there is a very strong argument for including it. The rest of the section (3) is more an attack on incorrectly designed little languages than on the concept of little languages per se. The same attacks can be leveled against his preferred approach of embedding a language inside a more expressive language.

                              For me, the whole point of little languages has been the discipline they impose. They let me remove considerations of other aspects of the program, and focus on a small layer or stage at a time. It helps me compose many little stages to achieve the result I want in a very maintainable way. On the other hand, while embedding, as Shivers observes, the host language is always at hand, and the temptation for a bit of optimization is always present. Further, the host language does not always allow the precise construction one wants to use, and there is an impedance mismatch between the domain lingo and what the host language allows (as you also have observed). For example, see section 5.1 of the quoted paper by Shivers.

                              My experience has been that programs written in the fashion prescribed by Shivers often end up much less readable than those written as a pipeline of little-language stages.

                              1. 1

                                That’s tantalizing. Do you have any examples of a large task built out of little stages, each written in its own language?

                                1. 2

                                  My previous reply was a bit sparse. Since I have a deadline coming up, and this is the perfect time to write detailed posts on the internet, here goes :)

                                  In an earlier incarnation, I was an engineer at Sun Microsystems (before the Oracle takeover). I worked on the iPlanet[1] line of web and proxy servers, and among other things, I implemented the command line administration environment for these servers[2] called wadm. This was a customized TCL environment based on Jacl. We chose Jacl as the base after a careful study that looked both at where it was going to be used most (as an interactive shell environment) and at its ease of extension. I prefer to think of wadm as its own little language above TCL because it had a small set of rules beyond TCL, such as the ability to infer the right options based on the current environment, that made life a bit simpler for administrators.

                                  At Sun, we had a very strong culture of testing, with a dedicated QA team that we worked closely with. Their expertise was the domain of web and proxy servers rather than programming. For testing wadm, I worked with the QA engineers to capture their knowledge as test cases (and to convert existing ad-hoc tests). When I looked at existing shell scripts, it struck me that most of the testing was simply invoking a command line and verifying the output. Written out as a shell script, these may look ugly to a programmer because the scripts are often flat, with few loops or other abstractions. However, I have since come to regard them as a better style for the domain they are in. Unlike in general programming, for testing, one needs to make the tests as simple as possible, and loops and subroutines often make simple stuff more complicated than it is. Further, tests once written are almost never reused (as in, as part of a larger test case), but only rerun. Further, what we needed was a simple way to verify the output of commands based on some patterns, the return codes, and simple behavior such as the response to specific requests, and the contents of a few administration files.

                                  So, we created a testing tool called cat (command line automation tool) that essentially provided a simple way to run a command line and verify its result. This was very similar to expect[3]. It looked like this:

                                  wadm> list-webapps --user=admin --port=[ADMIN_PORT] --password-file=admin.passwd --no-ssl
                                  /web-admin/
                                  /localhost/
                                  =0
                                  
                                  wadm> add-webapp --user=admin --port=[ADMIN_PORT] --password-file=admin.passwd --config=[HOSTNAME] --vs=[VIRTUAL_SERVER] --uri=[URI_PATH]
                                  =0 
                                  

                                  The =0 implies the return code would be 0, i.e. success. For matching, // represented a regular expression, “” represented a string, [] represented a shell glob, etc. Ordering was not important, and all matches had to succeed. The names in square brackets were variables that were passed in from the command line. If you look at our man pages, this is very similar to the format we used in the man pages and other docs.

                                  Wadm had two modes – stand-alone, and as a script (other than the repl). In script mode, the file containing wadm commands was simply interpreted as a TCL script by the wadm interpreter when passed as a file input to the wadm command. In stand-alone mode, wadm accepted a sub-command of the form wadm list-webapps --user=admin ... etc., which could be executed directly from the shell. The return codes (=0) are present only in stand-alone mode, and do not exist in TCL mode, where exceptions were used. With the test cases written in cat, we could make it spit out either a TCL script containing the wadm commands, or a shell script containing stand-alone commands (it could also directly interpret the language, which was its most common mode of operation).

                                  The advantage of doing it this way was that it provided the QA engineers, with their domain knowledge, an easy environment to work in. The cat scripts were simple to read and maintain. They were static, and eschewed complexities such as loops, changing variable values, etc., and could handle what I assumed to be 80% of the testing scenarios. For the 80% of the remaining 20%, we provided simple loops and loop variables as a pre-processor step. If the features of cat were insufficient, engineers were welcome to write their test cases in any of perl, tcl, or shell (I did not see any such scripts during my time there). The scripts spat out by cat were easy to check and were often used as recipes for accomplishing particular tasks by other engineers. All this was designed and implemented in consultation with the QA engineers, with their active input on what was important and what was confusing.

                                  I would say that we had these stages in the end:

                                  1. The preprocessor that provides loops and loop variables.
                                  2. cat that provided command invocation and verification.
                                  3. wadm that provided a custom TCL+ environment.
                                  4. wadm used the JMX framework to call into the webserver admin instance. The admin instance also exposed a web interface for administration.

                                  We could instead have done the entire testing of the web server by just implementing the whole thing in Java. While it may have been possible, I believe that splitting it out into stages, each with its own little language, was better than such a step. Further, I think that keeping the little language cat simple (without subroutines, scopes etc) helped in keeping the scripts simple and understandable, with little cognitive overhead for its intended users.

                                  Of course, each stage had existence on its own, and had independent consumers. But I would say that the consumers at each stage could have chosen to use any of the more expressive languages above them, and chose not to.

                                  1: At the time I worked there, it was called the Sun Java System product line.

                                  2: There existed a few command lines for the previous versions, but we unified and regularized the command line.

                                  3: We could not use expect as Jacl at that time did not support it.

                                  1. 1

                                    Surely, this counts as a timeless example?

                                    1. 1

                                      I thought you were describing decomposing a problem into different stages, and then creating a separate little DSL for each stage. Bentley’s response to Knuth is just describing regular Unix pipes. Pipes are great, I use them all the time. But I thought you were describing something more :)

                                      1. 1

                                        Ah! From your previous post

                                        A line of Awk here, a line of Sed there, makefiles, config files, m4 files, Perl, the list goes on and on … If existing languages on your platform make something hard, hack the existing languages to support it. That is the principled approach.

                                        I assumed that you were against that approach. Perhaps I misunderstood. (Indeed, as I re-read it, I see that I have misunderstood… my apologies.)

                                        1. 1

                                          Oh, Unix pipes are awesome. Particularly at the commandline. I’m just wondering (thinking aloud) if they’re the start of a slippery slope.

                                          I found OP compelling in the first half when it talks about PIC and the form language. But I thought it went the wrong way when it conflated those phenomena with lex/yacc/make in the second half. Seems worth adding a little more structure to the taxonomy. There are little languages and little languages.

                                          Languages are always interesting to think about. So even as I consciously try to loosen their grip on my imagination, I can’t help but continue to seek a more steelman defense for them.

                              2. 2

                                Hmm, I think you’re right. But the restrictions a language imposes have nothing to do with how little it is. Notice that Jon Bentley calls PIC a “big little language” in OP. Lex and yacc were tiny compared to their current size, and yet Jon Bentley’s description of them in OP is pretty complex.

                                I’m skeptical that there’s ever such a thing as a “little language”. Things like config file parsers are little, maybe, but certainly by the time it starts looking like a language (as opposed to a file format) it’s well on its way to being not-little.

                                Even if languages can be little, it seems clear that they’re inevitably doomed to grow larger. Lex and Yacc and certainly Make have not stood still all these years.

                                So the title seems a misnomer. Size has nothing to do with it. Rust is not small, and yet it’s interesting precisely because of the new restrictions it imposes.

                              3. 3

                                I use LPeg. It’s a Lua module that implements Parsing Expression Grammars and in a way, it’s a domain specific language for parsing text. I know my coworkers don’t fully understand it [1] but I find parsing text via LPeg to be much easier than in plain Lua. Converting a name into its Soundex value is (in my opinion) trivial in LPeg. LPeg even comes with a sub-module to allow one to write BNF (here’s a JSON parser using that module). I find that easier to follow than just about any codebase you could present.

                                So, where does LPeg fall? Is it another language? Or just an extension to Lua?

                                I don’t think there’s an easy answer.

                                [1] Then again, they have a hard time with Lua in general, which is weird, because they don’t mind Python, and if anything, Lua is simpler than Python. [2]

                                [2] Most programmers I’ve encountered have a difficult time working with more than one or two languages, and it takes them a concerted effort to “switch” to a different language. I don’t have that issue—I can switch among languages quite easily. I wonder if this has something to do with your thoughts on little languages.

                                1. 2

                                  I think you are talking about languages that are not little, with large attack surfaces. If a language has a lengthy man page, we are no longer speaking about the same thing.

                                  Small configuration DSLs (TOML, etc), text search DSLs (regex, jq, etc), etc are all marvelous examples of small languages.

                                  1. 1

                                    My response to vrthra addresses this. Jon Bentley’s examples aren’t all that little either.[1] And they have grown since, like all languages do.

                                    When you add a new language to your project you aren’t just decorating your living room with some acorns. You’re planting them. Prepare to see them grow.

                                    [1] In addition to the quote about “big little language”, notice the “fragment of the Lex description of PIC” at the start of page 718.

                                    1. 1

                                      What, so don’t create programming languages because they will inevitably grow? What makes languages different from any other interface? In my experience, interfaces also tend to grow unless carefully maintained.

                                      1. 2

                                        No, that’s not what I mean. Absolutely create programming languages. I’d be the last to stop you. But also delete programming languages. Don’t just lazily add to the pile of shit same as everybody else.

                                        And yes, languages are exactly the same as any other interface. Both tend to grow unless carefully maintained. So maintain, dammit!

                              1. 5

                                This was great. I particularly appreciated the story about “kludges” 😂

                                Several quotes apply to far more than Lisp:

                                • “The Hamster Wheel of Backwards Incompatibility we deal with every day is a fact of life in most modern languages, though some are certainly better than others.. I try to stick to fewer than ten or so dependencies for my applications and no more than two or three for my libraries (preferably zero, if possible), but I’m probably a bit more conservative than most folks. I really don’t like the Hamster Wheel.”

                                • ” Skimming is a very useful skill to practice as a programmer. I think it’s better for authors to err on the side of explaining too much when writing books and documentation — expert readers should be comfortable skimming if you explain too much, but new users will be stuck wallowing in confusion if you’re too terse. Creating hours of newbie misery and confusion to save a few flicks of an expert’s scroll wheel is a poor tradeoff to make.”

                                • “If your arrow keys and backspace don’t work in the REPL, use rlwrap to fix that. rlwrap is a handy tool to have in your toolbox anyway.”
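
                                (In case anyone hasn’t used it: rlwrap just wraps another command with readline, so any bare REPL gets history and sane line editing. sbcl below is only an example of such a REPL.)

                                rlwrap sbcl    # arrow keys, backspace, and history now work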

                                1. 2
                                  1. 1

                                    Where did you find this, @nicebyte?

                                    1. 1

                                      I am the author

                                      1. 1

                                        I get that. I meant, where did you find the quine you describe?

                                        1. 1

                                          Oh, the author posted it on twitter and I came across it via retweets/likes

                                    1. 6

                                      Extensibility and re-usability are potential goals for system boundaries only. …creating something that’s re-usable is pretty inherently about creating something that’s a system boundary, to some degree or another. And if you’re knowingly working on a system boundary… you’re knowingly working on something that’s supposed to be re-usable already. It’s pretty redundant to hail it as a design goal.

                                      This is great. I’ve been collecting anti-reuse, anti-abstraction links. Every time I add one to my collection I’m going to share the whole thing.

                                      1. 4

                                        The part you quoted is also followed by:

                                        System boundaries are the danger zones of design. To whatever extent possible, we don’t want to create them unnecessarily. Any mistakes we make there are frozen, under threat of expensive breaking changes to fix.

                                        This is a nice counter to the approach of “all classes should be isolated”, “inject everything”, “never use ‘new’”, “mock everything”, etc. that I’ve encountered in old jobs. It turns the implementation detail of internal code organisation boundaries (i.e. classes) into faux system boundaries, which makes them much more rigid and burdensome to change, slowing us down and discouraging refactoring.

                                      1. 34

                                        Good talk.

                                        I recently used systemd “in anger” for the first time on a raspi device to orchestrate several scripts and services, and I was pleasantly surprised (but also not surprised, because the FUD crowd is becoming more and more fingerprintable to me). systemd gives me lifecycle, logging, error handling, and structure, declaratively. It turns out structure and constraints are really useful; this is also why Go has fast dependency resolution.
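
                                        For anyone who hasn’t written one, this is the kind of declarative unit I mean; a minimal sketch with made-up names and paths:

                                        [Unit]
                                        Description=Example sensor-collection script
                                        After=network-online.target

                                        [Service]
                                        ExecStart=/usr/local/bin/collect-sensors.sh
                                        Restart=on-failure
                                        # stdout/stderr land in the journal automatically, which is the logging mentioned above

                                        [Install]
                                        WantedBy=multi-user.target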

                                        It violates unix philosophy

                                        That accusation was also made against neovim. The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about.

                                        1. 22

                                          The declarative units are definitely a plus. No question.

                                          I was anti-systemd when it started gaining popularity, because of the approach (basically kitchen-sinking a lot of *NIX stuff into a single project) and the way the project leader(s) respond to criticism.

                                          I’ve used it since it was default in Debian, and the technical benefits are very measurable.

                                          That doesn’t mean the complaints against it are irrelevant though - it does break the Unix philosophy I think most people are referring to:

                                          Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.

                                          1. 30

                                            If you believe composability (one program’s output is another program’s input) is an important part of The Unix Philosophy, then ls violates it all day long, always has, likely always will. ls also violates it by providing multiple ways to sort its output, when sort is right there, already doing that job. Arguably, ls formatting its output is a violation of Do One Thing, because awk and printf exist, all ready to turn neat columns into human-friendly text. My point is, The Unix Philosophy isn’t set in stone, and never has been.

                                            1. 7

                                              Didn’t ls predate the Unix Philosophy? There’s a lot of cruft and history in unix. dd is another example.

                                              None of that invalidates the philosophy that arose through an extended design exploration and process.

                                              1. 4

                                                nobody said it’s set in stone; it’s a set of principles to be applied based on practicality. like any design principle, it can be applied beyond usefulness. some remarks:

                                                • i don’t see where ls violates composability. the -l format was specifically designed to be easy to grep.
                                                • the sorting options are an example of practicality. they don’t require a lot of code, and would be much more clumsy to implement as a script (specifically when you don’t output the fields you’re sorting on)
                                                • about formatting, i assume you’re referring to columniation, which to my knowledge was not in any version of ls released by Bell Labs. checking whether stdout is a terminal is indeed an ugly violation.
                                                1. 6

                                                  i don’t see where ls violates composability. the -l format was specifically designed to be easy to grep.

                                                  People have written web pages on why parsing the output of ls is a bad idea. Using ls -l doesn’t solve any of these problems.

                                                  As a matter of fact, the coreutils people have this to say about parsing the output of ls:

                                                  However ls is really a tool for direct consumption by a human, and in that case further processing is less useful. For further processing, find(1) is more suited.

                                                  Moving on…

                                                  the sorting options are an example of practicality. they don’t require a lot of code, and would be much more clumsy to implement as a script (specifically when you don’t output the fields you’re sorting on)

                                                  This cuts closer to the point of what we’re saying, but here I also have to defend my half-baked design for a True Unix-y ls Program: It would always output all the data, one line per file, with filenames quoted and otherwise prepared such that they always stick to one column of one line, with things like tab characters replaced by \t and newline characters replaced by \n and so on. Therefore, the formatting and sorting programs always have all the information.
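
                                                  A rough approximation of that design is already possible with GNU find’s -printf (the format here is chosen just for illustration, and it doesn’t do the \t/\n escaping described above):

                                                  # one record per line: perms, owner, group, size, mtime (epoch), name
                                                  find . -mindepth 1 -maxdepth 1 -printf '%M %u %g %s %T@ %p\n' |
                                                    sort -k5,5n    # let sort(1) do the sorting instead of ls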

                                                  But, as I said, always piping the output of my ls into some other script would be clumsier; it would ultimately result in some “human-friendly ls” which has multiple possible pipelines prepared for you, selectable with command-line options, so the end result looks a lot like modern ls.

                                                  about formatting, i assume you’re referring to columniation, which to my knowledge was not in any version of ls released by Bell Labs. checking whether stdout is a terminal is indeed an ugly violation.

                                                  I agree that ls shouldn’t check for a tty, but I’m not entirely convinced no program should.

                                                  1. 4

                                                    just because some people discourage composing ls with other programs doesn’t mean it’s not the unix way. some people value the unix philosophy and some don’t, and it’s not surprising that those who write GNU software and maintain wikis for GNU software are in the latter camp.

                                                    your proposal for a decomposed ls sounds more unixy in some ways. but there are still practical reasons not to do it, such as performance and not cluttering the standard command lexicon with ls variants (plan 9 has ls and lc; maybe adding lt, lr, lu, etc. would be too many names just for listing files). it’s a subtle point in unix philosophy to know when departing from one principle is better for the overall simplicity of the system.

                                              2. 25

                                                With all due respect[1], did your own comment hit your fingerprint detector? Because it should. It’s extrapolating wildly from one personal anecdote[2], and insulting a broad category of people without showing any actual examples[3]. Calling people “markov chains” is fun in the instant you write it, but contributes to the general sludge of ad hominem dehumanization. All your upvoters should be ashamed.

                                                [1] SystemD arouses strong passions, and I don’t want this thread to devolve. I’m pointing out that you’re starting it off on the wrong foot. But I’m done here and won’t be responding to any more name-calling.

                                                [2] Because God knows, there’s tons of badly designed software out there that has given people great experiences in the short term. Design usually matters in the long term. Using something for the first time is unlikely to tell you anything beyond that somebody peephole-optimized the UX. UX is certainly important, rare and useful in its own right. But it’s a distinct activity.

                                                [3] I’d particularly appreciate a link to NeoVim criticism for being anti-Unix. Were they similarly criticizing Vim?

                                                1. 9

                                                  [3] I’d particularly appreciate a link to NeoVim criticism for being anti-Unix. Were they similarly criticizing Vim?

                                                  Yes, when VIM incorporated a terminal. Which is explicitly against its design goals. From the VIM 7.4 :help design-not

                                                  VIM IS... NOT                                           *design-not*
                                                  
                                                  - Vim is not a shell or an Operating System.  You will not be able to run a
                                                    shell inside Vim or use it to control a debugger.  This should work the
                                                    other way around: Use Vim as a component from a shell or in an IDE.
                                                    A satirical way to say this: "Unlike Emacs, Vim does not attempt to include
                                                    everything but the kitchen sink, but some people say that you can clean one
                                                    with it.  ;-)"
                                                  

                                                  Neo-VIM appears to acknowledge their departure from VIM’s initial design as their :help design-not has been trimmed and only reads:

                                                  NVIM IS... NOT                                          design-not
                                                  
                                                  Nvim is not an operating system; instead it should be composed with other
                                                  tools or hosted as a component. Marvim once said: "Unlike Emacs, Nvim does not
                                                  include the kitchen sink... but it's good for plumbing."
                                                  

                                                  Now as a primarily Emacs user I see nothing wrong with not following the UNIX philosophy, but it is clear that NeoVIM has pushed away from that direction. And because that direction was against their initial design, it is reasonable for users who liked the initial design to criticise NeoVIM for moving further away from the UNIX philosophy.

                                                  Not that VIM hadn’t already become something more than ‘just edit text’: take quickfix, for example. A better example of how an editor can solve the same problem by adhering to the Unix Philosophy of composition through text processing would be Acme. Check out Acme’s alternative to quickfix https://youtu.be/dP1xVpMPn8M?t=551

                                                  1. 0

                                                    akkartik, which part of my comment did you identify with? :) FWIW, I’m fond of you personally.

                                                    I’d particularly appreciate a link to NeoVim criticism for being anti-Unix

                                                    Every single Hacker News thread about Neovim.

                                                    Were they similarly criticizing Vim?

                                                    Not until I reply as such–and the response is hem-and-haw.

                                                    1. 9

                                                      To be fair I don’t think the hacker news hive mind is a good judge of anything besides what is currently flavour of the week.

                                                      Just yesterday I had a comment not just downvoted but flagged and hidden-by-default, because I suggested Electron is a worse option than a web app.

                                                      HN is basically twitter on Opposite Day: far too happy to remove any idea even vaguely outside what the group considers “acceptable”.

                                                      1. 4

                                                        Indeed, I appreciate your comments as well in general. I wasn’t personally insulted, FWIW. But this is precisely the sort of thing I’m talking about, the assumption that someone pushing back must have their identity wrapped up in the subject. Does our community a disservice.

                                                        1. 0

                                                          OTOH, I spent way too much of my life taking the FUD seriously. The mantra-parroting drive-by comments that are common in much of the anti-systemd and anti-foo threads should be pushed back on, not given a thoughtful audience.

                                                          1. 2

                                                            Totally fair. Can you point at any examples?

                                                            1. 3

                                                              https://news.ycombinator.com/item?id=7289935

                                                              The old Unix ways are dying… … Vim is, in the spirit of Unix, a single purpose tool: it edits text.

                                                              https://news.ycombinator.com/item?id=10412860

                                                              thinks that anything that is too old clearly has some damage and its no longer good technology, like the neovim crowd

                                                              Also, just search for “vim unix philosophy” and you’ll invariably find tons of imaginary nonsense:

                                                              https://hn.algolia.com/?query=vim%20unix%20philosophy&sort=byPopularity&prefix&page=0&dateRange=all&type=comment

                                                              Please don’t make me search /r/vim :D

                                                              1. 4

                                                                thinks that anything that is too old clearly has some damage and its no longer good technology, like the neovim crowd

                                                                That’s not saying that neovim is ‘anti-Unix philosophy’, it’s saying that neovim is an example of a general pattern of people rewriting and redesigning old things that work perfectly well on the basis that there must be something wrong with anything that’s old.

                                                                Which is indeed a general pattern.

                                                                1. 1

                                                                  That’s not saying that neovim is ‘anti-Unix philosophy’

                                                                  It’s an example of (unfounded) fear, uncertainty, and doubt.

                                                                  rewriting and redesigning old things that work perfectly well on the basis that there must be something wrong with anything that’s old.

                                                                  That’s a problem that exists, but attaching it to project X out of habit, without justification, is the pattern I’m complaining about. In Neovim’s case it’s completely unfounded and doesn’t even make sense.

                                                                  1. 1

                                                                    It’s not unfounded. It’s pretty obvious that many of the people advocating neovim are doing so precisely because they think ‘new’ and ‘modern’ are things that precisely measure the quality of software. They’re the same people that change which Javascript framework they’re using every 6 weeks. They’re not a stereotype, they’re actual human beings that actually hold these views.

                                                                    1. 2

                                                                      Partial rewrite is one of the fastest ways to hand off software maintainership, though. And vim needed a broader maintainer/developer community.

                                                                      1. 0

                                                                        Vim’s maintainer/developer community is more than sufficient. It’s a highly extensible text editor. Virtually anything can be done with plugins. You don’t need core editor changes very often if at all, especially now that the async stuff is in there.

                                                                        1. 3

                                                                          You don’t need core editor changes very often if at all, especially now that the async stuff is in there.

                                                                          Which required pressure from NeoVim, if I understood the situation correctly. Vim is basically a one-man show.

                                                                2. 2

                                                                Thanks :) My attitude is to skip past crap drive-by comments as beneath notice (or linking). But I interpreted you to be saying there was FUD (about SystemD) that you ended up taking seriously? Any of those would be interesting to see if you happen to have them handy, but no worries if not.

                                                                  Glad to have you back in the pro-Neovim (which is not necessarily anti-Vim) camp!

                                                      2. 20

                                                        What is FUD is this sort of comment: the classic combination of comparing systemd to the worst possible alternative instead of the best actual alternative, together with basically claiming that everyone who disagrees with you is a ‘slashdot markov chain’ or similar idiotic crap.

                                                        On the first point, there are lots of alternatives to sysvinit that aren’t systemd. Lots and lots and lots. Some of them are crap, some are great. systemd doesn’t have a right to be compared only to what it replaced, but also all the other things that could have replaced sysvinit.

                                                        On the second point, it’s just bloody rude. But it also shows you don’t really understand what people are saying. ‘I think [xyz] violates the unix philosophy’ is not meaningless. People aren’t saying it for fun. They’re saying it because they think it’s true, and that it’s a bad thing. If you don’t have a good argument for why the Unix philosophy doesn’t matter, or you think systemd doesn’t actually violate it, please go ahead and explain that. But I’ve never actually seen either of those arguments. The response to ‘it violates the Unix philosophy’ is always just ‘shut up slashdotter’. Same kind of comment you get when you say anything that goes against the proggit/hn hivemind that has now decided amongst other things that: microsoft is amazing, google is horrible, MIT-style licenses are perfect, GPL-style licenses are the devil-incarnate, statically typed languages are perfect, dynamically typed languages are evil, wayland is wonderful, x11 is terrible, etc.

                                                        1. 8

                                                          claiming everyone that disagrees with you is a ‘slashdot markov chain’ or similar idiotic crap

                                                          My claim is about the thoughtless shoveling of groundless rumors. Also I don’t think my quip was idiotic.

                                                          there are lots of alternatives to sysvinit that aren’t systemd

                                                          That’s fine, I never disparaged alternatives. I said: systemd is good and I’m annoyed that the grumblers said it wasn’t.

                                                          1. 2

                                                            It’s not good though, for all the reasons that have been said. ‘Better than what you had before’ and ‘good’ aren’t the same thing.

                                                            1. 1

                                                              seriously. If you don’t like systemd, use something else and promote its benefits. Tired of all the talking down of systemd. It made my life so much easier.

                                                              1. 1

                                                                seriously. If you like systemd, use it and shut up about it. Tired of all the talking up of systemd as if it’s actually any better than its alternatives, when it is objectively worse, and is poorly managed by nasty people.

                                                                1. 4

                                                                  Have you watched the video this thread is about? Because you really sound like the kind of dogmatist the presenter is talking about.

                                                                  If you like systemd, use it and shut up about it

                                                                  Also, isn’t this a double-standard? When it comes to complaining about systemd, this attitude doesn’t seem that prevalent.

                                                                  1. 2

                                                                    No, because no other tool threatens the ecosystem like systemd does.

                                                                    Analogy: it wasn’t a double-standard 10 years ago to complain about Windows and say ‘if you like Windows, use it and shut up about it’.

                                                                    1. 3

                                                                      I see this kind of vague criticism a lot when it comes to systemd. What ecosystem is it really breaking? It’s all still open source; there aren’t any proprietary protocols or corporate patents that prevent people from modifying the software to not have to rely on systemd. This “threat”, the way I see it, has turned out to be at most a “minor inconvenience”.

                                                                      I suppose you’re thinking about examples like GNOME, but on the one hand, GNOME isn’t a unix-dogmatist project; they aim instead to create an integrated desktop experience, consciously trading ideal modularity for that. And on the other hand, projects like OpenBSD have managed to strip out what required systemd and still have a working desktop environment. Most other examples I know of follow a similar pattern.

                                                        2. 6

                                                          I think that the problem is fanboyism, echo chambers and ideologies.

                                                          I might be wrong, so please don’t consider this an accusation. But what you write sounds like someone who heard that systemd is bad, never looked at it themselves, and simply copied that opinion. Then they try it and find out that the baseless prejudices were in fact baseless.

                                                          After that, the assumption is that everyone else must have been doing the same, and that one is now enlightened to see it’s actually really cool.

                                                          I think that this group behavior and blindly copying opinions is one of the worst things in IT these days, even though of course it’s not limited to this field.

                                                          A lot of people criticizing systemd actually looked at systemd, really deep, maybe even built stuff on it, or at least worked with it in production as sysadmin/devop/sre/…

                                                          Yes, I have used systemd; yes, I understand why decisions were taken and where the authors of the software were going; I have read the specs of the various parts (journald, for example), etc.

                                                          I think I have a pretty good understanding compared to at least most people who have only seen it from a user’s perspective (counting writing unit files as the user’s perspective as well).

                                                          So I could write about that in my CV and be happy that I can answer a lot of questions regarding systemd, advocate its usage to create more demand and be happy.

                                                          To sum it up: I still consider systemd to be bad on multiple layers, both the implementation and some ideas that I initially considered great but that, through using it, turned out to rest on wrong assumptions. That, by the way, is not something I would blame anyone for. It’s good that things get tried; that’s how research works. It’s neither the first nor the last project that sounds good at the outset, only for us to find out that many of its ideas either don’t make a difference or make things worse.

                                                          I am a critic of systemd, but I agree that there’s a lot of FUD as well, especially from people who blame everything, including their own incompetence, on systemd. Nobody should ever expect a new project to be a magic bullet; that’s just dumb, and I would never blame systemd for trying a different approach or for not being perfect. However, I think it has problems on many levels. While the implementation isn’t really good, that’s something that can be fixed; some parts of the concept, though, are either pretty bad or have turned out to be bad decisions.

                                                          I was well aware that the implementation was bad, especially in the beginning. A lot has gotten better; that’s to be expected. But beyond the various design decisions I consider bad, I think many more were based on ideas that sound good and reasonable to most people in IT, yet in the specific scenarios where systemd is used they, at least in my experience, either don’t work out at all or only work well in very basic cases.

                                                          In other words, systemd really shines in the cases where other solutions work suboptimally but aren’t considered a problem worth fixing, because the added complexity wouldn’t be worth it. However, when something is genuinely more complex, I think using systemd frequently turns out to be an even worse solution.

                                                          While I don’t want to go into detail, because I don’t think this is the right format for an actual analysis, I think systemd has a lot in common here with both configuration management and JavaScript frameworks. They tend to be amazing for simple use cases (todo applications, for example), but combined with various other complexities they often make things unnecessarily complicated.

                                                          And just like with JavaScript frameworks and configuration management there’s a lot of FUD, ideologies, echochambers, following the opinion of some thought leaders, and very little building your own solid opinion.

                                                          Long story short: if you criticize something without knowing what it is about, then yes, that’s dumb and likely FUD. However, assuming that’s the only possible reason someone could criticize the software is similarly dumb, and often FUD in the other direction.

                                                          This, by the way, also works in reverse. I frequently see people liking software and echoing favorable statements for the same reasons: not understanding what they say, just copying sentences from opinion leaders, etc.

                                                          It’s the same pattern, just reversed: positive instead of negative.

                                                          The problem isn’t someone disliking or liking something, but that opinions and thoughts are repeated without understanding, which makes it hard to have discussions and arguments that give either side any valuable insights.

                                                          Then things also get personal. People hate on Poettering and think he is dumb, and Poettering thinks every critic is dumb, just because that’s a lot of what you see when every statement is blindly echoed.

                                                          1. 1

                                                            That’s nice, but the implication of the anti-systemd chorus was that sys v init was good enough. Not all of these other “reasonable objections” that people are breathless to mention.

                                                            The timbre reminded me of people who say autotools is preferable to cmake: people making a lot of noise about irrelevant details and ignoring the net gain.

                                                            But what you write sounds like someone who heard that systemd is bad, never looked at it themselves, and simply copied that opinion.

                                                            No, I’m reacting to the idea that the systemd controversy took up any space in my mind at all. It’s good software. It doesn’t matter if X or Y is technically better; the popular narrative was that systemd is a negative thing, a net loss.

                                                            1. 2

                                                              In your opinion it’s good software and you summed up the “anti-systemd camp” with “sys v init was good enough” even though people from said “anti-systemd camp” on this very thread disagreed that that was their point.

                                                              To give you an entirely different point of view, I’m surprised you don’t want to know anything about a key piece of flagship server operating systems (granting that any one distro is technically an OS), affecting the entire ecosystem and unrelated OSes (BSDs etc.), that majorly affects administration and development on Linux-based systems. Especially when people have said there are clear technical reasons for disliking the major change and the forced compliance with “the new way”.

                                                              1. 2

                                                                you summed up the “anti-systemd camp” with “sys v init was good enough” even though people from said “anti-systemd camp” on this very thread disagreed that that was their point.

                                                                Even in this very thread no one has actually named a preferred alternative. I suspect they don’t want to be dragged into a discussion of details :)

                                                                affecting the entire ecosystem and unrelated OSes (BSDs etc.)

                                                                BSDs would be a great forum for demonstrating the alternatives to systemd.

                                                                1. 2

                                                                  Well, considering how many features that suite of software has picked up, there isn’t currently one so that shortens the conversation :)

                                                                  launchd is sort of a UNIX alternative too, but it’s currently running only on MacOS and it recently went closed source.

                                                          2. 3

                                                            It violates unix philosophy

                                                            That accusation was also made against neovim. The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about.

                                                            i don’t follow your reasoning. why is it relevant that people also think neovim violates the unix philosophy? are you saying that neovim conforms to the unix philosophy, and therefore people who say it doesn’t must not know what they’re talking about?

                                                            1. 1

                                                              are you saying that neovim conforms to the unix philosophy, and therefore people who say it doesn’t must not know what they’re talking about?

                                                              When the implication is that Vim better aligns with the unix philosophy, yes, anyone who avers that doesn’t know what they’re talking about. “Unix philosophy” was never a goal of Vim (”:help design-not” was strongly worded to that effect until last year, though it was never true anyway), and claiming otherwise shows a deep lack of familiarity with Vim’s features.

                                                              Some people likewise speak of a mythical “Vim way” which again means basically nothing. But that’s a different topic.

                                                              1. 1

                                                                vim does have fewer features which can be handled by other tools though right? not that vim is particularly unixy, but we’re talking degrees

                                                            2. 1

                                                              The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about

                                                              I’ll bookmark this comment just for this description.

                                                            1. 4

                                                              After spending a few months with Forth earlier this year, I absolutely agree that Forth can be extraordinarily simple and compact, that mainstream software is an endless brittle tower of abstractions, and that the aha moment when you find the right abstractions of a problem can be transcendent. But writings like this also indicate limitations that the Forth community unquestioningly accepts.

                                                              Forth is quite “individualistic”, tailored to single persons or small groups of programmers.. nobody says that code can’t be shared, that one should not learn to understand other people’s code or that designs should be hoarded by lone rangers. But we should understand that in many cases it is a single programmer or very small team that does the main design and implementation work.

                                                              This is fine. However, the next step is not:

                                                              once it becomes infeasible for a single person to rewrite the core functionality from scratch, it is dead. The ideal is: you write it, you understand it, you maintain and change it, you rewrite it as often as necessary, you throw it away and do something else.

                                                              I’ll suggest an alternative meaning of “dead”: when it stops being used. By this definition, most Forth programs are dead. (duck) More seriously, it is abuse of privilege to claim some software is dead just because it’s hard to modify. If people are using it, it is providing value.

                                                              It is the fundamental property and fate of all software to outlive its creator. The mainstream approach, for all of its many problems, allows software to continue to serve its users long after the original authors leave the scene. They decay, yes, but in some halting, limping fashion they continue to work for a long time. It’s worth acknowledging the value of this longevity. Any serious attempt to replace mainstream software must design for longevity. That requires improving on our ability to comprehend each other’s creations. And Forth (just like Scheme and Prolog) doesn’t really improve things much here. Even though ‘understanding’ is mentioned above, it is in passing and clearly not a priority. Even insightful Forth programs can take long periods of work to appreciate. If a tree has value in the forest but nobody can appreciate it, does it really have value? I believe comprehensibility is the last missing piece that will help Forth take over the world. Though it may have to change beyond recognition in the process.

                                                              (This comment further develops themes I wrote about earlier this year. Lately I’ve been working on more ergonomic machine code, adding only the minimum syntax necessary to improve checking of programs, adding guardrails to help other programmers comprehend somebody else’s creation. Extremely rudimentary, highly speculative, very much experimental.)

                                                              1. 5

                                                                I’ll suggest an alternative meaning of “dead”: when it stops being used. By this definition, most Forth programs are dead. (duck) More seriously, it is abuse of privilege to claim some software is dead just because it’s hard to modify. If people are using it, it is providing value.

                                                                We ought to distinguish dead-like-a-tree from dead-like-a-duck. A dead tree still stands there and you can probably even put a swing on it & use it for another 15-20 years, but it’s no longer changing in response to the weather. A dead duck isn’t useful for much of anything, and if you don’t eat it real quick or otherwise get rid of it, it’s liable to stink up the whole place.

                                                                A piece of code that is actively used but no longer actively developed is dead-like-a-tree: it’s more or less safe but it has no capacity for regeneration or new growth, and if you make a hole in it, that hole is permanent. Once the termites come (once it ceases to fit current requirements or a major vulnerability is discovered) it becomes dead-like-a-duck: useless at best and probably also a liability.

                                                              1. 7

                                                                The other minor frustration I have with Factor is the fact that any file that does much work tends to accrete quite a few imports in its USING: line (the calculator I wrote has 23 imports across 4 lines, even though it’s only 100 lines of code). This is mostly due to a lot of various Factor systems being split into quite a few vocabularies. I could see this being helpful with compilation

                                                                Chatting with @yumaikas about this, we ended up working on a little Vim keyboard macro that lets us type out a module name anywhere in a file and move it into the USING: block at the top. The version I ended up with:

                                                                " add word at cursor to final line of imports
                                                                noremap <buffer> <Leader>i diWmz?^USING:<CR>/^;/-1<CR>$a<Space><Esc>p'z
                                                                " add word at cursor to new line of imports (while preserving existing indentation; that's what the '%<Backspace>' is for)
                                                                noremap <buffer> <Leader>I diWmz?^USING:<CR>/^;/-1<CR>o%<Backspace><Esc>p'z
                                                                
                                                                1. 8

                                                                  Speaking as a C programmer, this is a great tour of all the worst parts of C. No destructors, no generics, the preprocessor, conditional compilation, check, check, check. It just needs a section on autoconf to round things out.

                                                                  It is often easier, and even more correct, to just create a macro which repeats the code for you.

                                                                  A macro can be more correct?! This is new to me.

                                                                  Perhaps the overhead of the abstract structure is also unacceptable..

                                                                  Number of times this is likely to happen to you: exactly zero.

                                                                  C function signatures are simple and easy to understand.

                                                                  It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it, and so has certain lifetime expectations of it. Not one single piece of documentation I’ve seen in the last 5 years mentions this fact.

                                                                  1. 4

                                                                    It once took me 3 months of noodling on a simple http server to realize that bind() saves the pointer you pass into it

                                                                    Which system? I’m pretty sure OpenBSD doesn’t.

                                                                    https://github.com/openbsd/src/blob/4a4dc3ea4c4158dccd297c17b5ac5a6ff2af5515/sys/kern/uipc_syscalls.c#L200

                                                                    https://github.com/openbsd/src/blob/4a4dc3ea4c4158dccd297c17b5ac5a6ff2af5515/sys/kern/uipc_syscalls.c#L1156

                                                                    1. 2

                                                                      Linux (that’s the manpage I linked to above). This was before I discovered OpenBSD.

                                                                      Edit: I may be misremembering and maybe it was connect() that was the problem. It too seems fine on OpenBSD. Here’s my original eureka moment from 2011: https://github.com/akkartik/wart/commit/43366d75fbfe1. I know it’s not specific to that project because @smalina and I tried it again with a simple C program in 2016. Again on Linux.

                                                                        1. 1

                                                                          Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.

                                                                          I’ll dig up a simple test program later today.

                                                                          1. 2

                                                                            Notice that I didn’t implicate the kernel in my original comment, I responded to a statement about C signatures. We’d need to dig into libc for this, I think.

                                                                            bind and connect are syscalls; libc would only have a stub doing the syscall, if anything at all, since they are not part of the C standard library.

                                                                    2. 2

                                                                      Perhaps the overhead of the abstract structure is also unacceptable..

                                                                      Number of times this is likely to happen to you: exactly zero.

                                                                      I have to worry about my embedded C code being too big for the stack as it is.

                                                                      1. 1

                                                                        Certainly. But is the author concerned with embedded programming? He seems to be speaking of “systems programming” in general.

                                                                        Also, I interpreted that section as being about time overhead (since he’s talking about the optimizer eliminating it). Even in embedded situations, have you lately found the time overheads concerning?

                                                                        1. 5

                                                                          I work with 8-bit AVR MCUs. I often found myself having to cut corners and avoid certain abstractions, because that would have resulted either in larger or slower binaries, or would have used significantly more RAM. On an Atmega32U4, resources are very limited.

                                                                      2. 1

                                                                        Perhaps the overhead of the abstract structure is also unacceptable..

                                                                        Number of times this is likely to happen to you: exactly zero.

                                                                        Many times, actually. I see FSM_TIME. Hmm … seconds? Milliseconds? No indication of the unit. And what is FSM_TIME? Oh … it’s SYS_TIME. How cute. How is that defined? Oh, it depends upon operating system and the program being compiled. Lovely abstraction there. And I’m still trying to figure out the whole FSM abstraction (which stands for “Finite State Machine”). It’s bad enough to see a function written as:

                                                                        static FSM_STATE(state_foobar)
                                                                        {
                                                                        ...
                                                                        }
                                                                        

                                                                        and then wondering where the hell the variable context is defined! (a clue—it’s in the FSM_STATE() macro).
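
                                                                        For what it’s worth, here is a minimal hypothetical sketch of how such a macro might be written; the struct, its field, and the return type are my own assumptions for illustration, not the actual code being discussed. It shows why “context” appears to come out of nowhere inside each state function:

                                                                        /* Hypothetical sketch of an FSM_STATE-style macro (names assumed
                                                                           for illustration). The macro smuggles in a parameter named
                                                                           "context", so the variable looks undeclared inside each state. */
                                                                        struct fsm_context {
                                                                            int event;   /* whatever the state machine tracks */
                                                                        };
                                                                        
                                                                        #define FSM_STATE(name) int name(struct fsm_context *context)
                                                                        
                                                                        /* expands to: static int state_foobar(struct fsm_context *context) */
                                                                        static FSM_STATE(state_foobar)
                                                                        {
                                                                            return context->event;   /* "context" exists only because the macro declared it */
                                                                        }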

                                                                        And that bind() issue is really puzzling, since that haven’t been my experience at all, and I work with Linux, Solaris, and Mac OS-X currently.

                                                                        1. 1

                                                                          I agree that excessive abstractions can hinder understanding. I’ve said this before myself: https://news.ycombinator.com/item?id=13570092. But OP is talking about performance overhead.

                                                                          I’m still trying to reproduce the bind() issue. Of course when I want it to fail it doesn’t.

                                                                      1. 1

                                                                        @PuercoPop you got me to try playing with vc-annotate-file, but when I try something simple like C-x v = it says “File ___ is not under version control”. Strange that it doesn’t prompt me to configure it. Any idea where it may be caching this stuff? I have no .emacs (normally a Vim user).

                                                                        1. 1

                                                                          With no configuration, you should go to the file in question, then enter C-x v g.

                                                                          C-x v g Display an annotated version of the current file: for each line, show the latest revision in which it was modified (vc-annotate).

                                                                          From there you can navigate the history with ease using the single-letter actions listed later in the manual.

                                                                          1. 1

                                                                            Thanks. My stock emacs on Mac was staler than I thought. Worked out of the box after an upgrade. (Folks on #lobsters helped me figure it out.)

                                                                          1. 6

                                                                            Also Sortix.

                                                                          1. 15

                                                                            The author wants to use the tool for tasks it is not designed for, or does not know how to use the tool correctly.

                                                                            Passing on “Distributed version control sucks for distributing software”, which is just nonsense.

                                                                            1. Distributed version control sucks for distributed development

                                                                              The problem shows up when I’m sitting in my hotel room and need to re-create the local repository over the poor connection. Now I’m not just downloading the one revision I want to work on; I’m downloading every revision ever.

                                                                              Nothing forces you to do that in git; just download the one revision you want to work on (see the shallow-clone example after this list). This point also reinforces the idea that the author is certainly not working in industry (academic setting) and seems to have no idea of the more advanced features needed for developing complex systems collaboratively (with several products/features, and stable branches into which new features have to be backported).

                                                                            2. Distributed version control sucks for long-lived projects

                                                                              The history continues to grow; a single version doesn’t. This may be a smooth progression rather than a sudden state change: over time it becomes more the case that the history grows faster than the current version. And so a system that forces every copy to contain all of history will eventually, inevitably, have bigger copies than a system that only stores current versions.

                                                                              And then the author rants about DVCS sucking for archiving, and sees absolutely no contradiction between those two positions. If you forget your history because you only keep current versions, you are not archiving anything. It becomes impossible to reproduce a past version of a system, whether for historical purposes, for exploration, or just to help a user stuck with an old system.

                                                                            3. Distributed version control sucks for archiving

                                                                              Use a database.
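
                                                                            Regarding point 1 above, a minimal example of fetching only the revision you need instead of every revision ever (the repository URL is a placeholder; the flags are standard git options):

                                                                            # Shallow clone: fetch only the most recent commit, not the full history.
                                                                            git clone --depth 1 https://example.com/project.git
                                                                            
                                                                            # If more history turns out to be needed later, deepen the clone on demand.
                                                                            git fetch --deepen 100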

                                                                            The author is just closed off in his own environment and, on top of that, has a poor grasp of the tools he is using. This rant is useless, and his peers were right to shut him down.

                                                                            1. 2

                                                                              Sometimes it’s easier to rant than to learn new things :p

                                                                              1. 1

                                                                                I didn’t like it either, but you’re strawmanning one criticism. I think OP’s claim is that a centralized repo is easier to archive because everything stays on a primary copy, with people subsetting into secondary copies. So it’s clear what to back up. There’s no contradiction there.

                                                                              1. 4

                                                                                Another flaw in addition to what other commenters have pointed out.

                                                                                Although these issues could be mitigated in theory, that is not done in practice.

                                                                                But then you’re just criticizing git and how it’s used in practice, right? Why over-generalize to all of distributed version control?


                                                                                The GitHub thing also makes no sense. Git makes no assumption about the master repo, but nobody claims you don’t need one. GitHub filling the gap is precisely what the system was designed to enable: separation of concerns.

                                                                                1. 4

                                                                                  The comments are great too:

                                                                                  “That second paragraph is labeled as a “Note” which means that it is non-normative (informational) and do not contain requirements so they are not binding on the implementation.”

                                                                                  “Interesting it looks like C89 did not have notes, but a lot of content was moved to notes in C99.”

                                                                                  1. 16

                                                                                    Thanks for reporting this. There is a bug tracking this https://bugzilla.mozilla.org/show_bug.cgi?id=1472948

                                                                                    Update: The offending extension has now been removed! Thanks to Mozilla for the speedy response.

                                                                                    1. 2

                                                                                      Hopefully they’re also hardening their review policies.

                                                                                      1. 2

                                                                                        I found some posts from around the time the “analytics” code was originally introduced, mentioning that it only applied to the Chrome version and not the Firefox one. I’d be surprised if this did actually make it through addons.mozilla.org’s review process.