Threads for sebastian

    1. 7

      There is literally no way I could remember even 10% of these… I have one alias, gst for git status because I type it a lot and the word status is annoying to type (mostly left hand). I’d be curious to know how many aliases people actually remember to use.

      1. 2

        I pretty much exclusively use git with aliases, all of which are g followed by two letters:

        It definitely depends on how used to git without aliases you are. I started using aliases from the very beginning so these are all part of my muscle memory, but it’s definitely not something I’d recommend to someone who’s already comfortable using git. Or at the very least, the aliases should be created incrementally for commands that you find yourself using frequently, not memorized all at once.
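
        The commenter’s actual two-letter aliases aren’t listed, so as a purely hypothetical sketch of the incremental approach, a ~/.gitconfig fragment might look like this (alias names invented for illustration):

        ```
        [alias]
            st = status
            co = checkout
            cm = commit
        ```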

    2. 5

      I resonated with the part where you brought up how refusing to use Discord hurt your social life with no meaningful benefit. I too haven’t used Discord in, what, four years now? And I feel the effects of it. I’ve lost many friends and connections, I’m locked out of many communities that I otherwise would love to be a part of, and maintaining relationships is significantly more difficult than it would be otherwise. Part of me wants to say that I don’t regret it, since I’m standing up for my values, but in a much more real sense I kind of do regret deleting Discord, and the only reason I continue to not use it is because I’ve already committed to not using it for so long (sunk cost fallacy, I suppose).

      You’re right; most people simply don’t care about the free software movement or privacy, and their lives are easier because of it. I can’t use a phone app to do my laundry on campus since my phone doesn’t have Google Play Services, which the app requires. I have to lug around quarters and use those instead. In a perfect world this wouldn’t be an issue, or if it were, it would be a cause with significant momentum behind it, making it worth fighting for. I struggle to see how free software can win now, as difficult as it is to admit.

      1. 6

        Part of me wants to say that I don’t regret it, since I’m standing up for my values, but in a much more real sense I kind of do regret deleting Discord, and the only reason I continue to not use it is because I’ve already committed to not using it for so long (sunk cost fallacy, I suppose).

        It sounds like software freedom is something you have as a value but are treating as a principle. A value is something like “free software is good and should be encouraged”, a principle is something like “I should not use unfree software.” It’s okay to do things that aren’t aligned with your values if it makes your life meaningfully better, and it puts you in a better position to align the rest of society with your values. Doing things that are against your principles is morally wrong.

        What I’m saying is: do you think worse of people who use a laundry app? If not, maybe it’s okay for you to use it too and save your time and mental energy fighting for things that actually matter to free software at large.

    3. 2

      Am I missing something, or do none of your layers include a semicolon?

      Even on days where my programming is python-only, I’m bound to use one or two in prose. When my programming is not python-only, I frequently need at least one per line. How do you avoid them?

      1. 2

        The semicolon is on the top right of the base layer.

        1. 1

          Bah. I just did not read that as a semicolon on the diagram. Thanks.

    4. 17

      This is a nit not actually related to the content (which was very interesting!) but… I wish this author did not use “obviously” so much. “Everybody knows that JPEGs are 8x8…” actually, no. Even among programmers that’s relatively specialized knowledge, not to mention non-technical folks.

      Again, this was actually very interesting, and I’m glad to have learned something! But it made me feel ignorant instead of excited to learn, and that’s a bummer in my book.

      1. 5

        These tweets would be far more interesting to people with passing knowledge of image formats, such as myself, if they included the “obvious” things that “everyone” already knows. Because I don’t!

        1. 1

          Hah, I thought you were going to link to Ten Thousand. But that one’s good too :P

      2. 1

        Interesting, the same line made me go “huh, that makes some sense” and I moved on.

    5. 2

      I am increasingly interested in “improved C languages” [1], and I like the direction this one is taking.

      It is still at an early design stage, but it seems very interesting (to me).

      [1]: I once saw a video in which @andrewrk said his motivation for creating Zig was, if I recall correctly, “to fiddle with C so it could be improved by removing the weird (and undefined) behaviours”. That message hit me hard, and since then I have been thinking about (and searching for) such a language.

      1. 1

        Was this the video you were thinking of?

        1. 1

          No, it wasn’t. It was this one.

    6. 4

      There’s not enough information in the readme to understand what services it’s using without digging through the code. I’ve never heard of SimplyTranslate; would love an alternative to Google Translate. Their website seems like it’s all paid services, although there is this Firefox extension:

      Is there a web interface similar to

      Invidious and Nitter are obvious (I was running my own Invidious, but it was slow and had frequent timeouts; and since your subscriptions are easy to download/upload, switching to a public instance like makes a lot of sense).

      I’m guessing for search redirects it’s using DuckDuckGo or a Searx instance. What is it using for Reddit? Is there a Nitter-like interface for Reddit?

      1. 2

        There’s not enough information in the readme to understand what services it’s using without digging through the code.

        This extension is a fork of Privacy Redirect, which has a lot more information in its README.

    7. 7

      I decided not to tag this as c or dotnet, because the main focus of this video is on how the UNIX philosophy implements modern features and practices in a very different, much simpler way than is typical today. The use of C# as the “modern” example isn’t really the main focus, and the described features of C arise from following the UNIX philosophy: gluing together small, confined processes using shell scripts. So the behavior isn’t really limited to just C (though C was designed with this idea in mind more so than most other languages, hence the plt tag).

    8. 5

      Wait, there’s WebDAV based email?

      1. 5

        This K-9 mail documentation page on configuring incoming WebDAV server settings seems to shed some light on this. It looks like Microsoft Exchange servers pre-2010 supported WebDAV email, but the feature has been deprecated ever since.

        1. 2

          Huh, that must be the “HTTP” option for mail server types I saw in old Outlook/OE back in the day. Exchange has supported EAS since 2003 though, so it’d be an odd choice to support.

          Well, there’s also EWS, which is still supported and used by Mac Outlook and Evolution, so…

    9. 5

      Oh, and let’s not forget “long long” could mean 72-bit on your 9-bit-byte system, of course. That’s why stdint.h exists… except, oh, C89.

      That’s a very important point that this post makes that seems to be overlooked. Simply put, it means that if your program depends on an integer with a known size, it is impossible to correctly write said code in pure C89. Whenever I write C, I always assume that headers like stdint.h are available, even if they technically aren’t compliant with C89.
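
      As a sketch of the classic pre-stdint.h workaround (the type name u32 is mine, not from the post): you can pick a fixed-width type by testing limits.h constants at preprocessing time, which is valid C89 and fails at compile time, rather than misbehaving at run time, when no matching type exists.

      ```c
      #include <limits.h>

      /* Select an unsigned 32-bit type using only C89 facilities.
         If no suitable type exists, compilation stops at the #error. */
      #if UINT_MAX == 0xFFFFFFFF
      typedef unsigned int u32;
      #elif ULONG_MAX == 0xFFFFFFFF
      typedef unsigned long u32;
      #else
      #error "no 32-bit unsigned type found on this platform"
      #endif
      ```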

    10. 4

      Why use C89 in 2021? That’s a 32-year-old standard. We have C17 now.

      1. 9

        Especially with C89, you have a huge variety of compilers available, allowing you to run your code on nearly every architecture, as well as check your code for maximum standard compliance (some compilers are more liberal than others).

        With any C standard that is >= C99 you are effectively forced to use clang or gcc.

        1. 4

          Can you give an example of an architecture that is only supported by a C89 compiler?

          1. 3

            MS Visual C++ only began to add C99 support in Visual Studio 2013, and I’m not sure they support anything newer. So you’re no longer limited to C89 for Windows code these days, but there’s a long tradition of “keep your expectations very low if you want to write C and be portable to the most popular desktop OS”.

            1. 3

              According to this blog post they were working on C11 and C17 support last year. I don’t know how far they are with things they listed as missing.

      2. 4

        Later versions of the C standard are a lot less portable and a lot more complex. I use C89 when I want to write software that I know will be portable across many different machines and will work with nearly any C compiler. IMO, it doesn’t really make sense to target anything later than C11; C17 doesn’t make enough notable and useful changes to warrant using it.

      3. 1

        Some old code bases, especially in the embedded space, are still written for it.

    11. 2

      Const is such a useful concept in C++ but nobody is seriously suggesting changing the default now. If I was put in charge of designing a language today I would break it down like this:

      • local variables - not const by default. They are variables - they vary.

      • function parameters - const by default (references and pointers could still point to non-const data). Maybe even always const, it simplifies ownership rules.

      • globals/statics - const by default

      • instance members - const by default except maybe in structs

      • instance methods - const by default

      1. 6

        local variables - not const by default. They are variables - they vary.

        FWIW there are a couple reasons why I disagree with this:

        A lot of times, variables don’t vary, they’re just used to hold intermediate computations:

        let foo = bar.baz.quux(stuff + other(73));
        let another = foo.something(foo.other, foo.yet_another);
        do_something_with(another, and_another);

        Clearly, nothing varies here.

        Second, in mathematics, they’re still called variables even though they don’t vary: something like x = f(x) means that x is defined as a fixed point of f, rather than what it means in programming. (Haskell and similar languages take this approach.)

        1. 2

          Second, in mathematics, they’re still called variables even though they don’t vary

          I was being flippant but I still think that (in general) local variables should be mutable by default. The alternative is sometimes creating a bunch of named variables for little reason when all you wanted to do was combine them.

          As an aside, I find it fascinating that people educated in different disciplines have very different ideas on what makes clear code. I come from a computer science background with a history of low level C++ development but currently work with a lot of younger people from mathematical and physics training who are honestly a lot smarter than I am.

          They think nothing of storing data in unnamed tuples but are usually very good about const-correctness. Whereas I write much more verbose variable names but only use const where I think it is needed.

      2. 1

        local variables - not const by default

        I think a better solution here is to not have anything be the default: make the programmer be explicit about what they want (i.e. using const or something like var or mut, rather than just having a default if neither is specified). In my experience, this also has the added benefit of making the programmer more aware of when things should actually be constant, when maybe they wouldn’t have thought about it otherwise.

    12. 2

      What’s the story with regards to delivery to major providers and self hosting email these days?

      1. 3

        I can’t speak for others, but I’ve been self hosting my email for a few months now, with no delivery problems whatsoever. Configuring the server to get messages to actually send and not end up in spam was difficult, but once it was done, things basically worked without issue.

        1. 3

          you don’t always know when mails you sent end up in the recipient’s spam folder.

          1. 2

            Or when Gmail silently throws them away with no error or warning. Don’t even show in spam. Just <poof> and … gone.

        2. 2

          I’ve had the same experience running Maddy on Hetzner.

        3. 1

          Considering how much of my life depends on a working email address and seeing all the horror stories of Gmail blocking accounts apparently for no good reason and with limited ability to appeal, I’m seriously considering hosting email myself too.

          Could you elaborate what were steps you needed to do for outgoing messages not to end up in the receiver’s spam folder?

          1. 2

            It mostly came down to properly setting up the DNS records for authentication. The Arch Wiki page for setting up a mail server is a very useful resource for this. For the most part, setting things up was just trial and error, troubleshooting until things worked. In my experience, hosting email actually isn’t that difficult. It’s not easy, but if you know what you’re doing, it’s doable, and it’s far from the hardest thing I’ve hosted in the past. Of course, this is just my experience, and obviously others have had experiences that differ a lot from mine, so definitely do your research before committing. Especially if you’re extremely dependent on having a working email address, it may be safer to either keep your current email, or just migrate it to a service other than Gmail if you feel uncomfortable with them.

            One other thing to note is that if you’re using a VPS, make sure your VPS provider actually allows you to self host email. Many VPS providers block crucial ports such as port 25. The process for getting these port(s) unblocked for your server differs from host to host; some don’t let you unblock it at all, for others you just have to open a support ticket and request it.
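
            For the DNS side, the authentication records boil down to three TXT entries. A hypothetical zone fragment (domain, selector, and policy values are placeholders, not the commenter’s actual setup):

            ```
            ; SPF: which hosts may send mail for the domain
            example.com.                  IN TXT "v=spf1 mx -all"
            ; DKIM: public key used to verify message signatures
            sel1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
            ; DMARC: policy for mail failing SPF/DKIM
            _dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
            ```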

    13. 3

      I do not use a debugger … Stepping through code is a slow, tiring process.

      I think this says more about the state of debuggers than anything else.

      I never use code completion. If the editor provides it, I disable it.

      I agree that code completion which creates a pop-up blocking other code on your screen and continuously updating in real time is annoying, and I never have that enabled; I find it incredibly distracting. However, simply having the option to auto-complete by pressing (for instance) the tab key is very useful, and I find it strange that you’d want to go out of your way to disable that, especially when working in a large project.

      However, I often spend a lot of time reading a project’s documentation. If the documentation is missing, I might read the source code instead.

      The whole point of features like tab completion is to make this easier on the programmer, and to reduce cognitive load. Documentation is obviously very useful, but often times using other features like tab completion or LSP hovering can save a lot of time by cutting out the stuff you don’t need.

      1. 3

        The whole point of features like tab completion is to make this easier on the programmer, and to reduce cognitive load. Documentation is obviously very useful, but often times using other features like tab completion or LSP hovering can save a lot of time by cutting out the stuff you don’t need.

        This is precisely the right way to look at tab completion. That said, there are alternatives to it that make it possible to live without it. A well-written grep or ack command in the source directory can often be as useful, and occasionally even more useful.

        The problem is that such a command is rarely tied into the editor by default. I think Plan 9’s plumbing system offers a very elegant solution by making it possible to perform actions on selected text based on the contents of the selection, regardless of editor support.

        On UNIX, I have a Perl script called dwim, which does approximately the same thing. If I select (in any program) a function call like function(arg, ... and press Alt-Q, dwim runs the equivalent of

        $ cd source-dir
        $ grep -n '^function(' *.c | head -1
        sourcefile.c:34:function(int argument) {
        $ xterm -e vi +34 sourcefile.c

        … opening vi at the definition of function in a new xterm window.

        1. 1

          The value of autocompletion is, I suspect, tied very closely to the quality of the API that you’re working with. If your naming conventions are such that you can always make a pretty good guess about the name of the API, then autocompletion is fantastic. If they are inconsistent then you probably need to go and look things up in the docs for any API that you haven’t used recently.

    14. 3

      Even after following the link I still don’t understand that reference; it sounds like it’s about not being a Nazi apologist. In any case, I agree, it’s terrible wording for a ban reason.

      Also, the term “nazi” has been thrown around very freely in the USA in recent years, so it’s not clear what is being referred to, aside from actual followers of Adolf. White supremacists? Any other supremacist framework based on race/religion/country/etc.? Trump supporters? Harry Potter fans?

      1. 8

        Generally, when people get banned with that ban-reason, they say some shit like “Gotta hand it to the Nazis. They were pretty good at X”. There is really no good reason to bring them up in the conversations we have here, and it only makes you look like a nazi-apologist asshat. They are nearly always in bad taste and mostly flame bait.

        The ban messages are not really a reference to that tweet as much as just a response to the shit the commenter said. There is nothing to analyze here. The user was just being an asshat.

        1. 9

          But the concern here is that we don’t necessarily know exactly what the comment said. You’re making an assumption about the comment that got the user banned, and although it’s a quite reasonable assumption (there are very few instances where bringing up Nazis on Lobsters is ever necessary or justifiable), there still should be a way to trace back the original text, even if for no other reason than to hold the mods accountable.

          EDIT: This isn’t related to the comment that you were replying to, this is just about the full discussion here. I disagree with the comment you replied to, in that I don’t think it’s particularly relevant what “Nazi” referred to in the comment, nor do I think discussing it is relevant to this discussion. But that’s not a valid argument against more transparent moderation IMO.

          1. 5

            Comments get removed all the time. Entire threads are pruned for being off topic and inciting flamewars. In those cases everyone’s comments are removed - “good” or “bad”.

            Again, no-one is banned for a single comment. They’re banned for being a net negative to the site.

            1. 4

              Yep. I’ve had comments of mine removed for falling for off-topic-bait. It’s fine.

      2. 7

        Also, the term “nazi” has been thrown very freely in the USA in the later years so is not clear what is referred to, aside from Adolf followers. White supremacist? Any other supremacists framework based on race/religion/country/etc? Trump supporters? Harry Potter fans?

        It’s come to mean, more or less, “someone we don’t like”. It’s a less sophisticated version of calling someone “divisive” or their behaviour “inappropriate”.

        It also creates a very real problem, in that actual NAZIs are still a thing, and it makes it harder to call them out.

        As Kirsten Dipietra (and many others) put it:

        Referring to Trump as a Nazi not only undermines any legitimate argument against the president, but distracts from the actual concerns of neo-Nazis and their recent prominence.

        I might be wrong but I suspect that the term neo-Nazi is only really useful if you’ve already burned out the term Nazi by applying it to all and sundry.

        1. 4

          This is how I once got embroiled in a discussion about whether the literal leader of the American Nazi Party was a Nazi (question for context).

      3. 9

        It’s a great system. Everybody calls everybody else nazis constantly, and then whoever is in power in the end can simply say “Dave over here compared car park attendants to nazis, which is now anti-semitic, hence making him a nazi, and banned”

    15. 20

      I agree with the criticisms directed at C++, but the arguments made in favor of C are weak at best IMO. It basically boils down to C code being shorter than equivalent code in competing languages (such as Rust), and C being more powerful and giving more tools to the programmer. I disagree strongly with the second point: unless you’re trying to write obfuscated code in C (which admittedly is quite fun to do), the features that supposedly make C more “powerful” are effectively foot-guns that can be used to write unreliable code, not the other way around. C’s largest design flaw is that it not only allows “clever” unsafe code to be written, but actively encourages it. By design, it’s easier to write unsafe C code than it is to properly handle all edge cases, and C’s design also makes these abuses incredibly difficult to spot. In plenty of cases, C seemingly encourages the abuse of undefined behavior, because it looks cleaner than the alternative of writing actually correct code. C is a language I still honestly quite like, but it is a deeply flawed language, and we need to acknowledge its shortcomings rather than pretend they don’t exist or try to defend what isn’t defensible.

      C is called portable assembly language for a reason, and I like it because of that reason.

      C is a higher-level version of the PDP-11’s assembly language. C still thinks that every computer works just like the PDP-11 did, and the result is that the language really isn’t as low level as some believe it is.

      1. 3

        In plenty of cases, C seemingly encourages the abuse of undefined behavior, because it looks cleaner than the alternative of writing actually correct code.

        In some of those cases, this is because the standard botched their priorities. Specifically, undefined behaviour of signed integer overflow: the clean way to check for signed overflow is to perform the addition or whatever, then check whether the result is negative or something. In C, that’s also the incorrect way, because overflow is undefined, despite the entire planet being 2’s complement.

        Would that make optimisations harder? I doubt it matters for many real world programs though. And even if it does: perhaps we should have a better for loop, with, say, an immutable index?
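
        To make the trap concrete (function names are mine, not from the comment): the natural after-the-fact check is precisely the one that’s undefined, while a correct C version has to test against the limits before ever performing the addition.

        ```c
        #include <limits.h>

        /* Undefined behaviour: if a + b overflows, the compiler may assume
           the condition is always false and delete the check entirely. */
        int overflows_naive(int a, int b) {
            return a + b < a;   /* the "clean" check that C forbids */
        }

        /* Correct and portable: never perform the overflowing addition. */
        int overflows_safe(int a, int b) {
            return (b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b);
        }
        ```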

        1. 2

          In my opinion, Zig handles overflow/underflow in a much better way. In Zig, overflow is normally undefined (though it’s caught in debug/safe builds), but the programmer can explicitly use +%, -%, or *% to do operations with defined behavior on overflow, or use a built-in function like @addWithOverflow to perform addition and get a value returned back indicating whether or not overflow occurred. This allows for clean and correct checking for overflow, while also keeping the optimizations currently in place that rely on undefined behavior on overflow. All that being said, I would be curious to know how much of a performance impact said optimizations actually have on real code.
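
          C compilers offer a rough analog as an extension (not standard C): GCC and Clang’s __builtin_add_overflow performs the wrapping addition and reports overflow through its return value, much like @addWithOverflow.

          ```c
          #include <limits.h>
          #include <stdio.h>

          int main(void) {
              int result;
              /* Returns nonzero if the mathematically correct sum does not fit. */
              if (__builtin_add_overflow(INT_MAX, 1, &result))
                  printf("overflowed\n");
              else
                  printf("sum = %d\n", result);
              return 0;
          }
          ```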

          1. 2

            Having such a simple alternative would work well indeed.

            I’m still sceptical about the optimisations to be honest. One example that was given to me was code that iterates with int, but compares with size_t, and the difference in width generated special cases that slows everything down. To which I thought “wait a minute, why is the loop index a signed integer to begin with?”. To be checked.

            1. 1

              To which I thought “wait a minute, why is the loop index a signed integer to begin with?”.

              Huh. I guess compilers really are written to compensate for us dumb programmers. What a world!

        2. 2

          despite the entire planet being 2’s complement

          Perhaps your planet is not Earth, but the Univac 1100 / ClearPath Dorado series is 1’s complement, can still be purchased, and has a C compiler.

          1. 3

            And in my mind, it can stick with C89. Can you name one other 1’s complement machine still in active use with a C compiler? Or any sign-magnitude machines? I think specifying 2’s complement and no trap representation will bring C compilers more into alignment with reality [1].

            [1] I can’t prove it, but I suspect way over 99.9% of existing C code assumes a byte-addressable, 2’s complement machine with ASCII/UTF-8 character encoding [2].

            [2] C does not mandate the use of ASCII or UTF-8. That means that all existing C source code is not actually portable across every compiler because the character set is “implementation defined.” Hope you have your EBCDIC tables ready …
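
            Assumption [1] can even be made machine-checked in C89 itself (a common trick, not from the comment): a conditionally negative array size turns a failed assumption into a compile error.

            ```c
            #include <limits.h>

            /* C89 has no _Static_assert, but a typedef with a negative
               array size is rejected by the compiler, so these fail to
               build if the assumptions fail to hold. */
            typedef char assert_8_bit_bytes[(CHAR_BIT == 8) ? 1 : -1];
            typedef char assert_twos_complement[(-1 == ~0) ? 1 : -1];
            ```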

            1. 1

              Don’t forget that the execution character set can differ from the translation character set, so it’s perfectly fine to target an EBCDIC execution environment with ASCII (or even ISO646) sources.

          2. 1

            Well, I guess we’ll still have to deal with legacy code in some narrow niches. Banks will be banks.

            Outside of legacy though, let’s be honest: when was the last ISA designed that didn’t use 2’s complement? My bet would be no later than 1980. Likely even earlier. Heck, the fight was already over when the 4-bit 74181 ALU came out, in the late sixties.

            1. 1

              Oh yeah, I definitely keep these examples on file for when people tell me that all negative numbers are 2’s complement/all floating points are IEEE 754/all bytes are 8-bit etc., but they point to a fundamental truth: C solves a lot of problems that “C replacements” don’t even attempt to. C replacements are often “if we ignore a lot of things you can do in C, then my language is better”.

              1. 1

                Except I’m not even sure C does solve those problems. Its approach has always been to skip them, via implementation-defined and undefined behaviour. There’s simply no way to be portable and take advantage of the peculiarities of the machines. If you want to get close to the metal, you need a compiler and programs for that particular metal.

                In the meantime, the most common metal (almost to the point of hegemony) has 8-bit bytes, 2’s complement integers, and IEEE floating-point numbers. Let’s address that first, and think about more exotic architectures later. Even if those exotic architectures have their place, they’re probably exotic enough that they can’t really use your usual C code, and instead need custom code, perhaps even a custom compiler.

        3. 1

          I’ve always felt people over-react to the implementation defined behavior in the C standard.

          It’s undefined in the language spec, but in most cases (like 2’s complement overflow) it is defined by the platform and compiler. Clearly it’s better to have it defined by the standard, but it’s not necessarily a bad thing to delegate some behavior to the compiler and platform, and it’s almost never the completely arbitrary, impossible to predict behavior people make it out to be.

          It’s a pain for people trying to write code portable to every conceivable machine ever created, but let’s be realistic: most people aren’t doing that.

          1. 4

            Signed overflow is not implementation defined, it is undefined. Implementation-defined behaviour is fine: it requires that the implementer document the behaviour and deterministically do the same thing every time. Undefined behaviour allows the compiler to implement optimisations that are sound only if it assumes as an axiom that the behaviour cannot exist in any valid program. Some of these are completely insane: it is UB in C99 (I think they fixed this in C11) for a source file to not end with a newline character. This is because of limitations in early versions of Lex/YACC.

          2. 1

            It’s undefined in the language spec, but in most cases (like 2’s complement overflow) it is defined by the platform and compiler

            It’s defined by the platform only. Compilers do treat that as “we are allowed to summon the nasal demons”. I’m not even kidding: serious vulnerabilities in the past have been caused by security checks being removed by compilers, because their interpretation of undefined behaviour meant the security check was dead code.

            In the specific case of signed integer overflow, Clang’s -fsanitize=undefined does warn you about the overflow being undefined. I have tested it. Signed integer overflow is not defined by the compiler; it just doesn’t notice most of the time. Optimisers are getting better and better, though. Which is why you cannot, in 2021, confidently write C code that overflows signed integers, even on bog-standard 2’s complement platforms. Even on freaking Intel x86-64 processors. The CPU can do it, but C will not let it.

            If the standard actually moved signed overflow to “implementation defined behaviour”, or even “implementation defined if the platform can do it, undefined on platforms that trap or otherwise go bananas”, I would be very happy. Except that’s not what the standard says. It just says “undefined”. While the intent was most probably “behave sensibly if the platform allows it, go bananas otherwise”, that’s not what the standard actually says. And compiler writers, in the name of optimisation, interpreted “undefined” in the broadest way possible: if something is undefined because one platform can’t handle it, it’s undefined for all platforms. And you can pry the affected optimisations from their cold dead hands.

            Or you can use -fwrapv. It’s not standard. It’s not quite C. It may not be available everywhere. There’s no guarantee, if you write a library, that your users will remember to use that option when they compile it. But at least it’s there.

            It’s a pain for people trying to write code portable to every conceivable machine ever created, but let’s be realistic: most people aren’t doing that.

            I am. You won’t find a single instance of undefined behaviour there. There is one instance of implementation defined behaviour (right shift of negative integers), but I don’t believe we can find a single platform in active use that does not propagate the sign bit in this case.

      2. 1

        Thou shalt foreswear, renounce, and abjure the vile heresy which claimeth that “All the world’s a VAX”, and have no commerce with the benighted heathens who cling to this barbarous belief, that the days of thy program may be long even though the days of thy current machine be short

      Whilst the world is not a VAX any more, neither is it an x86. Consider that your code may run on a PowerPC, RISC-V, ARM, MIPS or any of the many other architectures supported by Linux. Some processors are big endian, others little. Some are 32-bit and others 64. Most are single core, but increasingly they are multi-core.

    16. 3

      This is very cool for those looking to try out and learn a little BCPL for historical reasons, but I don’t think the language has any actual viable use besides its historical notability. Especially as this is touted as the “young persons” guide to BCPL, it’s strange that there’s no mention of why you’d want to learn BCPL, or what it offers for new/young coders that other languages don’t. The only reason given is that its simplicity makes it easy to pick up and learn, which is also true of other scripting languages like Python[1] or Lua (more so, in my opinion). Seems like a very strange choice. Am I missing something?

      [1]: Obviously Python isn’t simple in its design or implementation, but it’s simple in that it’s easy for anyone to pick up and understand, for the most part.

      1. 3

        Seems like a very strange choice. Am I missing something?

        Martin Richards, the author, created BCPL in the 60s. That might be enough to explain why he believes that “BCPL is particularly easy to learn and is thus a good choice as a first programming language”.

        Edit: don’t get me wrong, it’s great if he still teaches programming to young people using the language he invented! Many of us have started with some dialect of BASIC, which isn’t exactly better.

        1. 2

          Yup, that’s what I missed, lol. Thanks, that makes a lot more sense.

      2. 2

        Yeah, the fact that this assumes a tabula rasa makes it a little awkward to read. I’m curious about BCPL due to its influence, but I’m having to skip past a lot of both Linux tutorial and platform-specific setup instructions to get to the meat of this.

      3. 2

        BCPL obviously isn’t a scripting language. It’s proto-C. It’s a kind of high level assembly language, even more so than C. For example there are no structs, only arrays with constant offsets (for which you can use MANIFEST declarations). It’s designed to not have or need a linker – which means the programmer has to manage global storage themselves. The only thing that is not a very good fit to modern machines is that it implicitly assumes word addressing, not byte addressing. But that’s not a huge problem. Like early Pascal it assumes that characters are stored in full machine words for processing and supplies special functions to pack and unpack them for more compact storage.
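        In C terms, that record style looks roughly like this (a sketch; the names are invented, and a C enum stands in for BCPL’s MANIFEST constants):

        ```c
        #include <assert.h>

        /* No struct: a "record" is just a vector of words, and fields are
         * named constant offsets, much as a BCPL MANIFEST declaration gives. */
        enum { P_AGE = 0, P_HEIGHT = 1, P_WORDS = 2 };

        int  get_age(const int *person)     { return person[P_AGE]; }
        void set_age(int *person, int age)  { person[P_AGE] = age; }

        int main(void) {
            int person[P_WORDS] = {0};  /* the programmer manages storage */
            set_age(person, 30);
            person[P_HEIGHT] = 180;
            assert(get_age(person) == 30);
            return 0;
        }
        ```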

    17. 8

      This really puts into perspective how complex Vim really is. And yet, when using it, it never feels complex; it feels natural, simple, and intuitive, which just shows how well designed it is. The user can take advantage of so much, and switch between many different states, and it’s all presented in a way that rarely ever feels bloated or overwhelming.

      1. 6

        I didn’t learn vim until I joined a company where it was a mandatory part of the workflow. I’ve been using it now for about 18 months. I would say I’m decent with it.

        It definitely still feels complex. p can be used to paste something from the registers in normal mode, but in insert mode I have to use ctrl+r " to accomplish the same thing. ctrl+w is the motion for window-based actions in normal mode, but in insert mode it deletes the word before the cursor (an action I accidentally perform constantly). Any sense of intuition anybody feels about it is hard fought and earned from years of intentional, focused use on it.

        Just now, in the process of writing this post, I had to go look up the exact syntax for looking up exactly what ctrl+w did in insert mode. The syntax for that was apparently :help i_CTRL-W. I haven’t had much opportunity to dig into the documentation to try to learn more complex actions because whenever I do, I feel stymied by the crunchiness of it all. When the process of looking up how things work is overwhelming, I think it’s fair to say that the system is neither inherently intuitive nor particularly well-designed.

        I understand why people feel strongly about their enjoyment of vim or emacs, but I often wish I could just go back to VS Code.

        1. 9

          First, as someone who loves vim, someone being forced to use it sounds like torture, and a really silly strategy for a company, let devs use what they’re comfortable with!

          The help is, in my opinion, the best documentation of any tool I’ve ever used. There are some tricks to using it effectively though. In this case yes, you found the correct command to get to the right place in the help, but I find this shortcut much easier to remember/intuit: :h i^w. The i is the mode, and ^ indicates CTRL. Another example is :h ^w^J to find the help for the normal mode command to move a window to the bottom of the screen.

          In addition to the reference manual, which is what you get when you use the :help command to find particular commands/motions/options/regex tools etc., there’s also the user manual, which has a completely different style and can be read start-to-finish like a book. It’s very readable and is full of examples and in-depth descriptions of workflows. :h user-manual

          For an overview of the help system itself, :help helphelp is a good place to start.

          1. 3

            I appreciate this cool and useful post - and I will take it all to heart - but I do want to point out the unintentional hilarity that to learn more about the help system I have to scream help into the void over and over again.

            1. 2

              I love it. You can shorten it a bit but that’s one that I prefer to write out in full. It’d be nice if you could choose to use all caps…

              1. 1


                E478: Don’t panic!

        2. 4

          Yeah, the default bindings / syntax for Vim can be a bit awkward at times. I was moreso referring to how I never feel like I have to keep a mental map in my head of what mode(s) I’m currently in and how they relate to one another. I think some binds make more sense for some than others, because people naturally adapt differently to different workflows. Per your ctrl+w example, I’m the exact opposite, in that I regularly use ctrl+w in insert mode (in fact while typing this reply I tried pressing ctrl+w to delete a word and accidentally closed the browser tab :P ), but I have the window-management keys remapped to be quicker and easier to use.

          I disagree with your conclusion that you finding it difficult to look things up in the documentation shows that the system isn’t well designed. Perhaps not “intuitive”, though that was a miscommunication on my part, I meant intuitive as in once you know how to do things and it’s in your muscle memory it feels natural. Once you learn the general syntax for the help pages, looking stuff up is relatively straight-forward, and the jump stack can be used to go back and forth between pages (C-o to pop/go back, C-i to push/go forward). Once again though, everyone thinks and works differently, and the solution that is easy and intuitive for one may be cumbersome and overwhelming for another.

        3. 2

          I barely even think of vim as being modal anymore since I just use it as a sequence of commands. For example, if I want to insert the word “foo” under the cursor, I’ll hit “ifoo”. So the mental process is not “press i switch to insert mode and do a bunch of stuff then press esc to switch to the other mode”, but rather just “do an insert of this text, terminated by the esc key”.

          This way, there’s no thinking about mode - it is just always doing commands - and it helps understand exactly what operations like undo and repeat last command will actually do to the document.

          This mental model also helps compose things better… and avoids bothering with the other weird commands like ctrl+w (which i didn’t even know was a thing until you mentioned it lol).

      2. 3

        It helps that most people don’t use most of these modes: I’ve been using Vim for 10 years and don’t think I’ve ever even needed to know about select mode.

      3. 2

        This really puts into perspective how complex Vim really is. And yet, when using it, it never feels complex; it feels natural, simple, and intuitive, which just shows how well designed it is.

        Oof, I have to disagree hard here. Vim has a lot of good things to offer, but natural, simple and intuitive are definitely not on that list. Vim is incredibly complex and clunky. The good ideas behind it could have been implemented in a much more elegant way.

        After using Neovim for five years I got so fed up with it that I even started a project to write my own editor. But fortunately I had not spent much time on it before I discovered Kakoune. (Actually, that was not luck. I discovered it when doing some research for my own project.) I am still in the transitioning phase, but so far I really like it.

    18. 7

      I’d hesitate to call any of this stuff “useless”. It seems more accurate to say that I (and the author) prefer making stuff for fun, without any profit incentives or obligations, i.e. making things for fun because we find it enjoyable. Enjoyable to work on != useless, even if no one else finds any use for them.

      1. 6

        I got the impression by “useless” the author means “has no intended use-value beyond its creation”. Which is sort of a specific case of the general term, but it’s one commonly found on forums where GitHub stars are seen as valuable.

      2. 3

        Adding to your comment, just sharing the notes and code online might make it useful. From there, they might respond to questions or requests. They might also leave a notice it’s a read-only dump they did for fun that they won’t take comments or requests on. Sharing it was the good deed. Others might respond to them. The author made it useful by putting commented source on Github under MIT license. Maybe also by making some interesting stuff if one uses that metric.

        The next level is the tooling they use. We know most F/OSS is made just scratching an itch. Someone needs or wants something to exist. They make it. If they use common tooling, then sharing that can go from useless to helpful to other people. More interesting stuff might be made from there. Common with C++/C#/Java, Python/Ruby/PHP, etc. If uncommon tooling, it might build up that ecosystem for the same purposes. We see that here with Myrddin and Zig. I think usefulness goes up more often with the common tooling, though.

        Extra conclusion: doing your “useless” project on a well-known or growing platform might effortlessly make both more useful.

    19. 19

      I personally quite disagree with the message in this article. The everything-as-a-file view, parseable text as inter-command communication, and focus on “do one thing, but do it well” are strong design principles that still hold true, and I think contribute to the beauty of Unix. I find it particularly interesting that new tools are constantly coming in, like fzf, ranger, neovim or fish, which all improve the user experience while embracing these tried and true principles.

      1. 16

        I mostly agree with you, though I do think the article brings up some meaningful points. People who work with UNIX-like systems are used to dealing with a lot of its quirks and flaws (signals, tar, sh). Some of these decisions had sound reasons to exist when they were first conceived, but nowadays they are mostly a relic of the past that plague modern devices.

        The article does make a large jump from discussing quirks of UNIX, to bashing key UNIX concepts like command piping, without giving any actual examples as to how the latter is harmful. This degrades the message the author is trying to convey a little bit, but I still do think many of the examples that are provided still hold up today as being clunky and difficult to use. All in all, it seems reasonable to bash UNIX’s legacy without dismissing the UNIX philosophy as a whole.

        1. 2

          I suspect the majority of the argument for and against Unix is actually the same thing: it’s old.

          I have a question that I don’t have the knowledge/experience to answer myself, but have been wondering for the longest time:

          What would C look like, if it were designed today instead of the 1970s?

          Or to rephrase it (and possibly put C on a pedestal), suppose we created a language from first principles that mandated 1) a simple language that prioritizes explicitness over intelligent compiler magic, and 2) performance over safety, portability/etc?

          For instance, “the call stack” makes far more sense in a single-core world than in an 8-core world. Actually, I probably shouldn’t assume that C invented the call stack; maybe that’s inherited too.

          Either way, I think with today’s CPU pipeline parallelism, the semicolon wouldn’t define orderedness (so a = 2; a++; a*=2; could increment and double junk data before a is set to 2) and there would need to be a second symbol to actually say what is explicitly ordered. Or maybe you’d use {} for that? I haven’t properly thought about it to be honest.

          But, no way would instruction-level parallelism be left up to the compiler to “intelligently” determine.

          That’s just one example. I’m sure a lot of other stuff would change too.
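          For reference, in today’s C the semicolons in that sequence do impose an order on observable behaviour; the compiler and CPU may still reorder independent work underneath, as long as the result is the same (the “as-if” rule). So the outcome is fixed:

          ```c
          #include <assert.h>

          /* The exact sequence from the comment above: each statement is
           * sequenced before the next, so the result is always 6, even if
           * the compiler fuses or reorders the operations internally. */
          int sequenced(void) {
              int a;
              a = 2;   /* a == 2 */
              a++;     /* a == 3 */
              a *= 2;  /* a == 6 */
              return a;
          }

          int main(void) {
              assert(sequenced() == 6);
              return 0;
          }
          ```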

          1. 1

            The example you gave with C makes me think about functional programming and other declarative programming paradigms, where programming is done by describing the desired behavior, without side effects that mutate the program’s state. I would imagine that if C were invented today, it would lean more in that direction, though it’s difficult to imagine how that would look in the context of a systems programming language like C, which is intended to give programmers direct access to low-level details. This would also diminish the explicitness of the language, which is against one of the principles you brought up. I’ve never really thought about that, but that’s incredibly interesting!

            If we’re willing to sacrifice portability, a CPU instruction set designed from the ground up to work in ways compatible with this non-existent language may make things a bit simpler, though once again I’m not really sure what that would entail.

            Alternatively, maybe the language would just require that the programmer explicitly indicate which parts of the code can be run in parallel, and which branches of code depend on which other branches of code. That’s also a really interesting idea.