Threads for clacke

    1. 9

      Here is the paragraph that is at the core of explaining the eye-catching title:

      An average business software has about five million lines of code. It is not uncommon to find software five and ten times that size. The program is written and maintained by a team of scores or even hundreds of programmers at any one time, many more during its entire lifetime; none of them has written “the program.” In a good team, many programmers would have a vague understanding of some significant portion of the system (say, 30-80%) and a thorough understanding of a very small portion of it (5-10%). A programmer doesn’t write a program. She writes a feature — or ten. This feature interacts with the other ten thousand features in the program in ways that are hard for the programmer to predict with precision.

    2. 2

      Hey @tankf33der, would you like to explain what was the issue with pil before pil21 and why it couldn’t enter Debian and Ubuntu? Something about the PIE/PIC format and how the assembly worked? How does this LLVM version solve it? How is it written and why could it now be included in Debian?

      1. 3

        Different versions of PicoLisp are already available in various distros. In general you don't need a package manager; you just need to be able to type a few commands.

        The previous version of PicoLisp (we call it pil64) has its own built-in assembler with backends for several popular platforms. LLVM brings portability: it's already possible to run PicoLisp on macOS and Solaris (SPARC), and RISC-V is near too.


        • One year of development from idea to initial public release;
        • Works with LLVM 7+;
        • Bootstraps itself out of the box (you don't need a previous version of PicoLisp to generate the LLVM-IR code);
        • Tests cover 95% of the 370 built-in primitives;
        • The whole ecosystem and all features already work on pil21;
        • PilBox for Android works on pil21 too;
        • Several unofficial VCS mirrors for distribution: git1, git2, darcs, pijul;

        Happy coding.

        1. 2


          I find it particularly interesting that the bootstrap implementation language is LLVM-IR. What ships in the repo as bootstrap is some .ll files, and the built binary can compile the corresponding .l (PicoLisp) files into the .ll files.

          Did I miss some file, or does this mean that there is:

          • no CPU-specific asm at all in the project
          • no asm that doesn’t ultimately come from PiL code? It’s PiL turtles all the way down?

          Is porting to e.g. RISC-V a noop and you’re just waiting for LLVM to mature, or is there something more you need to do?

          1. 3

            No asm, llvm-ir only.

          2. 2

            Also, if I remember correctly, earlier PiL didn't have any compilation at all: just an asm runtime and then AST interpretation from there on. Did this change in pil21? Does it actually compile my code via LLVM-IR, or is this just a special mode that exists for the sake of the bootstrap?

    3. 6

      Wow, their commits are … idiosyncratic.

        1. 2

          That’s because this isn’t an official git repo. The upstream is just a daily tarball.

      1. 2

        At least it’s not quite as bad as e.g. Bash, but yeah…

      2. 1

        Nothing special, just PicoLisp with LLVM-IR as a raw backend.

    4. 2

      I was going to say “Isn’t Swift just a nicer syntax for Objective-C?”.

      Apple’s Swift programming language is also memory safe, while its predecessor, Objective-C, was not.


    5. 3

      I don’t really see the point of using microG. What’s the big benefit of a third-party client if one still continues to use Google’s servers?

      Personally, I use CopperheadOS without any google services. Works well for me, but I did not even use GMaps, Whatsapp and so on back when I had “Google Play Services” on my phone. Nowadays it’s fdroid and all its apps, like OSMAnd and so on. Signal from its self-updating APK with a websocket connection instead of “Google Cloud Messaging”. Works just fine, but uses quite a lot of battery.

      1. 5

        Push notifications are a deal breaker for me. Also, there are a lot of apps that purposely warn you and kick you out of the app if no Play Services are installed (Google has some security certifications that mandate disallowing users without Play Services from using the app).

        Also, everything I can get from F-Droid that serves my needs, I do; but not every app that I need has a FOSS alternative (Slack, bank, Spotify, Zoom, to name a few). Plus, I really dislike the idea of having a ~150MB system app that sits there just to “support apps with services”. MicroG serves the basic functionality to provide you with things like push notifications, its size is tremendously smaller, and it’s open source.

        1. 2

          I’m in the Copperhead camp and don’t really miss push notifications. Slack doesn’t ping me anymore, but that’s just another reason to migrate to FOSS apps designed to work without Google Play (Riot/Matrix, Signal, etc.). Cutting down on notifications from bank apps could even be interpreted as a positive side effect.

          One security downside to MicroG is that you need to enable signature spoofing so that it can impersonate the official Google Play Services.

          1. 1

            One security downside to MicroG is that you need to enable signature spoofing so that it can impersonate the official Google Play Services.

            LineageOS people keep saying that, but I don’t see the security issue. There is no slippery slope, and there is literally no other way to replace Google dependencies than to pretend you’re them.

    6. 9

      TL;DR use microG LineageOS

    7. 2

      These are probably the weakest arguments against Bitcoin I’ve seen. But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.

      Real arguments against Bitcoin are:

      And I’m sure there are others but literally none of the ones presented here are valid.

      1. 29

        These are probably the weakest arguments against Bitcoin I’ve seen.

        As it says, this is in response to one of the weakest arguments for Bitcoin I’ve seen. But one that keeps coming up.

        But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.

        When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business.

        1. 19

          I would also like to be able to upgrade my gaming PC’s GPU without spending what the entire machine cost.

          This is getting better though.

          1. 1

            For what it’s worth, Bitcoin mining doesn’t use GPUs and hasn’t for several years. GPUs are being used to mine Ethereum, Monero, etc., but not Bitcoin or Bitcoin Cash.

        2. 0

          When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business

          And yet, still less electricity than… Christmas lights in the US or gold mining.

          1. 21

            When you reach for “Tu quoque” as your response to a criticism, then you’ve definitely run out of decent arguments.

      2. 13

        Bitcoin (and all blockchain based technology) is doomed to die as the price of energy goes up.

        It also accelerates the exhaustion of many energy sources, pushing energy prices up faster for every other use.

        All blockchain-based cryptocurrencies are scams, both as currencies and as long-term investments.
        They are distributed, energy-wasting Ponzi schemes.

        1. 2

          Wouldn’t an increase in the cost of energy just make mining difficulty go down? Then the network would just use less energy?

          1. 2

            No, because if you reduce the mining difficulty, you decrease the chain safety.

            Indeed, the fact that the energy cost is higher than the average Bitcoin revenue does not mean that a sufficiently determined pool can’t pay for the difference by double spending.

            1. 3

              If energy cost doubles, a mix of two things will happen, as they do when the block reward halves:

              1. Value goes up, as marginal supply decreases.
              2. If the demand isn’t there, instead the difficulty falls as miners withdraw from the market.

              Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value. This cost is what secures the blockchain by making attacks costly.
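              The two-way adjustment described above can be sketched as a toy model (all numbers here are hypothetical, chosen only to show the direction of the effect):

```python
# Toy sketch: miners add hashpower until the network-wide mining cost
# per block roughly equals the block reward value, so at equilibrium:
#   hashes_per_block ~= reward_value / cost_per_hash
def equilibrium_hashes_per_block(reward_usd, cost_per_hash_usd):
    return reward_usd / cost_per_hash_usd

base = equilibrium_hashes_per_block(100_000, 1e-12)
after_energy_doubles = equilibrium_hashes_per_block(100_000, 2e-12)

# If the coin's value (demand) stays flat, doubling the energy cost
# halves the sustainable hashrate, and difficulty adjusts down to match.
print(after_energy_doubles / base)  # 0.5
```

              In reality both effects mix: some of the cost shock shows up as a higher coin price, the rest as miners leaving.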

              1. 1

                Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value.

                You forgot one word: average.

                1. 2

                  It is implied. The sentence makes no sense without it.

                  1. 1

                    And don’t you see the huge security issue?

        2. 1

          Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.

          PoS has no such energy requirements. Peercoin (2012) was one of the first, Blackcoin, Decred, and many more serve as examples. Ethereum, #2 in “market cap”, is moving to PoS.

          So to say “ [all blockchain based technology] is doomed to die as the price of energy goes up” is silly.

          1. 1

            Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.

            Hum… are you saying that Bitcoin miners have no brain? :-D

            I know that PoS, in theory, is more efficient.
            The fun fact is that all the implementations I’ve seen in the past were based on stakes in PoW-based cryptocurrencies. Has that changed?

            As for Ethereum, I will be happy to see how they implement PoS… when they do.

            1. 2

              Blackcoin had a tiny PoW bootstrap phase, maybe weeks worth and only a handful of computers. Since then, for years, it has been purely PoS. Ethereum’s goal is to follow Blackcoin’s example, an ICO, then PoW, and finally a PoS phase.

              The single problem PoW once reasonably solved better than PoS was egalitarian issuance. With miner consolidation this is far from being the case.

              IMHO, fair issuance is the single biggest problem facing cryptocurrency. It is the unsolved problem at large. Solving this issue would immediately change the entire industry.

              1. 1

                Well, proof of stake assumes that people care about the system.

                It sees the cryptocurrency in isolation.

                An economist would object that a stakeholder might gain a lot by breaking the currency itself, despite the in-currency loss.

                There are many ways to gain value from a failure: e.g., buying surrogate goods cheaply and selling them after the competitor’s failure has increased their relative value.

                Or by predicting the failure and then causing it, and selling consulting and books.

                Or a stakeholder might have a political reason to damage the people with a stake in the currency.

                I’m afraid that proof of stake is a naive solution to a misunderstood economic problem. But I’m not sure: I will certainly take a look at Ethereum when it is PoS-based.

        3. 0

          doomed to die as the price of energy goes up.

          Even the ones based on proof-of-share consensus mechanisms? How does that relate?

          1. 3

            Can you point to a working implementation so that I can take a look?

            Last time I checked, “proof-of-share” did not even work as a proof of concept… but I’m happy to be corrected.

            1. 2

              Blackcoin is Proof of Stake. (I’ve not heard of “Proof of Share”).

              Google returns 617,000 results for “pure pos coin”.

            2. 1

              Instructions to get on the Casper Testnet (in alpha) are here: . No need to bold your words to emphasize your beliefs.

              1. 3

                The emphasis was on the key requirement.

                I’ve seen so many cryptocurrencies die a few days after their ICO that I’ve raised the bar for taking a new one seriously: if it doesn’t have a stable user base exchanging real goods with it, it’s just another waste of time.

                Also, note that I’m not against alternative coins. I’d really like to see a working and well designed alt coin.
                And I like related experiments like GNU Taler.

                I’m just against scams and people trying to fool other people.
                For example, the Casper Testnet is a PoS based on a PoW (as Ethereum currently is).

                So, let’s try again: do you have a working implementation of a proof of stake to suggest?

                1. 1

                  It’s not live or open-source, so I’d understand if you’re still skeptical, but Algorand has simulated 500,000 users.

                2. 1

                  Again I don’t seem to understand your anger. We’re on a tech site discussing tech issues. You seem to be getting emotional about something that’s orthogonal to this discussion. I don’t think that emotional exhorting is particularly conducive to discussion, especially for an informed audience.

                  And I don’t understand what you mean by working implementation. It seems like a testnet does not suffice. If your requirements are: widely popular, commonly traded coin with PoS, then congratulations you have built a set of requirements that are right now impossible to satisfy. If this is your requirement then you’re just invoking the trick question fallacy.

                  Nano is a fairly prominent example of Delegated Proof of Stake and follows a fundamentally very different model than Bitcoin with its UTXOs.

                  1. 3

                    No anger, just a bit of irony. :-)

                    By a working implementation of a software currency I mean not just code and a few beta testers, but a stable user base that uses the currency for real-world trades.

                    Actually, that’s probably the minimal definition of “working implementation” for any currency, not just software ones.

                    I could get a little lengthy about vaporware, marketing, and scams if I have to explain why an unused piece of software is broken by definition.
                    I develop an OS myself that literally nobody uses, and I would never sell it as a working implementation of anything.

                    I will look to Nano and delegated proofs of stake (and I welcome any direct link to papers and code… really).

                    But frankly, the sarcasm is due to a little disgust I feel for proponents of PoW/blockchain cryptocurrencies (to date, the only ones I know of that actually work, despite being broken as actual long-term currencies): I can understand non-programmers who sell what they buy from programmers, but any competent programmer should just say “guys, Bitcoin was an experiment, but it’s pretty evident that it has been turned into a big Ponzi scheme. Keep out of cryptocurrencies! Or you are going to lose your real money for nothing.”

                    To me, programmers who don’t explain this are either incompetent enough to talk about something they do not understand, or are trying to profit from those other people, selling them their token (directly or indirectly).

                    This does not mean in any way that I don’t think a software currency can be built and work.

                    But as a hacker, my ethics prevent me from using people’s ignorance against them, as those who sell them “the blockchain revolution” do.

              2. 2

                The problem is that in the blockchain space, hypotheticals are pretty much worthless.

                Casper I do respect, they’re putting a lot of work in! But, as I note literally in this article, they’re discovering yet more problems all the time. (The latest: the security flaws.)

                PoS has been implemented in a ton of tiny altcoins nobody much cares about. Ethereum is a great big coin with hundreds of millions of dollars swilling around in it - this is a different enough use case that I think it needs to be regarded as a completely different thing.

                The Ethereum PoS FAQ is a string of things they’ve tried that haven’t quite been good enough for this huge use case. I’ll continue to say that I’ll call it definitely achievable when it’s definitely achieved.

      3. 4

        ASICboost was fixed by segwit. Bitcoin isn’t subject to ASICboost anymore, but Bitcoin Cash is.

    8. 10

      Now you get to play the “oh, I want you to use the new libstdc++ over in some weird separate path” game. This is when you start playing with “-rpath” (which itself starts getting mighty spooky when you start playing with $ORIGIN) or maybe you try “-static-libstdc++” to drop the dependency entirely (and grow the binary accordingly).

      Excuse me, do you have a moment to talk about the developer’s lord and savior, Nix?

    9. 1

      Sure beats “refactor never”.

    10. 2 lists 17 platforms. Why the illumos tag? I don’t see anything Solaris-specific on the Con page.

      1. 6

        While pkgsrc supports many platforms, it is also the default package system on NetBSD and SmartOS, one of the illumos distributions. So it is probably of special interest in those two communities.

        pkgsrcCon is a pretty open conference and everyone interested in packaging or portability will be welcome of course!

        1. 2

          I didn’t know that! Thanks for explaining.

          Maybe that should be advertised on the pkgsrc landing page? I think it’s a feather in the cap for pkgsrc and NetBSD that another OS has picked it up as its default.

      2. 2

        We’re really not Solaris anymore, and haven’t been for almost a decade. I don’t think the talks are up yet but I can find out if the people we employ at Joyent to focus on pkgsrc are going!

        1. 1

          As I believe Oracle Solaris is dead, or effectively dead, isn’t illumos the Solariest OS out there, and maybe the de facto Solaris?

          If I want the unique features of Solaris from a decade ago, isn’t illumos the place to go and aren’t they mostly still unique features?

          You make it sound like being associated with Solaris would be a bad thing. :-)

          pkgsrc platforms page still says Solaris / SmartOS / illumos. :-)

          1. 1

            If you wanted to run Solaris on your sparc hardware, illumos wouldn’t be my first stop.

            1. 1

              What would be?

              Is that really a unique feature of Solaris, though? NetBSD and Linux both run on Sparc, IIRC.

    11. 1

      Funny coincidence that I would be reading this now when I first heard of LAPACK the other day, because it’s one of those Fortran programs still hanging around.

    12. 1

      Super hyped for web asm. I’m so sick of JS heavy websites slowing down to a crawl on my phone but as a dev I love the flexibility having a single page app and a separate backend gives me.

      1. 1

        I love the flexibility having a single page app and a separate backend gives me.

        In a previous life, I did some web development (never as my primary job, but I had to write a web-based frontend to a bigger project because there was no one else to do it). I’m sure I didn’t invent the concept, but the app had a generic JSON-RPC interface and the web app was simply a JSON-RPC client running in the browser.

        Made testing much simpler, and since the app had to be accessible to a lot of different things (not just browsers), the universal RPC interface really helped.

        (This was long, long ago. I left that job in…2009? 2010? Long before WebAssembly was anything more than a “wouldn’t it be neat if…”.)

        1. 2

          I’m sure I didn’t invent the concept, but the app had a generic JSON-RPC interface and the web app was simply a JSON-RPC client running in the browser.

          Yeah, that’s the idea with too, but you were 2 years ahead of that. :-)

          Mastodon is pretty much that too – so much so that Pleroma can just implement the API and run the Mastodon frontend as an alternative frontend.

    13. 1

      A bit orthogonal but still related to the overall theme: are there enough performance/portability benefits of JITs to continue running server applications targeting bytecode platforms? JVM was originally built for safely running web applications, and the CLR seems to exist to allow for portability across the wide range of hardware running Windows.

      Are fancy VMs and tiered JITs necessary in a world where we can cross-compile to most platforms we’d want to run on? Languages/run times like Go and Haskell have backends that target a wide range of architectures, and there’s no need get intimately familiar with things like the JVM instruction set and how to write “JIT friendly code”.

      1. 2

        IBM i (nee OS/400) has an interesting solution where software is delivered compiled for a virtual machine, then compiled from virtual machine bytecode to native code on installation. I would like to see that model expand to other platforms as well.

        1. 2

          OS/400 is just…so different in so many ways. So many really interesting ideas, but it’s very different from just about every other operating system out there.

          I wish there were a way I could run it at home, just as a hobbyist.

    14. 3

      I love how it claims to be based on “reason” and still resorts to “bs” to get things done. Political commentary as code.

      1. 3

        Reason is a syntax for OCaml. BuckleScript is the compiled bridge to JS.

        1. 1

          Yes. I just like the way the names play out.

    15. 3

      Jef Raskin is not exactly an unsung genius anymore, but I’d still say “undersung”. Something of a tragic hero in the classical sense.

      The Canon Cat is a little rare and expensive when you can find it, but a Swyftcard replica for your vintage Apple II is pretty affordable still. Also the Cat software (written in Forth!) is fully emulated in the MAME suite. If you find this stuff interesting, I’d strongly suggest at least reading Raskin’s book The Humane Interface. Wikipedia’s page on Archy has some interesting tidbits too.

      Apart from some good ideas on human interface design, there is a broader lesson to be learned about the real political and economic reasons why technical projects succeed or fail. Smalltalk, Oberon, Lisp machines, BeOS, NeXT, the Newton… all worth study.

      Lobsters, what are your favorite coulda-been systems? I’d especially love to hear from the old-timers among us.

      1. 1

        I’m the author of the above piece – which I intended to be part of a whole series on coulda-been systems (before Sonya got a real job / hit crunch time & Exolymph essentially shut down). I specifically wanted to cover Jot, xu88, & xu92, along with HyperCard & HyperTIES, at the time. (As I’m learning more about NeWS, I’m getting more excited to cover it as well.)

        I probably gave the Lisa short shrift in this, since my primary source for that was & period documentation. I got to use a Lisa, briefly, last weekend & it was not as bad as I had heard. So, maybe if I do the rest of the series, I’ll cover the Lisa in more detail, since I now know somebody with fairly deep knowledge.

        Other candidates: Plan9, the Alto, OpenDoc, the Cambridge Z88.

        I recently submitted the intended second entry in the series, about the personal robot market of the mid 1980s (with a special focus on the Arctec Gemini), to Tedium, & once it’s published I’ll post it.

      2. 1

        Mainstream PowerPC Amiga. :’(

    16. 6

      So what went wrong?


      Worse is better. Billions of dollars have gone into the cryptocurrency system and we have a new dotcom boom.

      Most of the money will be spent on the wrong things, but there is orders of magnitude more money going into actually solving the issues displayed by Bitcoin than there would have been had it not escaped the lab. There are orders of magnitude more enthusiasts coming up with ideas around it than there would have been if this stuff had been trapped in academia for a decade.

    17. 5

      We have to accept that mistakes will be made and just hope that none of them are crippling. After all, if developers are cursing our design decisions years from now, that means we succeeded!

      A lovely way of framing it.

      1. 4

        Reminds me of a Stroustrup quote: “There are only two kinds of programming languages: the ones people complain about and the ones nobody uses.”

        1. 1

          And that reminds me of Alan Kay saying the Macintosh (I believe it was) was the first computer worth criticizing.

    18. 1

      I am always happy to see alternatives to gitflow (which I think is overly complicated for many projects). This is a nice idea, but perhaps it works best with specific types of development. A few thoughts:

      If two developers are working on separate features that affect the same piece of code

      if (featureA) {
        // changes from developer A
      } else if (featureB) {
        // changes from developer B
      } else {
        // old code
      }
      How do you rename or remove a variable as part of refactoring, in a way that makes all four combinations of feature flags still work?

      I guess it will depend on the types of changes, and on how the developers communicate, whether this is easier than feature branches (where conflict resolution happens deterministically at the end, and diff works), or whether it leads to feature-flag spaghetti and time wasted adapting your changes to run alongside another developer’s changes, which might end up not being merged.

      Also, what if you add a file? Then I guess your build system will need feature flags. What if your build system uses globbing and you remove or rename a file? Some changes can’t both be there and not be there.
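      As a minimal sketch of why the combinations multiply (the flags and the function here are made up): with two independent boolean flags there are 2² variants to keep working, and a parametrized test can at least enumerate them mechanically.

```python
from itertools import product

# Hypothetical function whose behavior branches on two feature flags,
# standing in for two developers' in-flight changes.
def total(items, feature_a=False, feature_b=False):
    if feature_a:
        items = [i * 2 for i in items]   # developer A's change
    if feature_b:
        items = [i + 1 for i in items]   # developer B's change
    return sum(items)

# Exercise all four flag combinations against the same input.
results = {flags: total([1, 2, 3], *flags)
           for flags in product([False, True], repeat=2)}
print(results)
```

      With n flags this grows to 2^n combinations, which is exactly the spaghetti risk raised above.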

      1. 1

        I know people are going to feel differently about this, but I lean heavily toward explicit being better than implicit, and the presence of magic should be minimal, even if it means some redundant work. Redundant work can be verified and semi-automated to keep explicit things up-to-date.

        How do you rename or remove a variable as part of refactoring, in a way that makes all four combinations of feature flags still work?

        A feature branch just delays this question to the big bang merge conflict. Forcing you to do this work upfront means you talk about this with the other people working in the same code region.

        Like you say, the other side of the coin is that the feature branch might never be merged. Early merging optimizes for the happy path. But then again, if you merge your work early and discuss with others, it can’t remain in the twilight state of not merged, not discarded, which may improve coder efficiency.

        Also, what if you add a file? then I guess your build system will need feature flags.

        Just adding a file shouldn’t affect anything unless some other file references it.

        What if your build system uses globbing

        Please don’t, especially if you also automatically siphon those files into some automatic extension of functionality.

        Merge conflicts are annoying, but clean merges that have semantic conflicts are even worse.
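        A hypothetical illustration of such a clean-but-wrong merge (all names made up): branch A changes a function to expect radians instead of degrees, while branch B, touching a different part of the file, adds a caller still written for degrees. Git merges both without any textual conflict.

```python
import math

# Branch A's change: heading() now expects radians, not degrees.
def heading(angle_radians):
    return (math.cos(angle_radians), math.sin(angle_radians))

# Branch B's change, written against the old degree-based contract.
north_east = heading(45)                 # silently wrong after the merge

correct = heading(math.radians(45))
print(north_east == correct)  # False: it compiles, merges, and is wrong
```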

        Of course plugin systems are super useful – when they are user accessible and are used for deployment. But then the API would be well-defined, restricted and conservative. Probably the plugins would even be in separate repos and the whole branch vs flag point is moot.

        Testing plugin interactions is probably worth an article series of its own.

    19. 4

      First, to call itself a process could [simply] execute /proc/self/exe, which is a in-memory representation of the process.

      There’s no such representation available as a file. /proc/self/exe is just a symlink to the executable that was used to create the process.

      Because of that, it’s OK to overwrite the command’s arguments, including os.Args[0]. No harm will be made, as the executable is not read from the disk.

      You can always call a process with whatever args[0] you like. No harm would be done.

      1. 4

        Although /proc/self/exe looks like a symbolic link, it behaves differently if you open it. It’s actually more like a hard link to the original file. You can rename or delete the original file, and still open it via /proc/self/exe.

        1. -4

          No harm will be made, as the executable is not read from the disk.

          the executable is definitely read from the disk

          Again, this was only possible because we are executing /proc/self/exe instead of loading the executable from disk again.


          The kernel already has open file descriptors for all running processes, so the child process will be based on the in-memory representation of the parent.

          no that’s not how it works, and file descriptors aren’t magic objects that cache all the data in memory

          The executable could even be removed from the disk and the child would still be executed.

          that’s because it won’t actually be removed if it’s still used, not because there’s a copy in memory

          <3 systems engineering blog posts written by people who didn’t take unix101

          1. 12

            Instead of telling people they are idiots, please use this opportunity to correct the mistakes that others made. It’ll make you feel good, and not make others feel bad. Let’s prop up everyone, and not just sit there flexing muscles.

          2. 3

            Sorry for disappointing you :)

            I got that (wrongly) from a code comment in Moby (please check my comment above) and didn’t check the facts.

          3. 2

            I’m not saying that the OP was correct, I’m just saying that:

            /proc/self/exe is just a symlink to the executable

            … is also not completely correct.

      2. 3

        Thanks for pointing out my mistakes! I just fixed the text.

        I made some bad assumptions when I read this comment [1] from Docker and failed to validate it. Sorry.

        By the way, is it just my bad English, or is that comment actually wrong as well?


        1. 1

          that comment is actually wrong as well?

          I don’t think it’s strictly correct, but for the purpose of the code in question it is accurate. That is, /proc/self/exe points to the executable file that was used to launch “this” process - even if it has moved or been deleted - and this most likely matches the “in memory” image of the program executable; but I don’t believe that’s guaranteed.

          If you want to test and make sure, try a program which opens its own executable for writing and trashes the contents, and then execute /proc/self/exe. I’m pretty sure you’ll find it crashes.

          1. 3

            but I don’t believe that’s guaranteed.

            I think it’s guaranteed on local file systems as a consequence of other behavior. I don’t think you can open a file for writing when it’s executing – you should get ETXTBSY when you try to do that. That means that as long as you’re pointing at the original binary, nobody has modified it.

            I don’t think that holds on NFS, though.
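            This is easy to probe from a running process (a Linux-specific sketch; which errno you see first depends on file ownership):

```python
import errno
import os

# Try to open the currently executing binary for writing. On a local
# Linux filesystem this should be refused: ETXTBSY from the protection
# on running executables, or EACCES if plain permissions reject it first.
exe = os.path.realpath("/proc/self/exe")
try:
    os.close(os.open(exe, os.O_WRONLY))
    outcome = "writable"   # would contradict the ETXTBSY claim
except OSError as e:
    outcome = errno.errorcode.get(e.errno, "OSError")
print(outcome)
```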

          2. 1

            If you want to test and make sure, try a program which opens its own executable for writing and trashes the contents, and then execute /proc/self/exe. I’m pretty sure you’ll find it crashes

            Actually, scratch that. You won’t be able to write to the executable since you’ll get ETXTBSY when you try to open it. So, for pretty much all intents and purposes, the comment is correct.

          3. 1

            Interesting. Thank you for your insights.

            In order to satisfy my curiosity, I created this small program [1] that re-executes /proc/self/exe in an infinite loop and prints the result of readlink.

            When I run the program and then delete its binary (i.e., the binary that /proc/self/exe points to), the program keeps successfully calling itself. The only difference is that now /proc/self/exe points to /my/path/proc (deleted).
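            A minimal Python version of the readlink part (Linux-specific; it doesn’t reproduce the deletion step, just the link behavior described above):

```python
import os

# Where does /proc/self/exe point for this process?
target = os.readlink("/proc/self/exe")
print(target)                       # path of the running executable

# Although it looks like a symlink, opening it reaches the actual file
# backing this process, even if that path is later renamed or unlinked
# (in which case readlink gains a " (deleted)" suffix).
with open("/proc/self/exe", "rb") as f:
    print(f.read(4) == b"\x7fELF")  # True: we opened the real binary
```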


    20. 4

      Never heard of ATS, and it’s not linked to in the post. Here it is:

      ATS is a statically typed programming language that unifies implementation with formal specification. It is equipped with a highly expressive type system rooted in the framework Applied Type System, which gives the language its name. In particular, both dependent types and linear types are available in ATS.