1. 13
    1. 2

      Did anyone else notice that 3 out of 4 pros listed for static linking are also attributes that scientific work in general is supposed to have? Seen in that light, it supports the recommendation for static linking even more strongly.

    2. 2

      Another pro is that a statically linked binary only requires one metadata lookup, versus potentially dozens or hundreds of lookups for shared libraries. Many scientific apps run on HPC clusters with parallel filesystems like Lustre, which pair a few (or only one) metadata servers with many object storage servers. When several thousand nodes try to launch the same application at the same time in order to wire up a parallel job, the result can look a lot like a DoS attack on the metadata server…

      (This is very much a special case relative to the larger scientific computing world, but one I’m very familiar with. Ah, Lustre, how I hate you…)
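      A rough back-of-the-envelope sketch of that launch-time arithmetic (illustrative only, not from the comment above: the node count is made up, and it assumes `ldd` is available to count a binary’s shared libraries):

      ```c
      /* Sketch: count the shared libraries a binary pulls in via ldd, then
       * estimate the metadata operations a parallel launch generates.
       * The node count is a hypothetical example. */
      #include <stdio.h>
      #include <string.h>

      int main(int argc, char **argv) {
          const char *binary = argc > 1 ? argv[1] : "/bin/ls";
          const long  nodes  = 4096;              /* hypothetical job size */

          char cmd[512], line[1024];
          long libs = 0;
          snprintf(cmd, sizeof cmd, "ldd %s 2>/dev/null", binary);
          FILE *p = popen(cmd, "r");
          if (!p) { perror("popen"); return 1; }
          while (fgets(line, sizeof line, p))
              if (strstr(line, "=>"))             /* one line per resolved .so */
                  libs++;
          pclose(p);

          /* Dynamic: every node opens the binary plus each shared object.
           * Static:  every node opens exactly one file.                   */
          printf("%s: %ld shared libraries\n", binary, libs);
          printf("launch on %ld nodes: ~%ld metadata ops dynamic vs ~%ld static\n",
                 nodes, nodes * (1 + libs), nodes * 1);
          return 0;
      }
      ```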

    3. 1

      Frankly, I think static linking should be used in most cases anyway. It’s more deterministic that way. I never really understood the benefits of dynamic linking. Anything dynamic linking provides can be handled by binary patches or by recompiling (which I’m not particularly averse to). Since static executables are simpler, there’s less room for error.

      1. [Comment removed by author]

        1. 2

          That’s true. I guess it does have advantages in a few cases (particularly glibc), but I’ve been running a musl-based Linux distro for a while, with many of its packages statically compiled, and really haven’t noticed a performance hit. I don’t have any benchmarks, but the feel is pretty much the same. I just don’t see why everything has to be dynamically linked, since that creates a web of dependencies, which is never fun to deal with.

        2. 1

          In theory you could get this with deduping in the virtual memory system.
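          One concrete form of that idea on Linux is KSM (kernel samepage merging). A minimal sketch, assuming CONFIG_KSM is enabled and /sys/kernel/mm/ksm/run is set to 1; note that KSM only merges anonymous pages explicitly marked MADV_MERGEABLE, so this illustrates the mechanism rather than deduplication of statically linked executable text as it exists today:

          ```c
          /* Illustrative only: KSM collapses identical anonymous pages that
           * have been marked MADV_MERGEABLE onto one copy-on-write page. */
          #define _DEFAULT_SOURCE
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>
          #include <unistd.h>

          #define REGION (4 * 1024 * 1024)   /* 4 MiB of identical data per mapping */

          int main(void) {
              char *a = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              char *b = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (a == MAP_FAILED || b == MAP_FAILED) { perror("mmap"); return 1; }

              memset(a, 0x42, REGION);   /* two mappings with byte-identical contents */
              memset(b, 0x42, REGION);

              /* Opt both ranges into KSM; identical pages get merged once the
               * ksmd scanner visits them. */
              if (madvise(a, REGION, MADV_MERGEABLE) ||
                  madvise(b, REGION, MADV_MERGEABLE))
                  perror("madvise(MADV_MERGEABLE)");

              puts("watch /sys/kernel/mm/ksm/pages_sharing grow, then Ctrl-C");
              pause();                   /* keep the mappings alive while ksmd scans */
              return 0;
          }
          ```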

          1. [Comment removed by author]