1. 55
    1. 39

      As a (former) application author, I find it very hard to sympathize with distro packagers when their opinions, and the patches they make based on them, continue to be responsible for a good chunk of bug reports that cannot be reproduced outside of their distro. Why should I cater to the whims of multiple Linux distros, and what do I get out of putting more work into a product I already provide for free? Imagine if the Apple App Store, on top of placing random restrictions on application submissions, added random patches to your application and was not sufficiently careful about which of them break the end-user experience. That is what upstream maintainers have to deal with, and they don’t even get paid for it.

      See also Linus on static linking and distro packaging.

      Keep in mind that 1) this is literally only a problem on Linux distros or other third parties repackaging applications and imposing their opinions on everybody, and 2) the actual context of this blog post is that the author is mad at Python packages using Rust dylibs; it seems his sense of entitlement has not significantly improved since then.

      1. 16

        Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

        If you absolutely want to provide your own binaries directly to endusers, as of 2020 there are things like Docker images and AppImage now so you can bundle what you need at this level.

        So while we don’t have Linus’ dive tool in Void Linux yet, it looks easy to package, and once that is done, the maintainers will take care to provide the newest version to users, taking that work off your hands.
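        (For the curious: a Void package is driven by a short shell-variable template in void-packages. The sketch below is hypothetical, not the real Subsurface template, so every field value is illustrative.)

        ```
        # Template file for 'subsurface' – hypothetical sketch, values illustrative
        pkgname=subsurface
        version=5.0.0
        revision=1
        build_style=cmake
        short_desc="Linus' divelog program"
        maintainer="you <you@example.com>"
        license="GPL-2.0-only"
        homepage="https://subsurface-divelog.org"
        ```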

        We also generally only try to patch build issues and security fixes that did not make it into a release yet. So often, users of our binary packages get fixed versions quicker than upstream.

        1. 6

          Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

          I think the point of cognitive dissonance here is that what distro maintainers want often makes application developers’ lives harder. Dynamic linking doesn’t work well for many application developers, because libraries break even when they don’t change “major” versions: that’s just a fact of life. No software development process is perfect, and the application developer can’t reasonably test against every different patch that every different distribution applies to every different library. Being able to just drop a binary onto a machine and be confident it’ll work the same on that machine as it does on your own is a selling point of languages like Go and Rust.

          And if you want to change the libraries used for these languages it’s not exactly hard. Just change the go.mod or Cargo.toml to point to the library you want it to use, rather than the library it’s currently using, and rebuild.
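          For illustration, in Go that swap is a one-line replace directive in go.mod (module paths here are invented for the example); Cargo has an equivalent [patch] mechanism:

          ```
          // go.mod – module paths are illustrative
          module example.com/app

          go 1.16

          require example.com/liblob v1.15.0

          // build against a local, patched checkout instead of the pinned version
          replace example.com/liblob => ../liblob-patched
          ```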

          If you absolutely want to provide your own binaries directly to endusers, as of 2020 there are things like Docker images and AppImage now so you can bundle what you need at this level.

          Docker and co are worse for security than static linking. Packaging as a Docker container incurs all of the downsides of static linking, and also all of the downsides of any outdated packages in the image. Static linking only distributes the libraries you need: containers distribute effectively an entire OS minus the kernel (and also the libraries you need).

          Docker as a solution only makes sense if application developers want both dynamic linking and static linking; dynamic if you install it on the host, and effectively-static if you run it as a container. But the core issue is that many application developers do not want dynamic linking! If you do not want dynamic linking, static linking is better than using containers.

          1. 4

            I think the article confuses two separable things:

            • Bundling in the shipped product.
            • Provenance of inputs.

            The former is a problem in terms of computational resources, but not much else. If a program statically links its dependencies (or uses C++ header-only libraries, or whatever), then you need to redo at least some of the build every time there’s an update (and generally you redo the whole build, because incremental builds after dependency updates are flaky). The FreeBSD project can rebuild the entire package collection (30,000+ packages) in under two days on a single machine, so in the era of cloud computing that’s a complete non-issue unless you’re running Gentoo on an old machine.

            The latter is a much bigger problem. If there’s a vulnerability in libFoo, a distro bumps the version of libFoo, and anything that has libFoo as a build-time dependency is rebuilt. Security update shipped; we just burned some cycles doing the rebuild (though, in the case of static linking, possibly a lot fewer than we’d burn by doing dynamic linking on every machine that ran the program). If a program has vendored its dependency on libFoo, there’s no metadata conveniently available to the distribution that tells anyone it needs to be rebuilt against a newer libFoo. It’s up to the program author to issue a security advisory, bump the library version, and so on. The distro will keep shipping the same library for ages without any knowledge.

            Things like Docker make this worse because they make it trivial to write custom things in the build that grab source from random places and don’t record the provenance in an auditable structure. If I have an OCI image, I have absolutely no idea what versions of any libraries I’m running. They may be patched by the person who built the container to avoid a bug that caused problems for a specific program and that patch may have introduced another vulnerability. They may be an old version from some repo. They may be the latest trunk version when the container was released.

        2. 5

          Security-wise, Docker images are about as bad as static linking for the end user.

          1. 3

            Of course, but it’s easier on the entire supply chain in the 99.9% of cases where there is no security problem.

            1. 9

              99.9%? Do you mean 40%?


              “Over 60 percent of the top Docker files held a vulnerability that had a Kenna Risk Score above 330; and over 20 percent of the files contained at least one vulnerability that would be considered high risk under Kenna’s scoring model,” […] the average (mean) number of CVEs per container is 176, with the median at 37.

          2. 3

            Yes, and static linking has a known solution for security updates: the distro rebuilds from updated source.

            1. 3

              Yes, but this needs to be done so often and so broadly that Debian, at least, seems to do regular rebuilds of nearly everything in unstable every few weeks, and declares that software written in Go has no proper security support in Debian 10 Buster: security updates will only be provided via the minor stable updates roughly every two months. Still a PITA, hence q.e.d.

        3. 5

          If you absolutely want to provide your own binaries directly to endusers

          You say this like it’s a method of last resort, but this is overwhelmingly how software authors prefer to package and distribute their applications. There’s lots of good reasons for that, and it’s not going to change.

        4. 1

          Ideally you don’t need to do anything except not make distro maintainers’ lives harder.

          I don’t even need to do that. Again, I am providing free work here.

          If you absolutely want to provide your own binaries directly to endusers, as of 2020 there are things like Docker images and AppImage now so you can bundle what you need at this level.

          I am fairly sure if people started to do that at scale, distro maintainers would complain all the same as they do about static linking.

          So while we don’t have Linus’ dive tool in Void Linux yet, it looks easy to package, and once that is done, the maintainers will take care to provide the newest version to users, taking that work off your hands.

          You’re wholly missing the point with this sentence. The fact that we’re in a position where we need to build applications per-distro is unsustainable. There is very little work in building a static binary on any other platform.

          We also generally only try to patch build issues and security fixes that did not make it into a release yet. So often, users of our binary packages get fixed versions quicker than upstream.

          Yes, and then users report regressions in a version that is not supposed to contain the patch that introduced them. This is literally what I am complaining about.

      2. 6

        Keep in mind that 1) this is literally only a problem on Linux distros or other third parties repackaging applications and imposing their opinions on everybody, and 2) the actual context of this blog post is that the author is mad at Python packages using Rust dylibs; it seems his sense of entitlement has not significantly improved since then.

        How is this relevant to static linking and the discussion about its security issues?

        1. 3

          Because it’s the reason this discussion continues to exist.

          1. 3

            So, in summary: people are still angry about cryptography and Rust, so they keep posting roundabout takes on it and get onto news aggregator sites to hawk their positions rather than work on a solution? I’m really not sure how that’s productive for anyone.

            1. 1

              I publish static binaries for my applications. Now I have a third party who wants to redistribute my free work but wants me to change the way I write software so their use of my free work gets easier (for a debatable value of easier).

              Frankly I don’t see a problem I have to solve. My way works just fine on Windows.

              1. 1

                At this point it’s up to all the parties to coordinate. It’s obvious that the different parties have different perspectives, desires, and annoyances. If you put yourself in the shoes of any of the various parties (application developers, distro maintainers, application users, distro users), and there’s plenty of each on this thread and the HN version of this link, then I think you can see the many angles of frustration. I don’t think getting angry on message boards is going to settle this debate for anyone, unless you’re just looking to vent, which I’d rather see in chatrooms than on lobste.rs.

            2. [Comment removed by author]

      3. 5

        This is only a problem on Linux. The fact that anybody can create a Linux distribution means there are a lot of systems that are largely similar and yet wholly incompatible with one another. Bazaar-style development has encouraged this pattern and, as such, we have a fragmentation of Linux distributions with just the tiniest little differences that make packaging an app in a universal fashion nearly impossible. Like it or not, cathedral-style systems do not suffer from this problem. You can count on the libc and loader to exist in a well-known and understood location in FreeBSD, Windows, and macOS. Sure, there are going to be differences between major versions, but not so much as the difference between glibc and musl.

        Having your own packaging system then frees you, the application developer, from having to wait on the over 9,000 different Linux distributions to update their packages so that you can use a new shiny version of a dependency in your app. Furthermore, there are plenty of commercial, proprietary, software packages that don’t need to move at the same cadence as their deployed Linux distribution. The app might update their dependencies more frequently while in active development or less frequently if the business can’t justify the cost of upgrading the source code.

        To lay it out: this situation is not intrinsic to Linux itself; rather, it exists because of Linux’s fragmentation… and secondarily as a result of the friction associated with walled-garden ecosystems like Apple’s.

    2. 32

      Wow, this blog post is so lacking in empathy for users that I’m surprised it made it onto a reputable distro’s blog. Instead of spilling 1000 words on why “static linking is bad”, maybe spend a little time thinking about why people (like me) and platforms (like Go/Rust et al.) choose it. The reason people like it is that it actually works, and it won’t suddenly stop working when you change the version of openssl in three months. It doesn’t even introduce security risks! The only difference is you have to rebuild everything on that new version, which seems like a small price for software that works, not to mention that rebuilding everything will also re-run the tests on that new version. I can build a Go program on NixOS, ship it to any of my coworkers, and it actually just works. We are on a random mix of recent Ubuntu, CentOS 7, and CentOS 8, and it all just works together. That is absolutely not possible with dynamic linking.

      1. 20

        It works well if all you care about is deploying your application. As a distro maintainer, I’m keeping track of 500 programs, and having to care about vendored/bundled versions and statically linked dependencies multiplies the work I have to do.

        1. 24

          But … that’s a choice you make for yourself? No application author is asking you to do that, and many application authors actively dislike that you’re doing that, and to be honest I think most users don’t care all that much either.

          I’ve done plenty of packaging of FreeBSD ports back in the day, and I appreciate it can be kind of boring, thankless, “invisible” gruntwork and, at times, frustrating. I really don’t want to devalue your work or sound thankless, but to be honest I feel that a lot of packagers are making their own lives much harder than they need to be by sticking to a model that a large swath of the software development community has, after due consideration and weighing all the involved trade-offs, rejected and moved away from.

          Both Go and Rust – two communities with pretty different approaches to software development – independently decided to prefer static linking. There are reasons for that.

          Could there be some improvements in tooling? Absolutely! But static linking and version pinning aren’t going away. If all the time and effort spent on packagers splitting things up would be spent on improving the tooling, then we’d be in a much better situation now.

          1. 14

            …but to be honest I feel that a lot of packagers are making their own lives much harder than they need to be by sticking to a model that a large swath of the software development community has, after due consideration and weighing all the involved trade-offs, rejected and moved away from.

            I think this is a common view but it results from sampling bias. If you’re the author of a particular piece of software, you care deeply about it, and the users you directly interact with also care deeply about it. So you will tend to see benefits that apply to people for whom your software is of particular importance in their stack. You will tend to be blind to the users for whom your software is “part of the furniture”. From the other side, that’s the majority of the software you use.

            Users who benefit from the traditional distribution packaging model for most of their software also find that same model to be painful for some “key” software. The problem is that what software is key is different for different classes of user.

            1. 10

              A big reason people ship statically linked binaries is that they’re easier to use without frills, which especially benefits users who aren’t deeply invested in the software.

              1. 10

                For me personally as an end user, if a program is available in apt-get then I will install it from apt-get first, every time. I don’t want to be responsible for tracking updates to that program manually!

                1. 2

                  I do that as well, but I think “apt-get or manual installs” is a bit of a false dilemma: you can have both.

      2. 8

        Static linking does introduce a security risk: it renders ASLR ineffective. Static linking creates a deterministic memory layout, making ASLR moot.

        1. 5

          Untrue; look up static-PIE executables. Looks like OpenBSD did it first, of course.

          1. 3

            I believe static PIE can only randomize a single base address for the whole statically linked executable, unlike a dynamically linked PIE executable where every loaded PIC object receives its own randomized base address.

          2. 2

            I’m very familiar with static PIE. I’m unsure of any OS besides OpenBSD that supports it.

            1. 4

              Rustc + musl supports it on Linux; since gcc has a flag for it, I imagine it’s possible to use it for C code too, but I don’t know how.

            2. 2

              It was added to GNU libc in 2.27 from Feb 2018. I think it should work on Linux?

              1. 5

                Looks like it works on Linux with gcc 10.

                $ uname -a
                Linux phoenix 5.10.0-2-amd64 #1 SMP Debian 5.10.9-1 (2021-01-20) x86_64 GNU/Linux
                $ gcc -static-pie hello.c
                $ ./a.out
                Hello world!
                $ ldd a.out
                	statically linked

                Did a bit of rummaging in the exe header; I’m not 100% sure what to look for to confirm it there, but it had a relocation section and all symbols in it were relative to the start of the file, as far as I could tell.

                Edit: Okay, it appears the brute-force way works. I love C sometimes.


                #include <stdio.h>
                int main() {
                    int (*p)() = main;
                    printf("main is %p\n", p);
                    return 0;
                }


                $ gcc aslr.c
                $ ldd a.out
                	linux-vdso.so.1 (0x00007ffe47d2f000)
                	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2de631b000)
                	/lib64/ld-linux-x86-64.so.2 (0x00007f2de6512000)
                $ ./a.out; ./a.out; ./a.out
                main is 0x564d9cf42135
                main is 0x561c0b882135
                main is 0x55f84a94f135
                $ gcc -static aslr.c
                $ ldd a.out
                	not a dynamic executable
                $ a.out; a.out; a.out
                main is 0x401c2d
                main is 0x401c2d
                main is 0x401c2d
                $ gcc -static-pie aslr.c
                $ ldd a.out
                	statically linked
                $ a.out; a.out; a.out
                main is 0x7f4549c07db5
                main is 0x7f5c6bce5db5
                main is 0x7fd0a2aaedb5

                Note ldd distinguishing between “not a dynamic executable” and “statically linked”.

        2. 3

          Doesn’t it stop other kind of attacks on the other hand?

          1. 3

            What attacks would static linking mitigate?

            1. 3

              I don’t know much about security, and this is an honest question. My understanding is that ASLR is mostly meant to protect a compromised executable against stack-overflow and similar attacks, right? Aren’t those problems mostly a thing of the past for the two languages that favor static linking, like Go and Rust?

            2. 3

              I really have no idea, that was an honest question!

              1. 6

                Those would be examples of local attacks. ASLR does not protect against local attacks. So we need to talk about the same threat vectors. :)

            3. 1

              There are a lot of code-injection vulnerabilities that can occur as a result of LD_LIBRARY_PATH shenanigans. And if you’re not building everything RELRO, dynamic linking has structures like the PLT/GOT that contain function pointers the program will blindly jump through, making memory-safety vulnerabilities easier to exploit.

              1. 1

                As an author of open source malware specifically targeting the PLT/GOT for FreeBSD processes, I’m familiar with PLT/GOT. :)

                The good news is that the llvm toolchain (clang, lld) enables RELRO by default, but not BIND_NOW. HardenedBSD enables BIND_NOW by default. Though, on systems that don’t disable unprivileged process debugging, using BIND_NOW can open a new can of worms: making PLT/GOT redirection attacks easier over the ptrace boundary.

    3. 13

      I can understand that this is frustrating from a packager’s point of view, but I personally really like the reliability of static linking. Additionally, some of the modern languages provide really good tooling for managing this stuff. Want to figure out which Go binaries need to be updated for a security vulnerability? You can list the exact dependency version built into them with go version -m ./path/to/binary. Go modules explicitly prevents modules other than the top-level one from pinning dependency versions, which also limits the extent of the update-pinned-versions nightmare (though it doesn’t completely eliminate it).

      1. 7

        go version -m won’t tell you if any of the packages have problems: you need to manually look up every dependency, which can be a fair amount of work if there are a whole bunch. NPM, for example, will warn you on npm install. I think some more work on the Go tooling is needed here.

        But yeah, the path forward is clearly in better tooling, not reverting everything to C-style development anno 1992.

        1. 4

          I think we’re starting from a distributor’s position of “I have a bag of binaries, and a bag of vulnerable package versions. How do I match them up?” The dependency lookup was already happening, or is a given.

    4. 12

      This also implies that you need to have a system that keeps track of what library versions are used in individual programs.

      This seems relatively easy to implement compared to making everyone have stable ABIs for everything, including languages that can’t have one.

      The nu-school package managers have machine-readable lockfiles with exact versions of all libraries used. A distro could create and archive such lockfiles to be able to later know exactly which of their statically-built binaries contain vulnerable dependencies.
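      As a rough sketch of that idea (module names and hashes invented for the example), matching an archived go.sum-style lockfile against a known-vulnerable version takes only a few lines:

      ```go
      package main

      import (
          "bufio"
          "fmt"
          "strings"
      )

      // An archived lockfile in go.sum format; contents are illustrative.
      const archived = `example.com/liblob v1.15.0 h1:abc=
      example.com/liblob v1.15.0/go.mod h1:def=
      example.com/other v2.0.1 h1:ghi=
      `

      // containsVulnerable reports whether the archived lockfile pins
      // module at the known-vulnerable version.
      func containsVulnerable(lock, module, version string) bool {
          sc := bufio.NewScanner(strings.NewReader(lock))
          for sc.Scan() {
              f := strings.Fields(sc.Text())
              if len(f) >= 2 && f[0] == module &&
                  strings.TrimSuffix(f[1], "/go.mod") == version {
                  return true
              }
          }
          return false
      }

      func main() {
          // Which archived binaries were built with the bad liblob?
          fmt.Println(containsVulnerable(archived, "example.com/liblob", "v1.15.0")) // true
          fmt.Println(containsVulnerable(archived, "example.com/other", "v1.0.0"))   // false
      }
      ```

      A distro tool would run something like this over every archived lockfile to get the rebuild list for an advisory.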

      1. 5

        In Go’s case, you can simply inspect the executable (using go version -m path/to/exe) to display the Go version used in the build, plus all of the statically linked modules’ versions and checksums, so you don’t even need to keep around any extra build artifacts to determine which executables are vulnerable.
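        And if you’d rather do that check from code than shell out, the data go version -m reads is also exposed to the program itself via the standard library; a minimal sketch:

        ```go
        package main

        import (
            "fmt"
            "runtime/debug"
        )

        func main() {
            // ReadBuildInfo returns the module information the Go toolchain
            // embedded into this binary at build time – the same data that
            // `go version -m <binary>` prints for it.
            info, ok := debug.ReadBuildInfo()
            if !ok {
                fmt.Println("no build info embedded")
                return
            }
            fmt.Println("main module:", info.Main.Path)
            for _, dep := range info.Deps {
                fmt.Println(dep.Path, dep.Version)
            }
        }
        ```

        (A standalone build with no third-party imports will simply print an empty dependency list.)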

        I am unfamiliar with Gentoo’s tooling, but I do know that OpenBSD’s Go ports contain the complete package list of all dependency modules that will be required for the build to succeed (this is necessary to download everything ahead of time, so the build itself can be done offline and in a chroot). This makes it rather trivial to discover all ports which need to be patched and rebuilt for any given vulnerability fix.

    5. 11

      There are basically 5 classes of programs that the author discusses:

      1. Fully dynamically linked programs. Only python programs are of this form.
      2. Partially dynamically linked programs. This describes C and C++ using dynamic libraries. The contents of the .c files are dynamically linked, and the contents of the .h files are statically linked. We can assume that the .h files are picked up from the system that the artifact is built on and not pinned or bundled in any way.
      3. Statically linked programs without dependency pinning. This describes rust binaries that don’t check in their Cargo.lock file to the repository, for instance.
      4. Statically linked programs with dependency pinning. This describes rust binaries that do check in their Cargo.lock file to the repository. (For simplicity’s sake we can include bundled but easily replaceable dependencies in this category.)
      5. Programs with hard to replace bundled dependencies (statically or dynamically linked, for instance they complain about rustc llvm which is dynamically linked).

      I think it’s pretty clear that what the author is interested in isn’t actually the type of linking, they are interested in the ease of upgrading dependencies. This is why they don’t like python programs despite the fact that they are the most dynamically linked. They happen to have tooling that works for the case of dynamically linked C/C++ programs (as long as the header files don’t change, and if they do, sucks to be the user) so they like them. They don’t have tooling that works for updating python/rust/go/… dependencies, so they don’t like them.

      They do have a bit of a legitimate complaint here that it takes longer to relink all the statically linked dependencies than the dynamically linked ones, but this strikes me as very minor. Builds don’t take that long in the grand scheme of things (especially if you keep around intermediate artifacts from previous builds). The benefit that we don’t have the C/C++ problem where the statically linked parts and the dynamically linked parts can come from different code bases and not line up strikes me as more than worth it.

      They seem to be annoyed with case 3 because it requires that they update their tooling, and maybe because it makes bugs resulting from the equivalent of header-file changes more immediately their problem. As you can guess, I’m not sympathetic to this complaint.

      They seem to be annoyed with case 4 because it also shifts the responsibility for breaking changes in dependencies slightly from code authors to maintainers, and their tooling is even less likely to support it. This complaint mostly strikes me as entitled: the people who develop the code they are packaging are, for the most part, doing so for free (this is open source, after all) and haven’t made some commitment to support packagers updating their dependencies, so why should it be their problem? If you look at any popular C/C++ library on GitHub, you will find issues asking for support for exactly this sort of thing.

      Category 5 does have some interesting tradeoffs in both directions depending on the situation, but I don’t think this article does justice to either side… and I think getting into them here would detract from the main point.

      1. 5

        I was especially surprised to see this article on a Gentoo blog, given that, as I remember Gentoo (admittedly from like 10-15 years ago), it was all about recompiling everything from source code, mainly For Better Performance, IIRC. And if you recompile everything from source anyway, I’d think that should solve this issue for “static linkage” too? But maybe Gentoo changed its ways since?

        Looking at some other modern technologies, I believe Nix (and NixOS) actually also provide this feature of basically recompiling from source, and thus should make working with “static” vs. “dynamic” linking mostly the same? I’m quite sure arbitrary patches can be (and are) applied to apps distributed via Nix. And anytime I nix-channel --upgrade, I’m getting new versions of everything AFAIK, including statically linked stuff (obviously also risking occasional breakage :/)

        edit: Hm, Wikipedia does seem to also say Gentoo is about rebuilding from source, so I’m now honestly completely confused why this article is on gentoo’s blog, of all the distros…

        Unlike a binary software distribution, the source code is compiled locally according to the user’s preferences and is often optimized for the specific type of computer. Precompiled binaries are available for some larger packages or those with no available source code.

        1. 11

          “Build from source” doesn’t really solve the case of vendored libraries or pinned dependencies. If my program ships with liblob-1.15 and it turns out that version has a security problem, then a recompile will just compile that version again.

          You need upstream to update it to liblob-1.16 which fixes the problem, or maybe even liblob-2.0. This is essentially the issue; to quote the opening sentence of this article: “One of the most important tasks of the distribution packager is to ensure that the software shipped to our users is free of security vulnerabilities”. They don’t want to be reliant on upstream for this, so they take care to patch this in their packages, but it’s all some effort. You also need to rebuild all packages that use liblob<=1.15.

          I don’t especially agree with this author, but no one can deny that recompiling only a system liblob is a lot easier.

        2. 2

          AIUI the crux of Gentoo is that it provides compile-time configuration - if you’re not using e.g. Firefox’s Kerberos support, then instead of compiling the Kerberos code into the binary and adding “use_kerberos=false” or whatever, you can just not compile that dead code in the first place. And on top of that, you can skip a dependency on libkerberos or whatever, that might break! And as a slight side-effect, the smaller binary might have performance improvements. Also, obviously, you don’t need libkerberos or whatever loaded in RAM. Or even on disk.

          These compile-time configuration choices have typically been the domain of distro packagers, but Gentoo gives the choice to users instead. So I think it makes a lot of sense for a Gentoo user to have strong opinions about how upstream packaging works.
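          Concretely, that per-user choice lives in Portage’s USE flag configuration; a hypothetical example (I haven’t checked whether Firefox actually exposes a kerberos flag, so treat the names as illustrative):

          ```
          # /etc/portage/package.use – hypothetical example
          # build Firefox without Kerberos support rather than shipping dead code
          www-client/firefox -kerberos
          ```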

          1. 2

            But don’t they also advertise things like --with-sse2 etc., i.e. specific flags to tailor the packages to one’s specific hardware? Though I guess maybe hardware is uniform enough nowadays that a typical Gentoo user wants exactly the same flags as most others?

      2. 4

        This complaint mostly strikes me as entitled, the people who develop the code they are packaging for the most part are doing so for free (this is open source after all) and haven’t made some commitment to support you updating their dependencies, why should it be their problem?

        Maybe I’m reading too much into the post, but the complaints about version pinning seem to imply that application maintainers should be responsible for maintaining compatibility with any arbitrary version of any dependency the application pulls in. Of course application maintainers want to specify which versions they’re compatible with; it’s completely unrealistic to expect an application to put in the work to maintain compatibility with any old version that one distro or another might be stuck on. The alternative is a combinatoric explosion of headaches.

        Am I misreading this? I’m trying to come up with a more charitable reading but it’s difficult.

        1. 3

          I’m not sure. When I wrote a Lua wrapper for libtls, I attempted to support older versions, but the authors of libtls didn’t do a good job of versioning macros. I eventually gave up on older versions when I switched to a different libtls. I am not happy about this.

        2. [Comment removed by author]

      3. 3

        Don’t JVM and CLR programs also do all dynamic linking all the time, or almost so?

        1. 2

          Er, when I said “Only python programs are of this form.” I just meant of the languages mentioned in the article. Obviously various other languages including most interpreted languages are similar in nature.

          I think the JVM code I’ve worked on packaged all its (Java) dependencies inside the JAR file, which seems roughly equivalent to static linking. I don’t know what’s typical in the open-source world though. I’ve never worked with CLR/.NET.

          1. 3

            It depends…

            • Desktop or standalone Java programs usually consist of a collection of JAR files, and you can easily inspect them and replace/upgrade particular libraries if you wish.
            • Many web applications that are deployed on a web container (e.g. Tomcat) or an application server (e.g. Payara) as WAR files have libraries bundled inside. This is a bit ugly and I do not like it much (you have to upload big files to servers on each deploy), however you can still do the same as in the first case – you just need to unzip and re-zip the WAR file.
            • Modular applications contain only their own code, plus a declaration of their dependencies in machine-readable form. So you deploy small files, e.g. on an OSGi container like Karaf, and dependencies are resolved during the deploy (the metadata lists the needed libraries and their supported version ranges). In this case you may have a library installed in many versions, and the proper one is linked to your application (other versions and other libraries are invisible despite being present in the runtime environment). The introspection is very nice: you can watch how the application is starting, see whether it is waiting for some libraries or other resources, install or configure them, and then the starting process continues.

            So it is far from static linking, and even if everything is bundled in a single JAR/WAR, you can easily replace or upgrade the libraries or do some other hacking or studying.
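            The “a WAR is just a zip you can unzip and re-zip” point can be sketched with Python’s standard library alone (the file and library names here are made up for illustration):

            ```python
            import io
            import zipfile

            # Build a toy "WAR": application classes plus a bundled library JAR,
            # mirroring the WEB-INF layout described above.
            buf = io.BytesIO()
            with zipfile.ZipFile(buf, "w") as war:
                war.writestr("WEB-INF/classes/App.class", b"...app bytecode...")
                war.writestr("WEB-INF/lib/somelib-1.0.jar", b"...old library...")

            # "Unzip, swap the library, re-zip": upgrade a bundled dependency
            # without touching the application's own code.
            src = zipfile.ZipFile(buf)
            out = io.BytesIO()
            with zipfile.ZipFile(out, "w") as new_war:
                for item in src.infolist():
                    if item.filename == "WEB-INF/lib/somelib-1.0.jar":
                        new_war.writestr("WEB-INF/lib/somelib-1.1.jar", b"...new library...")
                    else:
                        new_war.writestr(item, src.read(item))

            print(zipfile.ZipFile(out).namelist())
            ```

            A real upgrade would of course swap in an actual JAR and check for API compatibility, but the mechanics are exactly this simple.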

    6. 10

      Thanks! Finally someone calls out what causes a lot of pain when keeping software packages secure in Linux distributions — and, as it seems, not only in binary distributions. (Not to mention software distributed as Snap, AppImage, Flatpak, Docker images, etc., which all have similar issues and should be avoided if you care about being able to track security issues in your installed software.)

    7. 7

      I often read “static linking is bad” and similar things. This is a sensible statement as to what Linux distribution packagers should avoid due to the security issues mentioned in this article. However, what I do not understand is why upstream should not statically link? Say, if I have an open-source project that provides binaries for download on its website for those who really want the cutting-edge version, why not allow this? Linux packagers for distributions can just ignore those binaries, build from source and link dynamically. To me, this sounds like a win-win situation for everyone: the developers who probably want current versions, the normal users, who rely on their distributions, and the power users, who want the cutting-edge software.

      I think the problem is not static linking per se. It is about upstream not allowing dynamic linking. That however is an entirely different argument and does not fit into the sentence “static linking is bad”. It is totally possible for upstream to add a build option to allow dynamic linking. If they do not do that, that is what deserves criticism, not static linking by itself.

      1. 3

        When using “static linking” you get away with a lot more than when you use dynamic linking. The article mentions for example bundling your own copy of SQLite, or even a compiler. Also, think of all the software that is shipped including a (complete) browser runtime. Good luck untangling that as a downstream distribution, even when upstream is not hostile towards un-bundling efforts and accepts patches…

        Your argument makes sense when the “upstream” developers develop specifically for dynamic linking first, practice dependency hygiene and then do the little extra work to provide the static binaries themselves (or flatpak or snap). Going from static to dynamic is much more difficult than the other way around, especially when upstream keeps adding new dependencies every release from random git commits, or even private forks.

        The only safe way to use e.g. Go is to use 0 dependencies, so it is very easy to rebuild your app with new versions of Go and will always work ;-)

      2. 2

        I think the problem is not static linking per se. It is about upstream not allowing dynamic linking. That however is an entirely different argument and does not fit into the sentence “static linking is bad”. It is totally possible for upstream to add a build option to allow dynamic linking. If they do not do that, that is what deserves criticism, not static linking by itself.

        Eh, I think there’s more to it than this. The complaint here isn’t that upstream sometimes doesn’t allow dynamic linking – the complaint is essentially a demand that upstream should allow dynamic linking AND do an arbitrarily large amount of work to be compatible with any arbitrary version that downstream cares to dynamically link to fulfil their dependencies.

    8. 3

      I hear Cargo often hailed as a good package manager in programming circles:

      «If only we had Cargo for C and C++»

      But Meson is essentially a superset of that, in that it also covers the use case of this discussion – dynamically linking everything. The dependency model that Meson facilitates (and upstream devs are encouraged to use) is to dynamically link with installed dependencies if found, then fall back to downloading and static linking. Best of both worlds! Why isn’t this more widely known?
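      For the curious, the fallback model described above is nearly a one-liner in a `meson.build` (the package and subproject names here are illustrative; the fallback also needs a matching wrap file in `subprojects/`):

      ```meson
      # Prefer the system-installed library; if it is absent (or too old),
      # Meson falls back to building the bundled subproject and linking it
      # statically.
      zlib_dep = dependency('zlib', version : '>=1.2',
                            fallback : ['zlib', 'zlib_dep'])
      ```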

      As I’ve found out, it is possible to kind of implement this manually in CMake too, but you don’t get patching, for example, so there are some missing features.

    9. 3

      The real problem here is the very existence of “distributions”. They were useful back when no one had a reliable internet connection, but now they’re more of a hindrance. They talk about “security”, but they got their priorities wrong. Before security, we want something that works:

      • I want to download applications from any source I may want (most likely upstream).
      • When I download an application, I want it to work.
      • When I update my OS, I want old unpatched applications to still work for at least 5 years. 10 if possible.
      • When an application doesn’t work, I want to blame the upstream developer.
      • When an application has an unpatched vulnerability, I want to sue the upstream developer.
      • When I write an application, I want a stable (5-10 years) layer I know I can rely on.
      • When I depend on stuff, I want stability for several years at least, or the ability to pin/vendor/bundle.
      • I am okay with taking blame for bugs from a dependency I bundled: I have control after all.
      • I am okay with taking blame for unpatched vulnerabilities in a dependency I bundled: I’m supposed to watch out for those after all.
      • I am not okay with taking blame for downstream patches I didn’t write.

      Now this is not without disadvantages: if you let people download apps from wherever they want, they’ll end up with malware, which will cause various issues up to and including identity theft, ruin, or even physical death. You’ll also end up with upstream applications failing to patch their vulnerabilities quickly enough. While those disadvantages are worth addressing, they are not worth sacrificing the above. The real solution is to teach people how to get good. (Not just tell them, teach them. The ultimate goal is empowerment, not grief.)

      Back when I had no internet connection, the idea of a Linux distribution with all the software I would likely need, right there on the CD-ROM, was very appealing. Even more so since getting that software to work was hard for a beginner end user. Those distributions did that work for me, which was great. Nowadays however, everyone can download what they want, so distributions feel more like middle men, same way Apple’s App Store does. It’s unnecessary for anything other than the core utilities (by which I mean, “stuff without which the OS wouldn’t run the programs I tell it to run”).

      What distributions should be, is a non-exclusive curated collection of packages & programs. But that is little different from an application with lots of dependencies. The package maintainers effectively become responsible for the applications they distribute, including bugs and vulnerabilities. They have to patch them to maintain the quality and reputation of their curated collection.

      What distributions actually are, is a walled garden. The walls aren’t very high for sure, but they’re still higher than freaking Windows. When I choose Ubuntu, they kinda own me. I have no easy way to get my programs from elsewhere, be it RedHat, Gentoo, or upstream.

      First, we need to remove the middle men. Then we’ll discuss the finer points of static vs dynamic linking.

      1. 2

        I want to sue the upstream developer

        Are you going to sign a support contract with developers of each program/library you use? If not, the software is – by default – without any warranty. (note the recent CURL and VLC affairs)

        Maybe this is why there are commercial distributions that offer paid support and do this work. …and community distributions, that do similar work without any mandatory payments (and of course, without guarantees).

        1. 2

          The ability to void warranties may be limited by law, so there’s that. More realistically, I would welcome the ability to sign a support contract and have a reasonable certainty that the upstream vendor will actually be able to follow through (as opposed to being tied by the changing API of my OS).

          I don’t know about VLC, but I saw the CURL drama about being accused of… enabling piracy, I guess? That’s different: the software was working as intended, served their end users as intended, and that ended up harming someone else. For instance, if someone were to write a ransomware with my crypto library (whose small size makes it eerily suited for that), I wouldn’t like being held responsible.

    10. 2

      Let’s switch Gentoo et al to using a modern build system like Bazel that describes the complete dependency graph in a scalable way and be done with it.

      Most of these complaints boil down to “90s era perl we use for packaging tooling doesn’t work with 2020s software”.

      EDIT: my perspective is mostly about working on/around Debian, I haven’t tried to maintain Gentoo packages. I think the most promising ideas here are Nix/Guix and Distri.

      1. 4

        Rust and Go are still brittle in nixpkgs, too. All of the problems described in the article are applicable. The fact is that we had to tame their build toolchains, including dependency management, in order to make them compose nicely with packages written in other languages.

        As a relevant example to the article, if one wants to build a Rust extension module for CPython, then one must use buildRustPackage with a custom call to the Python build system (example). This is partially due to Rust and Go not defaulting to C linkage, and also partially due to not using standard GCC-style frontends which would allow for Rust or Go code to be transparently mixed with C code.

        That last point might sound strange, but compare and contrast with C++ or D for older languages, or Nim or Zig for newer languages. When two languages have roughly similar views of the same low-level abstract machine, then compiling their modules together into a single linked application becomes much easier.
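        To make the first point concrete, a minimal sketch of what such a derivation looks like in nixpkgs (the package name, source, and hashes are placeholders, not a real package):

        ```nix
        { rustPlatform, fetchFromGitHub }:

        rustPlatform.buildRustPackage {
          pname = "example-tool";   # hypothetical package
          version = "0.1.0";
          src = fetchFromGitHub {
            owner = "example";
            repo = "example-tool";
            rev = "v0.1.0";
            sha256 = "0000000000000000000000000000000000000000000000000000";
          };
          # Cargo dependencies are vendored and pinned by hash up front, so
          # the build is reproducible and offline at build time.
          cargoSha256 = "0000000000000000000000000000000000000000000000000000";
        }
        ```

        Mixing a Python build system into this, as the linked example does, means gluing a second language-specific toolchain onto the same derivation, which is exactly the composition problem described above.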

      2. 2

        FWIW I believe Gentoo’s portage uses Python.

        Perl up until version 5 has a very strong commitment to backwards compatibility.

        1. 3

          “90s era perl” was intended as a pejorative about an era of programming thinking, not a specific dig about perl. If you remember Perl fondly, we probably have very different ideas ¯\_(ツ)_/¯

          1. 2

            Thanks for the clarification!

            I do believe OpenBSD uses Perl for its ports/packaging system, and isn’t interested in changing it right now. While Gentoo probably made the right choice in using Python instead, it’s come back to bite them a bit. A big part of why Gentoo specifically is pissed about the cryptography component/module/library/package[1] introducing a Rust dependency is that Python is a core dependency of portage.

            Maybe it’s time for Linux distros to take a step back and consider whether offering “everything but the kitchen sink” to end users is really a good idea anymore.

            [1] I’m not hip to the Python lingo here

    11. 2

      As a user and software developer, I am glad I don’t use Gentoo after reading this blog post. It shows a complete lack of understanding of users and of the people writing the software he maintains.

      1. 3

        Do not believe that this attitude is constrained to Gentoo. Every Linux distro is pulling the same shit.

    12. 1

      Even in the free software category we can distinguish several degrees of how open, transparent and hacker-friendly the software is.

      On one side, there are almost black-boxes and blobs, requiring significant effort to study, change or reuse – due to software complexity, required tooling, computing power etc.

      On the other side, there are programs that are transparent, we can see their parts and internals, they support introspection and can be modified or extended even without recompiling or dealing with complexity of the whole program.

      I really appreciate it if I can gather metadata about a program (like dependencies or versions) in a uniform way (regardless of the programming language) through the standard package manager of my distribution, or if I can modify the behavior of the program e.g. by a simple LD_PRELOAD hack, improve/fix it by upgrading a library, or extend it by installing a new independently-built module (plug-in). I consider such software much more free, open and sane. Even though statically linked blobs (with source code lying somewhere on the internet) are also open and free, I do not prefer them.

    13. 1

      So basically one can write (is it possible?) a binary scanner that detects the versions of the statically linked libs in a binary application (or library) and sees whether those have any CVEs and exploits.

      1. 2

        Project Zero has looked at this a bit. As I understand it, it gets tricky because once you’re statically linking something, the compiler starts mashing up your code with the library code, so it’s not trivial to find library code. They had a 40% hit rate with the algo written up here: https://googleprojectzero.blogspot.com/2018/12/searching-statically-linked-vulnerable.html
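        A much cruder complement to that approach, which sometimes works because many C libraries embed a recognizable version banner in their binaries, is to simply grep for version strings. A minimal sketch (the signature patterns are illustrative, not exhaustive, and the “binary” here is a fabricated blob):

        ```python
        import re

        # Version banners that some well-known libraries bake into their
        # binaries; a statically linked copy carries the banner along.
        SIGNATURES = {
            "zlib":    rb"inflate (\d+\.\d+\.\d+)",
            "sqlite":  rb"SQLite version (\d+\.\d+\.\d+)",
            "openssl": rb"OpenSSL (\d+\.\d+\.\d+[a-z]?)",
        }

        def scan(blob: bytes) -> dict:
            """Return {library: version} for every signature found in the blob."""
            found = {}
            for lib, pattern in SIGNATURES.items():
                m = re.search(pattern, blob)
                if m:
                    found[lib] = m.group(1).decode()
            return found

        # Toy "binary" with an embedded zlib banner.
        fake_binary = b"\x7fELF...\x00inflate 1.2.11 Copyright 1995-2017 Mark Adler\x00..."
        print(scan(fake_binary))  # → {'zlib': '1.2.11'}
        ```

        Against real binaries this misses anything built with the banner stripped out, and it says nothing about which vulnerable code paths were actually inlined, which is exactly why the Project Zero work had to match code rather than strings.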