1. 82
    1. 25

      A lot of statistics here that, amusingly, would have been difficult or impossible to gather on a system that had followed this article to its logical conclusion.

      Over half of your libraries are used by fewer than 0.1% of your executables.

      It’s unclear to me that this supports a general conclusion that shared libraries aren’t. 1000 out of 1338 libraries (75%) are shared by at least two binaries. The median and mean number of sharers per library are 4 and 50 respectively.

      Incidentally, I have fewer shared libraries than ddevault: 724 in total, of which 542 are shared, with a median of 4 and a mean of 27 sharers.
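
      Something along these lines produces comparable counts on your own system (a rough sketch, assuming ldd and GNU coreutils; /usr/bin is just the obvious place to point it at):

      # count, for each shared library name, how many executables pull it in
      for bin in /usr/bin/*; do
          ldd "$bin" 2>/dev/null | awk '/=>/ { print $1 }'
      done | sort | uniq -c | sort -rn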

      Is loading dynamically linked programs faster? Findings: definitely not

      Hopefully not news to anyone.

      On average, dynamically linked executables use only 4.6% of the symbols on offer from their dependencies. A good linker will remove unused symbols.

      On the assumption that everything with 2340 symbols depends on exactly linux-vdso, ld-linux and libc, the average for these alone is 5.3%. There are 535 such binaries using 66761 symbols. I don’t really want to get into measuring the sizes of these symbols, particularly since I can’t get them from the source, but that sounds like at least a few full copies of libc to me. And that’s only the binaries that depend on libc alone.
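
      For a rough per-binary version of that measurement (a sketch assuming GNU binutils; the binary and the libc path are placeholders for whatever your system has):

      # dynamic symbols the binary imports vs. symbols libc exports
      nm -D --undefined-only /usr/bin/ls | wc -l
      nm -D --defined-only /usr/lib/libc.so.6 | wc -l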

      Will security vulnerabilities in libraries that have been statically linked cause large or unmanageable updates? Findings: not really

      1 GiB might be a large update for some. But I think the bigger problem here is that it’ll be harder to find the dependents of statically linked libraries, and you’ll be relying on packagers to rebuild those dependents—not a given if you use unpackaged or third-party-packaged software.

      1. 14

        1 GiB might be a large update for some.

        I am actually cloning the git source right now. I started it almost 25 minutes ago and the current status is:

        Receiving objects:  46% (133068/288976), 89.50 MiB | 76.00 KiB/s
        

        Not everyone is on a high-speed link – a large number of people aren’t – and minimizing file size has quite some value in its own right IMO.

        edit: it finished, started at 23:09, finished at 23:41; download size 138M. I have actually forgotten what I wanted to check in the git source now haha

        I do kind of like static linking, but it certainly comes with its own downsides. As with many things, it’s all about trade-offs.

        1. 5

          If you don’t need commit history, use:

          git clone --depth 1 <repo>
          

          It significantly reduces the time and cost of the download.

          1. 2

            Yeah, thanks; I use it often. In this case I wanted to get some data on the Perl usage, but I’ve also been planning to do some authorship statistics on various popular open source projects, so I figured I might as well grab the full history.

        2. 4

          Not everyone is on a high-speed link – a large number of people aren’t – and minimizing file size has quite some value in its own right IMO.

          I imagine shipping binary diffs could bring down the size considerably.

          1. 3

            or shipping code diffs and compiling locally

      2. 19

        A lot of statistics here that, amusingly, would have been difficult or impossible to gather on a system that had followed this article to its logical conclusion.

        The ecosystem has been all-in on shared objects for well over a decade and the tooling has been built alongside it. If we went all-in on static linking we might have similar tools for it. Already now I can think of several alternative approaches to determine most of these stats for statically linked programs, though they’re more time-consuming and would be improved by access to a theoretical tooling ecosystem which would exist around static linking.

        It’s unclear to me that this supports a general conclusion that shared libraries aren’t. 1000 out of 1338 libraries (75%) are shared by at least two binaries. The median and mean number of sharers per library are 4 and 50 respectively.

        Okay, but the number of binaries on my system is 5,688. To translate your figures proportionally, 2 binaries is 0.04%, 4 binaries is 0.07%, and 50 binaries is 0.8%.

        Is loading dynamically linked programs faster? Findings: definitely not

        Hopefully not news to anyone.

        Believe it or not someone actually made this claim on Mastodon while I was preparing this page, which is why I included it. Dynamic linking definitely suffers from some cargo culting.

        1 GiB might be a large update for some

        It’s not an update, it’s the sum of all updates for 2019. And that’s before using differential updates, which could theoretically reduce this to negligible amounts.

        it’ll be harder to find the dependents of statically linked libraries

        This information exists in your package manager. If you’re using third-party or hand-compiled software, aye, this is not going to help you, but it was also never going to help you with vulnerabilities in that software itself.
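
        For example, queries along these lines (package names are placeholders; in a statically linked world the interesting edge is a build dependency rather than a runtime one, but the metadata lives in the same database):

        apt-cache rdepends libfoo                 # Debian/Ubuntu
        pacman -Sii foo | grep 'Required By'      # Arch
        dnf repoquery --whatrequires foo          # Fedora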

    2. 11

      In support of the article:

      4496	libc
      4484	linux-vdso
      4483	ld-linux-x86-64
      2654	libm
      2301	libdl
      2216	libpthread
      

      linux-vdso doesn’t count. That one is automatically placed into every application by the kernel, and even statically linked applications can use it. Statically linked Zig binaries, for example, still take advantage of the vdso for clock_gettime (which is the main use case of this vdso).

      ld-linux-x86-64 doesn’t count either. That’s the dynamic linker itself, which is not needed for statically linked programs.

      Finally, libc, libm, libdl, and libpthread are all just glibc; they’re the same thing. There’s not really a point in linking them separately.

      So even more to the point of the article.

      1. 1

        This, besides Drew’s rebuttal, may be the most important comment on this story.

    3. 10

      Related post from Rob Pike in 2008: http://harmful.cat-v.org/software/dynamic-linking/

      1. 10

        For the lazy:

        From: Rob Pike robpike@gmail.com

        Subject: mmap and shared libraries

        Date: Wed, 5 Nov 2008 17:23:54 -0800

        When Sun reported on their first implementation of shared libraries, the paper they presented (I think it was at Usenix) concluded that shared libraries made things bigger and slower, that they were a net loss, and in fact that they didn’t save much disk space either. The test case was Xlib, the benefit negative. But the customer expects us to do them so we’ll ship it.

        So yes, every major operating system implements them but that does not mean they are a good idea. Plan 9 was designed to ignore at least some of the received wisdom.

        -rob

    4. 13

      Dynamic linking is crucial for a proper separation between platform and applications, especially when one or both are proprietary. Would Win32 applications that were written in the 90s still run (more or less) on Windows 10 if they had all statically linked all of the system libraries? I doubt it. And even if they did, would we really want to require applications to be rebuilt, on their developers’ release cycles, before their users could take advantage of improvements in system libraries? I think this concern also applies to complex free-software platforms like GNOME. (And platforms that target a broad user base do need to be complex, because the real world is complex.)

      1. 16

        I don’t think it’s a binary choice; in the case of Windows, most applications use the system’s kernel32.dll, user32.dll, and whatnot, but include other libraries like libwhatnot.dll in the application itself. It’s still “dynamically linked”, but ships its own libraries.

        This is also something that the Linux version of Unreal Tournament does for example: it uses my system’s libc, but ships with (now-antiquated) versions of sdl.so and such, which is how I’m still able to run a game from 1999 on a modern Linux machine.

        I think this kind of “hybrid approach” makes sense, and tries to get the best of both. I think it even makes sense for open source programs that distribute binary releases, especially for programs where it doesn’t really matter if you’re running the latest version (e.g. something like OpenTTD). I think this is also what systems like flatpak and such are doing (although I could be wrong, as I haven’t looked at it much).

        1. 8

          My understanding was that the OP was arguing for a binary choice. I think @ddevault’s reply reinforces that. I actually agree with you about the benefits of a hybrid approach: dynamic linking for platform libraries, static linking for non-platform libraries.

        2. 1

          especially for programs where it doesn’t really matter if you’re running the latest version (e.g. something like OpenTTD)

          Looks like you never played the OpenTTD multiplayer, right? :)

          1. 1

            I didn’t even know there is a multiplayer, haha; I actually haven’t played it in years. It was just the first fairly well-known project that came to mind 😅

            1. 1

              So, to clarify: OpenTTD requires the same version on the client and the multiplayer server to participate in a game. And it’s pretty strict about that; you can’t even patch the game while retaining the same version number. The same goes for the list of installed NewGRFs (gameplay extensions/content), but at least those can be semi-automatically downloaded client-side before joining.

              1. 1

                Yeah, I assumed as much. I think the same applies to most online games. Still, I can keep using the same old version with my friends for 20 years if it’s distributed as described above, because I want to play it on Windows XP for example, or just because I like that version more (and there are many other applications of course, George R.R. Martin using WordStar 4.0 is a famous example).

      2. 3

        In the case where an ABI boundary exists between usermode libraries, a lot of the arguments Drew is making here go away. When that occurs, 100% of programs are going to need those dynamically linked libraries, so the benefits of code sharing start to become apparent. (It is true, though, that dynamically resolving functions is going to slow down program loading on that system compared to one where programs invoke syscalls by index and don’t need a dynamic loader.)

        That said, I think statically linking on Windows is going to offer higher compatibility than you’re suggesting. The syscall interface basically is stable, because any Win32 program can invoke it, so when it changes things break. The reason I’m maintaining my own statically linked C library is because doing so allows my code to run anywhere, and allows the code to behave identically regardless of which compiler is used to generate that code. I’m using static linking to improve compatibility.

        One thing to note about Win32 also is to compare the commit usage of processes when running across different versions of the OS. The result is huge disparities, where new OSes use more memory within the process context. Just write a simple program that calls Sleep(INFINITE) and look at its memory usage. The program itself only needs memory for a stack, but it’s common enough to see multiple megabytes that’s added by system DLLs. Those DLLs are initializing state in preparation for function calls that the program will never make, and the amount of that initialization is growing over time.

        1. 1

          In the case where an ABI boundary exists, you definitely want static linking to ensure the ABI is sound. See https://thephd.dev/intmax_t-hell-c++-c .

          1. 3

            The context here is that I work for Microsoft and so did mwcampbell when he wrote that.

            As he mentioned, in order to allow the operating system to be updated independently from applications, there needs to be a compatible ABI somewhere. It could be between kernel and user, or it could be somewhere else, but it needs to be somewhere. When this type of separation exists, we don’t have the luxury to just statically link, since doing so would result in a combined Operating System+Application bundle that can only run one application at a time. The moment one kernel is running two programs and those three things are compiled independently, there needs to be an agreed upon interface.

            That compatible ABI needs to be designed with compatibility in mind. The article you’re linking to is correctly pointing out that intmax_t is not going to result in a stable ABI, and should not be used where ABI stability is required. Unfortunately since its stated purpose is to provide an interface between the C library and the application, and the C library is dynamically linked, this particular thing failed right out of the gate.

            What’s a bit strange with these articles is that when you work in a space that requires ABI stability, it becomes clear that any interface can be made stable by following a few simple principles. Unfortunately a lot of times those principles aren’t followed, and the result is an incompatible interface, followed by suggestions that the result is an inevitable consequence of dynamic linking. It’s not really possible to use any computing environment today that doesn’t have a stable ABI somewhere in order to allow various components to be updated independently. Heck, I’d argue that a web browser is basically a stable ABI, and the ability to update it without updating the entire web indicates that it’s able to provide a compatible interface.

            What this particular discussion is really about is saying that Windows ends up with compatible ABIs at multiple layers, including the syscall interface, as well as system provided usermode libraries. Anyone working on these edges won’t use something like intmax_t.

            1. 1

              I may not entirely agree with you, but I sure appreciate that context and see your point.

      3. 2

        especially when one or both are proprietary

        Proprietary software is bullshit and can be safely disregarded.

        Would Win32 applications that were written in the 90s still run (more or less) on Windows 10 if they had all statically linked all of the system libraries?

        If Win32 had a stable syscall ABI, then yes. Linux has this and ancient Linux binaries still run - but only if they were statically linked.

        would we really want to require applications to be rebuilt, on their developers’ release cycles

        Reminder that the only programs that matter are the ones for which we have access to the source code and can trivially rebuild them ourselves.

        And in any case, this can be turned around to work against you: do we really want applications to stop working because they dynamically linked to library v1, then library v2 ships, and the program breaks because the dev wasn’t around to patch their software? Software which works today, works tomorrow, and works the day after tomorrow is better than software which works today, is more efficient tomorrow, and breaks the day after tomorrow.

        1. 29

          Reminder that the only programs that matter are the ones for which we have access to the source code and can trivially rebuild them ourselves.

          I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

          If you want to argue “The only programs that respect your freedoms and don’t ultimately lead to the enslavement of their users are the ones for which we have access to the source code”, that’s totally reasonable and correct. By picking hyperbolic statements that are so easily seen to be so, you make yourself a lot more incendiary (and honestly sloppy-looking) than you need to be.

          And maybe coming off as a crank wins you customers, since there’s no such thing as bad press, but don’t be surprised when people point out that you’re being silly.

          1. 1

            I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

            And this is supposed to be evidence that proprietary programs matter and shouldn’t be disregarded? The context in discussion sites like this is that we can decide to change our programming practices for the programs that we have control over. The defining characteristic of proprietary software is that programmers do not have control, so discussion is irrelevant. Bring the production of baseband code into the public sphere and we can debate whether it should be using dynamic linking (I doubt it even does now).

            1. 1

              whoops I only meant to post one version of this comment…. my b

          2. 1

            I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

            So you would like to be able to dynamically link a binary with the microcontroller code in your microwave? Come on. If anything these examples reinforce the point that proprietary programs can be disregarded in discussions like this. I don’t think it’s hyperbolic or silly to say so.

        2. 19

          If Win32 had a stable syscall ABI, then yes. Linux has this and ancient Linux binaries still run - but only if they were statically linked.

          Except Windows and everyone else solved this at the dynamic linking level, and this goes far beyond just the syscall staples like open/read and extends to the entire ecosystem. Complex applications like games that use APIs for graphics and sound are far likelier to work on Windows and other platforms with stable whole-ecosystem ABIs. The reality is that real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

          That, and Linux (and Plan 9) are the aberration here, not the rule. Everyone else stopped doing this in the 90s if not earlier (SunOS added dynamic linking in the 80s and then, as Solaris, banned static libc in the early 2000s because of the compat issues it caused). FreeBSD and Mac OS technically allow it, but you’re on your own - when Mac OS changed a syscall or FreeBSD added inode64, the only broken applications were static Go binaries, not things linked against libc.

          That, and some OSes go to more extreme lengths to keep the raw syscall layer from becoming a stable ABI. Windows scrambles syscall numbers every release, OpenBSD forbids non-libc pages from making syscalls, and AIX makes you dynamically link to the kernel (because modules can add new syscalls at runtime and get renumbered).

          1. 4

            The reality is real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

            Or the 2000s. Getting Loki games like Alpha Centauri to run now is very hard.

          2. 1

            Complex applications like games that use APIs for graphics and sound are far likelier to work on Windows and other platforms with stable whole-ecosystem ABIs. The reality is that real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

            There are half a dozen articles about WINE running and supporting old Windows programs better than Windows 10.

            Examples:

            “I have a few really old Windows programs from the Windows 95 era that I never ended up replacing. Nowadays, these are really hard to run on Windows 10.”.

            “Windows 10 does not include a Windows XP mode, but you can still use a virtual machine to do it yourself.”

            I specifically remember there being a shitshow when Windows 10 came out because many applications straight up didn’t work anymore, yet they run under Wine.

            Try again.

            1. 8

              Sure, we can play this game of hearsay, but it’s hard to deny that if you have an application from 1993, Windows 10 is almost certainly likelier to run that binary than almost any other OS - and it does so with dynamic linking.

              Not to discredit Wine; they do a lot of great, thankless work. I’m more shocked that the claim I’m replying to was made, since it seems to ignore the actual situation with Windows backwards compatibility in order to score a few points for a pet theory.

              1. 2

                I’m more shocked the claim I’m replying to was made, since it seems like it was ignorant of the actual situation with Windows backwards compatibility to score a few points for their pet theory.

                I’ve never been too interested in Windows as a platform, what I do know is that a whole pile of people in my social group and the social groups I listen to, who use old Windows programs frequently, were ridiculously frustrated that their programs no longer work. And it became a case of “Windows programs I want to run are more likely to work on WINE than they are on Windows”.

                Sure, that has since been mitigated, but that doesn’t change the fact that for a time, WINE did run programs better than Windows. I’m deeply hurt by the idea that you think it was made to score points.

                1. 1

                  I’m deeply hurt by the idea that you think it was made to score points.

                  No, I referred to the parent of my initial comment.

            2. 5

              Would Wine have ever worked if all Windows programs were statically linked?

              1. 5

                Wine does take advantage of dynamic linking a lot (from subbing in Microsoft versions of a library to being able to sub in a Wine version in the first place)

              2. 1

                I think, yes. The more interesting question is, would Wine be easier to write if Windows programs were statically linked. My initial guess is yes, because you can ignore a lot of the system and just sub out the foundations. However, I do know that the Windows team did a lot of really, really abysmal things for the purpose of backwards compatibility, so who knows what kind of monstrosity wouldn’t run on static-Windows Wine simply because of that?

                We’ll never know.

                1. 1

                  How would you even write wine if Windows programs were statically linked? As far as I know, Wine essentially implements the system DLLs, and dynamically links them to each exe. Without that, Wine would have to implement the kernel ABI and somehow intercept syscalls from the exe. It can be done, that’s how gvisor works, but that sounds harder to me.

                  1. 1

                    I am very likely wrong (since they didn’t decide to go this route in the first place) but I feel that it might be easier to do that. The Kernel ABI is likely a much smaller surface to cover and you have much, much more data about usage and opportunities to figure out the behaviour of the call. As opposed to a function that’s only called a handful of times, kernel calls are likely called hundreds of times.

                    Of course, this doesn’t account for any programs that do, or rely on, some memory/process/etc. weirdness. Which I gather is probably a lot of them, given what Raymond Chen has put down in The Old New Thing.

        3. 4

          ancient Linux binaries still run - but only if they were statically linked

          Or if you have a copy of the whole environment they ran in.

          I guess that’s more common in the BSD world — people running FreeBSD 4.x jails on modern kernels.

        4. -9

          Proprietary software is bullshit and can be safely disregarded.

          Ah yes, the words of someone who doesn’t use computers to do anything anyone would consider useful.

          1. 13

            I disagree with @ddevault’s position, but can we please not let the discussion degenerate this way? I do think the work he’s doing is useful, even if I don’t agree with his extreme stances.

            1. -3

              I don’t give leeway to people who are abusive.

              1. 22

                But responding with an obvious falsehood, in such a snarky tone, just causes tensions to rise. Or do you truly believe that nothing @ddevault does with computers is useful?

                I think a more constructive response would be to point out that @ddevault is very lucky to be in a position where he can do useful work with computers without having to use proprietary software. Most people, and probably even most programmers (looking at the big picture), don’t have that privilege. And even some of us who could work full-time on free software choose not to, because we don’t all believe proprietary software is inherently bad. I count myself in the latter category; I even went looking for a job where I could work exclusively on free software, got an offer, and eventually turned it down because I decided I’m doing more good where I’m at (on the Windows accessibility team at Microsoft). So, I’m happy that @ddevault is able to do the work he loves while using and developing exclusively free software, but I wish he wouldn’t be so black-and-white about it. At the same time, I believe hyperbolic snark isn’t an appropriate response.

                1. 12

                  Much of my career was spent writing “bullshit” software which can, apparently, be “disregarded”. This probably applies to most of us here. Being so disrespectful and dismissive of people’s entire careers and work is more than just “incorrect” IMHO.

                  I like the word “toxic” for this as it brings down the quality of the entire conversation; it’s toxic in the sense that it spreads. I don’t want to jump to mdszy’s defence here or anything, and I agree with your response, but OTOH … you know, maybe not phrase things in such a toxic way?

              2. 3

                If I had to add a tag to those comments I’d use ‘idealist’ and that’s not necessarily bad. What do you find abusive in his comments?

              3. 2

                Abuse is about a mixture of effect and intent, and it depends on the scenario and the types of harm that are caused to determine which of those are important.

                I don’t think ddevault’s comment was abusive, because of the meaning behind it, and because no harm has been caused. I think the meaning of “Proprietary software is bullshit and can be safely disregarded” was, “I can’t interact or talk about proprietary software in a useful way, so I’ll disregard it”. The fact that it was said in an insulting form doesn’t make it a form of abuse, especially in context.

                In context, software made proprietary, is itself harming people who are unable to pay for it, and in a deep way. It’s also harming the way we interact with computers, and stifling innovation and development severely. I don’t think insulting proprietary software, which is by far the most dominant form of software, and the method of software creation that is supported by the deeply, inherently abusive system known as “capitalism”, that constantly exploits and undermines free software efforts, can be meaningfully called abuse when you understand that context. And I think people who are so attached to working on proprietary software that they get deeply hurt by someone insulting it, should take a good long introspective period, and rethink their attachment to it and why they feel they need to protect that practice.

                1. 1

                  Proprietary does not mean that it costs money.

                  1. 2

                    Of course not, but monetarily free software that does not provide the source code is worse, because there’s literally no excuse for them not to provide it. They do not gain anything from not providing the source code, but still they choose to lock users into their program, they do not allow for inspection to ensure that there is no personal data being read from the system, or that the system is not altered in harmful ways. They do not allow people to learn from their efforts, or fix bugs in what will soon be an unmaintained trash heap. And they harm historical archival and recovery efforts immensely.

                    Every example of “monetarily free but proprietary software” that I can think of either does very, very dubious things (like I-Orbit’s software, which is now on most malware scanners’ lists), or is old and unmaintained, and the only reason people use it is that they’re either locked into it from their prior use or it is the only thing that does that task. Those people will experience the rug being pulled from under them after a year or two as it slowly stops working, and might never be able to access those files again. That is a form of abuse.

                    1. 0

                      This is absolutely not as much of a massive societal issue as you make it seem. Perhaps spend your time thinking about more important things.

                      1. 1

                        That’s a nice redirect you have there. Flawlessly executed too, I literally would not have noticed it if I did not have intimate experience with the way abusers get you off topic and redirect questions about their own actions towards other people.

                        Anyway, I’ll bite.

                        I live with two grown adults, neither of which touch computers except when they absolutely have to, and I have observed the mental strain that they go to because programs they spent decades using, and had a very efficient workflow with, have stopped working. I also know dozens of other people who experience the same thing.

                        One of them literally starts crying when they have to do graphics work, which is part of their job as an artist, because there’s not enough time in the day for them to learn newer image editors, and because all of the newer options that actually do what they need are ridiculously intimidating, badly laid out, work in unexpected ways with no obvious remedy, and come with conflicting advice from common help sources. True, this could (and should) be solved by therapy, but it’s foolish to disregard the part that proprietary software has to play in this. Maybe you just don’t live around people whose main job is not “using a computer”?

                        I do not see what you have invested in proprietary software, such that you feel the need to call someone’s offhand insult against it, “abusive”.

                        1. 1

                          Kindly tell me more about how anyone who isn’t neurotypical has been welcomed with open arms into FOSS communities. I’ll wait.

                          1. 2

                            I myself am a neuro-atypical and queer software developer. Do you want to talk down to me some more?

                            Again you are redirecting the question towards a different topic. The topic we were originally talking about is “Is insulting proprietary software abusive”, and now you want to talk about “Queer and Neuro-atypical acceptance in Free Software communities”.

                            You still haven’t told me how insulting proprietary software is abusive. I’m still very interested in reading your justification for that.

                            Just because the culture that’s grown around free software (and, to be honest, that free software has grown around) is very, very shitty, doesn’t mean that non-free software is good, or something worthy of protection. The culture around free software is fundamentally one of sharing, that’s literally the core tenet. The culture around proprietary software is worse, since it’s literally only about gate-keeping, that’s the only foundation it has. Free software can be improved by changing the culture. There is nothing to change about proprietary software.

                            It’s a real shame that many of the more prolific founders of free software were libertarians, but that is still a mistake that we can correct through social, cultural changes and awareness.

                            Proprietary software is fundamentally an offshoot of Capitalism, and wouldn’t exist without that. It literally only exists under an abusive system, and supports it. The contributions of free software members are preyed upon by capitalist companies for gain, so that they can profit off the backs of those people without giving back.

                            1. 1

                              Fun fact: not once did I say that ddevault is abusive by saying proprietary software is shit. He’s just abusive. I’ve witnessed him be abusive to my friends by saying they’re awful people for using non-free software.

                              Fuck capitalism, fuck ddevault.

                              1. 1

                                Fun fact: not once did I say that ddevault is abusive by saying proprietary software is shit. He’s just abusive. I’ve witnessed him be abusive to my friends by saying they’re awful people for using non-free software.

                                Fuck capitalism, fuck ddevault.

                                Ah! I didn’t pick up on that, sorry!

                                1. 1

                                  I apologize as well.

              4. -1

                Labeling ddevault’s position as abusive is itself abusive, even if you think his position is wrong.

                1. 1

                  I don’t think someone who genuinely believes that someone was being abusive, and calling that out, can themselves be called “abusive”. Abuse is about a mixture of effect and intent, and it depends on the scenario and the types of harm that are caused to determine which of those are important.

                  I don’t think ddevault’s comment was abusive, because of the meaning behind it, and because no harm has been caused. I think the meaning of “Proprietary software is bullshit and can be safely disregarded” was, “I can’t interact or talk about proprietary software in a useful way, so I’ll disregard it”. The fact that it was said in an insulting form doesn’t make it a form of abuse, especially in context.

                  In context, software made proprietary, is itself harming people who are unable to pay for it, and in a deep way. It’s also harming the way we interact with computers, and stifling innovation and development severely. I don’t think insulting proprietary software, that is supported by the deeply, inherently abusive system known as “capitalism”, that is by far the most dominant form of software, and the method of software creation that exploits and undermines free software efforts, can be meaningfully called abuse when you understand that context. And I think people who are so attached to working on proprietary software that they get deeply hurt by someone insulting it, should take a good long introspective period, and rethink their attachment to it and why they feel they need to protect that practice.

              5. [Comment from banned user removed]

                1. 0

                  Okay.

      4. 1

        How many binaries from Windows 95 are useful today? I’m not sure that’s a strong argument.

        Software that is useful will be maintained.

        1. 3

          This is a short-sighted argument. Obscure historic software has its merit, even if the majority of people won’t ever use it.

    5. 7

      I agree with Drew’s general sentiment here, but note that linkers can only remove code at the function level, and the number of functions a module uses is not a great indicator of the amount of underlying code.

      As an example, I maintain a small C runtime library. printf() is a common function for programs to use. But since linking is at the function level, there’s no way to remove code for format specifiers that the program is not using. Since it doesn’t know what the output device is, code for all potential output devices needs to be included. Since my C runtime runs on Windows, that means character encoding support for UTF-16, UTF-8 and others, as well as VT processing code, including low-level console calls.

      I’d expect the same general effect to be present in other libraries, including UI libraries. Even if the program knows it’s not going to perform certain operations on a window, the library is going to create an entire window with all of the data structures to support those operations. Things like C++ are particularly evil because once an object with virtual function pointers is loaded, the compiler is going to resolve those function pointers and all of their dependencies whether they are ever called or not.

      At $WORK this drives me crazy, because we have common static libraries that, when used, can add 300 KB-3 MB of code to a program, even if only one or two functions are used.
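
      For what it’s worth, with GCC/Clang and GNU ld the function-level removal has to be asked for explicitly, roughly like this (a sketch; and even then the linker can only drop whole functions, it can’t carve the unused format specifiers out of printf):

      # place each function/object in its own section, then let the static
      # link discard the sections nothing references
      cc -Os -ffunction-sections -fdata-sections -c app.c
      cc -static -Wl,--gc-sections app.o -o app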

      1. 10

        You have a good point. The library’s interface basically needs to be designed from the beginning for dead code elimination. One thing I like about newer languages like Rust and Zig, with their powerful compile-time metaprogramming features, is that you can often do this kind of design without sacrificing developer convenience. I suppose the same is true of modern C++ as well. The reason why printf is such a perfect counter-example is that C doesn’t have the language features to allow the developer convenience of printf without sacrificing dead code elimination.

        This reminds me of the last time I played with wxWidgets. A statically linked wxWidgets hello-world program on Windows was about 2.5 MB. I didn’t dig very deeply into this, but it seems that at least part of the problem is that wx’s window procedure automatically supports all kinds of features, such as printing and drag-and-drop, regardless of whether you use them. I suppose a toolkit designed for small statically linked executables would require the application developer to explicitly enable support for these things. And the window procedure, instead of having a giant switch statement, would do something like looking up the message ID in a map and dispatching to a callback. So when an application enabled support for, say, drag and drop, the necessary callbacks would be added to that map.

        1. 3

          Rust’s formatting machinery isn’t very easy to do DCE on either. https://jamesmunns.com/blog/fmt-unreasonably-expensive/

          The formatting machinery has to make the unfortunate call of either heavy monomorphization or heavy dynamic dispatch. If your executable is inevitably going to make lots of calls to the formatter, the dynamic dispatch approach will result in less code duplication, but it makes it harder to do dead code elimination…

        2. 1

          Tangentially, it is very noticeable in the JS ecosystem that some libs have a lot of effort put into making tree shakers succeed at eliminating their code. By default, not so much.

      2. 3

        I agree with Drew’s general sentiment here, but note that linkers can only remove code at the function level, and the number of functions a module uses is not a great indicator of the amount of underlying code.

        I don’t think that’s the case if you compile with ‘-flto’. I’d assume the code generator is free to inline calls and remove things that can be stripped at the call site.

        1. 2

          BTW, ‘-flto’ is one of the great reasons to use static linking. It can turn suboptimal APIs (those using enum values for setters/getters, like glGet()) into something decent by removing the jump tables.
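
          A quick way to see the effect (a sketch; the file names are placeholders, and both translation units need to be compiled with -flto so the cross-module optimization happens at link time):

          cc -O2 -flto -c app.c
          cc -O2 -flto -c lib.c
          cc -O2 -flto -static app.o lib.o -o app
          size app   # compare against a build without -flto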

          1. 1

            Totally agree that link time code generation is a huge improvement in terms of the amount of dead code elimination that can occur. But at the same time, note the real limitations: it can inline getters and setters, and strip code out from a function call with a constant argument of a primitive data type, but can it strip code from printf? What happens with virtual function pointers - is it going to rearrange in-memory structures when it notices particular members are never accessed? The real challenge linking has is the moment it hits a condition it can’t resolve with certainty, then all of the dependencies of that code get brought in.

            Put another way, instead of looking at what the linker can and can’t do, look at what actually happens. How large is a statically linked hello world program with Qt? Gtk? wxWidgets? Today, it’s probably fair to ask about a statically linked Electron program, which won’t strip anything because the compiler can’t see which branches the dynamically loaded HTML or JS is going to use. What would get really interesting is to use a coverage build and measure the fraction of code that actually executes, and I’ll bet with conventional UI toolkits that number is below 10%.

            It really looks to me like the size and complexity of code is increasing faster than the linker’s ability to discard it, which is the real reason all systems today are using dynamic linking. Drew’s points about the costs are legitimate, but we ended up dynamically linking everything because in practice static linking results in a lot of dead code.

            1. 2

              Well, printf() is one of those bad APIs that postpone to runtime what could be determined at edit or compile time. But what’s the overhead of printf() in something like musl?

              $ size a.out
                 text	   data	    bss	    dec	    hex	filename
                14755	    332	   1628	  16715	   414b	a.out
              

              I think I can afford printf() and its dependencies being statically-linked.
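
              Something like this should land you in the same ballpark (assuming the musl-gcc wrapper is installed and hello.c is a plain printf hello world; not necessarily the exact invocation used above):

              musl-gcc -static -Os hello.c -o a.out
              size a.out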

              1. 1

                Can you afford it with UI libraries? Printf is an example of what can happen - it’s not the only case.

                1. 3

                  Many GUI programs out there bundle a private copy of Qt (or even Chromium, via Electron). Because they do it as a .so, they do it without dead code elimination.

                  And as we tend towards snaps and Flatpaks for packaging open source applications, the practice is spreading through the open source application world.

                  So, empirically, it seems like we decided we could afford it. Static linking just makes it cheaper.

      3. 1

        It’s true that linking to some symbols can have an outsized effect on dead code elimination, stdio being the (in)famous case, but on the whole this is the exception rather than the rule.

    6. 5

      Really good comment on the orange website: https://news.ycombinator.com/item?id=23656173 Basically: static linking does not support any of the modern ELF features, you can’t have LD_PRELOAD, and you can have accidental conflicts.

      And by the way, wrt. performance and LTO: everything where performance matters – i.e. where there’s a lot of jumping between components – is usually already statically linked together. For example in Firefox, all the DOM/CSS code is in libxul together, with cross-language LTO even. But Firefox would happily use a system wide shared object libjpeg, because the libxul-libjpeg boundary is crossed like a couple times per jpeg, and there isn’t anything to inline between an app and the isolated world of libjpeg.

    7. 3

      The security arguments against static linking aren’t about managing vulnerabilities. It’s about things like the lack of ASLR:

      https://www.leviathansecurity.com/blog/aslr-protection-for-statically-linked-executables

      1. 5

        You can do ASLR / PIE executables with statically linked programs. According to this article, it’s statically linked glibc that’s the issue, not statically linked programs in general. Here’s a proof of concept of statically linked PIE executables with Zig. It hasn’t landed upstream yet, but it works fine.

    8. 3

      Aha, a data science problem! A pretty minimal look at it, though. It’s an interesting question; I might dig into it soon myself.

    9. 2

      That’s all cool and all, but my biggest concern with statically linked binaries is: how does ASLR even work? What mechanism can a static binary use to make sure the libc it shoved into itself isn’t predictably located?

      1. 5

        Look into static PIE. gcc has had support for a few years now, and musl even before that (musl-cross-make patched gcc before support was upstreamed in version 8).
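
        A minimal way to try it out (a sketch; assumes a gcc and libc built with static-PIE support):

        gcc -static-pie -O2 hello.c -o hello
        file hello   # should report something like "pie executable ... static-pie linked"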

      2. 2

        Does ASLR work?

    10. 1

      For me, probably the main use of dynamic linking is just plugin systems.