1. 32
  1.  

  2. 10

    Dynamic linking wasn’t invented by idiots; it has its advantages and drawbacks just like any other technology.

    Where static linking is best, Plan 9 uses static linking, just like every other OS. Where dynamic linking is best, Plan 9 uses IPC over the 9p protocol, a convention wildly more powerful and flexible than traditional dynamic linking, but also more expensive.

    1. 3

      Where static linking is best, Plan 9 uses static linking, just like every other OS.

      Solaris 10 doesn’t support statically linking libc.

      At least on the NetBSD side, no binary in normal use is static. Even /sbin/init, the first executable to run, is dynamic. Some dynamic executables have libraries statically linked into them, though.

      1. 3

        illumos inherited that property from Solaris, too. We don’t have symbol versioning generally, though; just library versioning.

      2. 3

        Dynamic linking wasn’t invented by idiots; it has its advantages and drawbacks just like any other technology.

        I tell you what, though: you wouldn’t know it from the Plan 9 folks’ rhetoric on the subject, passed down as it was to a lot of the Go folks.

      3. 7

        NixOS has an interesting twist on dynamic linking. It changes all the library paths into content-addressed absolute paths, so the dynamic linking is actually statically determined. This way programs can share libraries without risking any confusion, and any number of library versions can be “installed” at the same time, used by different programs.
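
        For anyone curious what that looks like at the ELF level, here is a minimal sketch; the store path is illustrative (real Nix paths contain a content hash, and Nix also patches the ELF interpreter to a store path):

        ```c
        /* hello.c - an ordinary dynamically linked program. */
        #include <stdio.h>

        int main(void) {
            puts("hello from a dynamically linked binary");
            return 0;
        }

        /* Nix-style linking bakes an absolute, content-addressed search path
         * into the binary, roughly:
         *
         *   cc hello.c -Wl,-rpath,/nix/store/<hash>-glibc-2.38/lib -o hello
         *
         * `readelf -d hello` then shows a RUNPATH pointing at that exact
         * directory, so the dynamic loader never consults /usr/lib or
         * ld.so.conf for this dependency, and any number of library versions
         * can coexist in the store without conflicting. */
        ```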

        1. 3

          The article focusses on two potential advantages of dynamic linking:

          1. The ability to save disk space when two programs share a library.
          2. The ability to fix a library without rebuilding the programs which depend on it.

          As I understand it, the way Nix does it only achieves 1. Relinking against a later version of a library isn’t possible because the program has the full path of the library baked in. As a result, even a trivial change to the library will require a rebuild of all its dependencies (because the hash in the store path will change).

          Given that the original article claims that the benefits of 1 are minimal, it would be interesting to take a set of packages and compare the sizes of all packages, including dependencies, against a similar collection of statically linked packages.

        2. 5

          It’s really not practical to do a chromium rebuild for every small update. Symbol versioning is annoying, and Void Linux started making every package that is built against glibc depend on glibc>=buildversion, because partial updates are allowed but versioned symbols break all the shared library checks.
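
          To make the versioned-symbols point concrete, here is a sketch; the exact version nodes depend on the architecture and glibc release:

          ```c
          /* hello.c - nothing special, it just calls into libc. */
          #include <stdio.h>

          int main(void) {
              puts("hello");
              return 0;
          }

          /* After `cc hello.c -o hello`, the dynamic symbol table records which
           * *versioned* glibc symbols were chosen at link time, e.g. on x86_64:
           *
           *   $ readelf --dyn-syms hello | grep GLIBC
           *   ... UND puts@GLIBC_2.2.5 (2)
           *
           * If the build machine's glibc only offers a symbol under a newer
           * version node (say something@GLIBC_2.34), the binary refuses to start
           * on any system whose libc.so.6 lacks that node, even though the soname
           * is unchanged - hence the glibc>=buildversion dependency. */
          ```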

          1. 9

            In practice, package builds already do a chromium rebuild for every small update. Developers do incremental builds regardless of the method of linking.

            Really, the reason to build Chrome with shared objects is that the linker will fall over when building it as a single binary with debug info – it’s already too big for the linker to handle easily. The last time I tried to build Chrome to debug an issue I was having, I didn’t know you had to do some magic to build it in smaller pieces, so the linker crunched on the objects for 45 minutes before falling flat on its face and aborting. I think it didn’t like the 4-gigabyte debug info sections.

            Also, keep in mind that this wiki entry is coming from a Plan 9 perspective. Plan 9 tends to have far smaller binaries than Chromium, and instead of large fat libraries, it tends to make things accessible via file servers. HTTP isn’t done via libcurl, for example, but instead via webfs.
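
            Roughly, fetching a URL through webfs looks like the sketch below; this is from memory of webfs(4), so treat the paths and ctl messages as approximate:

            ```c
            /* Fetch a page via webfs instead of libcurl, in Plan 9 C. */
            #include <u.h>
            #include <libc.h>

            void
            main(void)
            {
                char buf[8192], path[64];
                int clonefd, ctlfd, bodyfd, n, conn;

                /* Reading clone allocates a fresh connection, e.g. /mnt/web/0. */
                clonefd = open("/mnt/web/clone", ORDWR);
                if(clonefd < 0)
                    sysfatal("open clone: %r");
                n = read(clonefd, buf, sizeof buf - 1);
                if(n <= 0)
                    sysfatal("read clone: %r");
                buf[n] = 0;
                conn = atoi(buf);

                /* Tell webfs which URL we want. */
                snprint(path, sizeof path, "/mnt/web/%d/ctl", conn);
                ctlfd = open(path, OWRITE);
                fprint(ctlfd, "url http://example.com/");

                /* Opening and reading body performs the request. */
                snprint(path, sizeof path, "/mnt/web/%d/body", conn);
                bodyfd = open(path, OREAD);
                while((n = read(bodyfd, buf, sizeof buf)) > 0)
                    write(1, buf, n);
                exits(nil);
            }
            ```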

            1. 2

              That separation also means you can rebuild webfs to fix everything using it without rebuilding them, which is what shared libraries were supposed to help with.

            2. 6

              Well, I feel like that’s the only way to handle it in Void really.

              Anyway, I’d trade disk space for statically linked executables every day. Must be why I love Go so much. But I still understand why dynamic linking is used, both for historical and practical reasons. This post showcases the difference between a static and a dynamic cat, but I’m scared of what would happen with something heavy with lots of dependencies. For example, Qt built statically is about two-thirds of the size.

              1. 4

                If the interface has not changed, you technically only need a relink.
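
                For instance, with a statically linked program whose objects were kept around (zlib here is just a stand-in, and the path to libz.a varies by system):

                ```c
                /* main.c - program that statically links libz. */
                #include <stdio.h>
                #include <zlib.h>

                int main(void) {
                    printf("built against zlib %s\n", zlibVersion());
                    return 0;
                }

                /* The first build keeps the object file:
                 *   cc -c main.c -o main.o
                 *   cc main.o libz.a -o prog
                 *
                 * When a patched libz.a ships and the zlib.h interface is
                 * unchanged, only the final link command has to be rerun;
                 * main.c is not recompiled. */
                ```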

                1. 3

                  If you have all the build artifacts lying around

                  1. 3

                    Should a distribution then relink X applications, pushing hundreds of megabytes of updates, or should they start shipping object files and link them on the user system, where we would basically imitate shared libraries?

                    1. 6

                      One data point: OpenBSD ships all the .o files for the kernel, which keeps updates small.

                      (I don’t think this is actually new. IIRC, old Unix systems used to do the same, so you could relink a modified kernel without giving away the source.)

                      1. 3

                        That’s how SunOS worked, at least. The way you relink the kernel after an update also works if you have the source; it’s the same build scaffolding.

                        1. 2

                          The kernel, yes, but not every installed port or package.

                        2. 3

                          It would be viable to pursue both deterministic builds and binary diffs/delta patches for that. With deterministic builds you could make much better diffs (AFAICT), since the layout of the program will be more similar between patches.
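
                          A toy illustration of why stable layout matters for delta size (real tools such as bsdiff are far smarter than this prefix/suffix trick):

                          ```c
                          /* Treat two "builds" as byte strings and measure how much a naive
                           * delta has to ship: anything not covered by a shared prefix or a
                           * shared suffix of the old and new files. */
                          #include <stddef.h>
                          #include <stdio.h>

                          static size_t naive_delta_size(const unsigned char *oldb, size_t oldn,
                                                         const unsigned char *newb, size_t newn) {
                              size_t prefix = 0, suffix = 0;
                              while (prefix < oldn && prefix < newn && oldb[prefix] == newb[prefix])
                                  prefix++;
                              while (suffix < oldn - prefix && suffix < newn - prefix &&
                                     oldb[oldn - 1 - suffix] == newb[newn - 1 - suffix])
                                  suffix++;
                              return newn - prefix - suffix;
                          }

                          int main(void) {
                              const unsigned char v1[] = "HEADER|funcA|funcB|funcC|FOOTER";
                              const unsigned char v2[] = "HEADER|funcA|funcX|funcC|FOOTER"; /* one function patched */
                              const unsigned char v3[] = "HEADER|funcB|funcA|funcC|FOOTER"; /* same code, shuffled layout */

                              printf("patched, stable layout : %zu bytes of delta\n",
                                     naive_delta_size(v1, sizeof v1 - 1, v2, sizeof v2 - 1));
                              printf("same code, new layout  : %zu bytes of delta\n",
                                     naive_delta_size(v1, sizeof v1 - 1, v3, sizeof v3 - 1));
                              return 0;
                          }
                          ```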

                          1. 4

                            Delta updates would be a nice improvement for the “traditional” package managers. Chrome does this for its updates, but instead of just binary diffs, they even disassemble the binary and reassemble it on the client: http://dev.chromium.org/developers/design-documents/software-updates-courgette

                            1. 2

                              What do you mean by delta updates? What should they do differently than what delta RPMs have been doing until now?

                              1. 1

                                Yes, maybe this; I’m not sure how delta RPMs work specifically. Do they just diff the files in the RPMs, or are those deltas of each binary/file inside the RPM?

                                1. 1

                                  They ship new/changed files, and I think they also do binary diffs (at least based on what this page says)

                          2. 1

                            Chrome already ships using binary diffs, so this is a solved problem.

                            1. 0

                              where we would basically imitate shared libraries.

                              Except without the issue of needing to have just one version of that shared library.

                              1. 2

                                Proper shared libraries are versioned and don’t have this issue.
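
                                Concretely, that’s the soname mechanism. A sketch, with a made-up libfoo:

                                ```c
                                /* foo.c - trivial library used to show soname versioning. */
                                int foo_answer(void) { return 42; }

                                /* Two ABI-incompatible major versions get distinct sonames:
                                 *   cc -fPIC -shared -Wl,-soname,libfoo.so.1 foo.c -o libfoo.so.1.0.0
                                 *   cc -fPIC -shared -Wl,-soname,libfoo.so.2 foo.c -o libfoo.so.2.0.0
                                 *   ln -s libfoo.so.1.0.0 libfoo.so.1
                                 *   ln -s libfoo.so.2.0.0 libfoo.so.2
                                 *
                                 * A program linked against version 1 records "libfoo.so.1" in its
                                 * NEEDED entries and keeps resolving against the old major version
                                 * even after libfoo.so.2 is installed, so both can live in /usr/lib
                                 * at once. */
                                ```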

                        3. 4

                          Debian’s glibc has been fixed, updated, and released 77 times in the last 4 years (counting from version 2.21-1). I’m kinda happy that I did not have to redownload the 6 GB that live in my /usr directory 77 times.

                          ASLR is also nice to have.

                          1. 3

                            Actually… I haven’t noticed this sort of DLL Hell being much of a problem on Linux the last 5-10 years(?) or so. Does anyone know why? Steam and some other things distribute binaries that work fine on many versions of Linux, as do Discord, Skype, GOG, and various other things I use semi-regularly, and it’s been a long time since I’ve had issues with them. And while lots of these binaries are distributed with their own .so files, Windows style, they all still have to talk to the same-ish version of glibc.

                            Is it just that everyone targets some version of Ubuntu and I use Debian so it’s always just Close Enough? Is it that the Debian maintainers put a lot of work into making stuff work? Is it that the silly glibc symbol versioning actually does what it’s intended to do and makes this not a problem? Or does glibc just not change terribly fast these days and so there are few breaking changes?

                            1. 6

                              Probably because you’re using Debian (stable or testing), which isn’t that different from Ubuntu.
                              I’ve been using a rolling distro (with the latest version of everything except the kernel), and it has happened quite a few times that I’ve had to symlink 100 .so files by hand to make something run, download and unpack stuff from other distros’ repos, or just give up.

                              I get why devs are using AppImage and users are using Flatpak/Snap.

                              1. 4

                                I get it when trying to use binaries that aren’t from the native package manager or built by me on that exact system, e.g. moving compiled tools onto shared webhosts and getting old compiled programs to work.

                                1. 3

                                  Because some other hard-working people (package maintainers) / machines (build farms) rebuild everything for you.

                                  1. 1

                                    Update: this afternoon I was bitten by a program crashing because the official Debian release of python-imagemagick tried to link against some symbol that the official Debian release of the imagemagick shared library no longer exported… So I suppose Eris has punished me for my hubris.