
It’s worth noting that LLD does the equivalent of putting everything in a --start-group / --end-group block. The original behaviour came from optimising aggressively for memory use. The UNIX linker needed to keep in memory only the set of unresolved symbols and the set of exported symbols. It could then read the table of symbols exposed by each .o in turn, use them to resolve any unresolved symbols (and free any memory associated with that table) and add the definitions to the exported-symbol table. Any object file in an archive that doesn’t define a symbol something else needs is skipped entirely.
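
To make the order sensitivity concrete, here is the classic back-reference case sketched out (all of the file and archive names are invented), together with the commands a GNU-ld-based toolchain would typically need:

```c
/*
 * Purely illustrative; the file and archive names are made up.
 *
 *   cc -c main.c a.c b.c c.c
 *   ar rcs liba.a a.o c.o
 *   ar rcs libb.a b.o
 *
 * A single-pass link of
 *
 *   cc main.o liba.a libb.a -o prog
 *
 * pulls a.o out of liba.a (leaving b undefined), then b.o out of
 * libb.a (leaving c undefined), and then runs out of inputs, so GNU
 * ld reports an undefined reference to c. The traditional fix is
 *
 *   cc main.o -Wl,--start-group liba.a libb.a -Wl,--end-group -o prog
 *
 * which tells the linker to keep rescanning the group until nothing
 * new gets pulled in. LLD links the first command line as-is, because
 * it behaves as if every input were inside one big group.
 */

/* main.c */
extern int a(void);
int main(void) { return a(); }

/* a.c, archived into liba.a */
extern int b(void);
int a(void) { return b(); }

/* b.c, archived into libb.a */
extern int c(void);
int b(void) { return c(); }

/* c.c, also archived into liba.a */
int c(void) { return 0; }
```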

This is why traditional UNIX libc implementations put each public function in a separate file. If you don’t reference printf then you won’t get the definition of printf and it, in turn, won’t pull in the definition of vfprintf and so on. Newer systems let you get the same benefit by compiling with -ffunction-sections and -fdata-sections and asking the linker to discard unreferenced sections (for example with --gc-sections).
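
A rough sketch of that, assuming a GCC- or Clang-style driver and a linker that understands --gc-sections (the file names are invented):

```c
/*
 * Illustrative sketch only.
 *
 *   cc -c -ffunction-sections -fdata-sections mylib.c main.c
 *   cc -Wl,--gc-sections main.o mylib.o -o prog
 *
 * With -ffunction-sections each function lands in its own
 * .text.<name> section, so the linker's --gc-sections pass can drop
 * the section holding unused_helper even though it lives in the same
 * object file as used_helper. Without the flag, both functions share
 * a single .text section and are kept or discarded together.
 */

/* mylib.c */
int used_helper(int x) { return x + 1; }
int unused_helper(int x) { return x * 2; } /* never referenced */

/* main.c */
extern int used_helper(int);
int main(void) { return used_helper(41); }
```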

LLD, in contrast, was optimised for speed. Building a table of all defined symbols and all undefined symbols and then using one to match the other requires some memory, but not much relative to program sizes on modern systems, and it lets you parallelise a lot of the linking steps. In most cases this is more user-friendly: throw all of your object code at the linker and let it sort it out. In a few cases it can give different behaviour, though I’ve never seen one of these in real code first-hand (I’m led to believe they do exist).
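
As a toy model of that strategy (purely illustrative, and saying nothing about LLD’s actual internals): gather every definition and every reference up front, then match one table against the other, so the result no longer depends on the order in which the inputs were listed.

```c
/* Toy model of whole-table symbol resolution. The "object files" are
 * just fake symbol lists; nothing here reflects any real linker. */
#include <stdio.h>
#include <string.h>

#define MAX_SYMS 64

struct object {
    const char *name;
    const char *defines[4]; /* symbols this object provides, NULL-terminated */
    const char *needs[4];   /* symbols this object references, NULL-terminated */
};

/* Note that b.o appears before the object that satisfies it; with
 * whole-table resolution the order doesn't matter. */
static const struct object inputs[] = {
    { "main.o", { NULL }, { "a" } },
    { "b.o",    { "b" },  { "c" } },
    { "a.o",    { "a" },  { "b" } },
    { "c.o",    { "c" },  { NULL } },
};

int main(void) {
    const char *defined[MAX_SYMS], *needed[MAX_SYMS];
    int ndef = 0, nneed = 0;
    size_t nobj = sizeof inputs / sizeof inputs[0];

    /* Pass 1: collect every defined and every referenced symbol. */
    for (size_t i = 0; i < nobj; i++) {
        for (int j = 0; inputs[i].defines[j]; j++)
            defined[ndef++] = inputs[i].defines[j];
        for (int j = 0; inputs[i].needs[j]; j++)
            needed[nneed++] = inputs[i].needs[j];
    }

    /* Pass 2: match the undefined table against the defined table. */
    int ok = 1;
    for (int i = 0; i < nneed; i++) {
        int found = 0;
        for (int j = 0; j < ndef; j++)
            if (strcmp(needed[i], defined[j]) == 0)
                found = 1;
        if (!found) {
            printf("undefined reference to %s\n", needed[i]);
            ok = 0;
        }
    }
    puts(ok ? "link would succeed" : "link would fail");
    return 0;
}
```

Because each pass only reads the tables, the per-input work is also easy to split across threads, which is where a lot of the speed comes from.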