1. 30

  2. 5

    The author’s opinion is that Redundancy is a lesser evil than Dependency.

    It’s not clear how far the author is willing to go, however. Do they, for example, maintain their own C compiler?

    My opinion is that one should be cognizant of the amount of support behind a dependency and the size of its adoption (which are often related). One HAS to depend on external libraries; otherwise we’d never get anything done before everyone else finished their code. But we can be judicious in our choice of libraries.

    So, in my opinion, Dependency is a lesser evil than Redundancy, and you must do your due diligence to minimize the probability that a dependency is going to go south. Even then, you can turn a dependency into a redundancy by forking the code and maintaining your own version IF you cannot tolerate where the library is going.

    1. 9

      Do they, for example, maintain their own C compiler?

      Back in the day, the Excel team did exactly this. They were one of the few teams at MS at that time to ship on time and they believed thoroughly in “find the dependencies and remove them”.

      Even then, you can then turn a dependency into a redundancy by forking the code and maintaining your version

      This assumes that the dependency is open source and forkable. In corporate settings, many dependencies are not. Lots of shops buy binary-only libraries, or use an SOA approach in which your team can be HIGHLY dependent on the success or failure of other teams.


      I myself fall on the line of “no dependencies that we couldn’t fork and maintain”. That implies a lot of checks. Does it have a license we can stomach? Does it have a codebase we can understand? Does it have a maintainer who minimizes these risks? etc…
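
      As a rough illustration of automating the license part of those checks, here is a minimal sketch in Python (assuming Python 3.8+ for importlib.metadata; the ALLOWED set is an invented example policy, and real license metadata is often messy or missing):

      ```python
      # Sketch: flag installed packages whose declared license is not on
      # an allow-list. ALLOWED is a hypothetical policy, not a
      # recommendation; many packages leave the License field blank.
      from importlib.metadata import distributions

      ALLOWED = {"MIT", "BSD", "Apache-2.0", "Apache Software License"}

      for dist in distributions():
          name = dist.metadata["Name"]
          license_field = dist.metadata.get("License") or "UNKNOWN"
          if license_field not in ALLOWED:
              print(f"review: {name} declares {license_field!r}")
      ```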

      1. 2

        I have been guilty of editing binaries using Emacs before shipping them; and of course you have things like fixing bugs in the E.T. game without source, but that kind of thing is relatively tricky.

      2. 5

        For myself, I’ve noticed that the older I get the more I swing in the author’s direction. I’ve known a few old programmers nearing retirement that have a long list of very impressive accomplishments. The older and more accomplished they get, the more they prefer redundancy over dependency. The oldest and most accomplished will write their own load balancers, TCP stacks, loggers, everything if need be. Are they on to something? Are they just old and stuck in their ways? Only experience can tell ;)

        1. 5

          Learning to program doesn’t make other people’s code less buggy, although it improves my chances of noticing the bugs. Learning to program doesn’t make other people’s shitty APIs easy to use, but it reduces my patience for them. Learning to program doesn’t make other people’s slow-as-fuck code run fast, unless I fix it. I don’t know how to program yet, but I feel like after thirty-some years, I’m starting to get close.

      3. 4

        In reality, I think a middle ground isn’t so bad. Often the problem with a library is its awkward interface. Assuming a library has all of the functionality needed, just writing a better interface around it often goes a long way (see the sketch below).

        But I also tend towards redundancy rather than dependency, at the application layer. Not because I want to rewrite things, but because the author is right: there is a supreme shortage of high-quality libraries out there. And the industry doesn’t really support high-quality software either. Multiple times on lobste.rs, which I would consider a fairly high-quality group of people, a retort to a suggestion has been “the latest commit is not recent”. But high-quality software often needs no changes, because it has solved a well-defined problem and modifying it could only ruin that. The end result is that the software industry unintentionally promotes making low-quality software by always wanting it to change.
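
        To make the “better interface” point concrete, here is a minimal sketch (it wraps Python’s standard urllib, a real but clunky API; fetch_json and its defaults are invented for illustration). The application depends on this small facade rather than on urllib’s details, so swapping in a different HTTP client later touches one file:

        ```python
        # Sketch: a thin facade over the standard library's urllib.
        # Callers get one obvious function with sane defaults instead of
        # the Request/urlopen dance; replacing the client later only
        # means rewriting this module, not every call site.
        import json
        import urllib.request

        def fetch_json(url: str, timeout: float = 10.0) -> dict:
            """GET a URL and decode its JSON body."""
            req = urllib.request.Request(
                url, headers={"Accept": "application/json"}
            )
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return json.load(resp)
        ```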

        1. 3

          The best solution is always somewhere in the middle ground. You might have a 20-line project that calls into 10 libraries with 100KLOC each. There isn’t much point in writing that functionality from scratch to add to your 20 lines just so you can be dependency-free. Alternatively, you might have a kitchen-sink library with just 5 dependencies. You might get to a point where you need a 6th library, but it makes more sense to implement it in your own code instead because it pulls in too many other dependencies, or the functionality is core to your business, or you need some custom behaviour, or it has an incompatible license. I don’t like these black-and-white, “everyone must do it this way no matter what” articles.
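
          One way to put a rough number on “pulls in too many other dependencies” before adopting that 6th library is to walk its declared requirements. A sketch (Python 3.9+; it only sees packages installed in the current environment, the name parsing is deliberately crude, and “requests” is just an example target):

          ```python
          # Sketch: estimate how many transitive dependencies an installed
          # package drags in. Requirement strings look like "idna (>=2.5)"
          # or "chardet; extra == 'speedups'", so we crudely cut each one
          # at the first character that cannot be part of a name.
          import re
          from importlib.metadata import PackageNotFoundError, requires

          def transitive_deps(package: str) -> set[str]:
              seen: set[str] = set()
              stack = [package]
              while stack:
                  try:
                      reqs = requires(stack.pop()) or []
                  except PackageNotFoundError:
                      continue  # declared but not installed here; skip it
                  for req in reqs:
                      name = re.split(r"[ ;\[<>=!~(]", req, maxsplit=1)[0]
                      if name and name not in seen:
                          seen.add(name)
                          stack.append(name)
              return seen

          print(len(transitive_deps("requests")), "transitive dependencies")
          ```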

          1. 3

            I agree high-quality software often doesn’t need to change much, but having literally no commits or releases in years is still a negative sign to me, because it tends to indicate lack of routine maintenance to avoid bitrot. This is especially important if the library itself has dependencies on other libraries, or interfaces in nontrivial ways with the operating system. It’s perfectly fine if the project is in purely maintenance mode though, making routine but minor releases.

          2. 4

            There are costs with either approach, but which is preferable for a given problem depends on all sorts of contingencies. The difficulty of the problem, the existence and usability of a package manager, the expressiveness of the language you’re using, the quality of the dependency, and the possibility of forking and maintaining a third-party dependency should all factor into a wise developer’s decision.

            Here, I see a lot of complaints about the drawbacks of third-party dependencies (along with a lot of loaded, meaningless language like “trash can full of toxic waste”) but not a very smart breakdown of how frequently these drawbacks occur or how to evaluate or mitigate them.

            Further, as another commenter points out, the author takes a very narrow view of what’s a dependency. People claim they have zero- or low-dependency software, but in the big picture it’s hard to characterize a 10kloc program that depends on millions of locs of kernel, compiler, language runtime, standard libraries, and hardware drivers in such a manner.

            1. 6

              Broadly, I absolutely agree that any solution to the question is very contingent.

              The author’s anecdotes in many ways actually show me how the challenge of finding the best solution to a simple problem that occurs along the way can distract from finding a workable solution to the messier tasks that usually are the official job. And, ironically, solving the problem of redundancy vs dependency itself is often in this category of “more interesting than the assigned task”.

              The thing is that I think experienced programmers deal with the distraction (that comes from problems like this) through folk-wisdom rules of thumb, rules that don’t need to be at all optimal but which, by getting past and bottling up the small and seductively interesting problems, let one accomplish one’s immediate task.

              But ironically, this same “folk effect” tends to make these questions seem uninteresting to consider more deeply, beyond the solutions offered by the obsessed. And I think that’s a shame, because this kind of problem might open some unexpected doors if examined closely.

              1. 3

                Thanks. Insightful point in that last paragraph.