This seems like an oversimplification, and an unreasonable one.
This model holds if a dependency has a fixed benefit and a cost that scales linearly with the size of your app. There probably are some dependencies that work that way, but I doubt it’s all of them, and I’m not even sure it’s most. The author mentions just using jQuery, but comparing jQuery and React, both the costs and the benefits of React scale with the size of the app.
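To make that concrete, here is a toy version of the two cases, in my own notation (nothing below comes from the article itself):

```javascript
// Toy model of the two cases, with made-up symbols:
// s = app size, B = fixed benefit, b = per-unit benefit, c = per-unit cost.
const netFixedBenefit   = (s, B, c) => B - c * s;    // turns negative once s > B / c
const netScalingBenefit = (s, b, c) => (b - c) * s;  // sign doesn't depend on s at all
```

In the first case the dependency stops paying for itself past some size; in the second, React-like case, growth alone never changes the verdict.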
In other cases, building your own solution might be great when you’re small: you build something that fits your use case perfectly and avoid the complexity of the dependency. But as you grow, you may find yourself wandering into increased complexity. If that happens, you can end up reimplementing the complexity of the dependency, but without the benefit of a community that has already solved some of those problems.
A final issue with NIH is that it can impose a real cost when dealing with turnover and onboarding new people. The tacit knowledge that current employees have is lost, and new employees will expect things to work like “standard” tools they’ve used elsewhere.
…that was three somewhat pro-dependency paragraphs, but that’s not the whole picture either. The same dynamic I’ve cited above can run in reverse. There have been times when my judgment was that a dependency didn’t pull its weight, but for the opposite reason to the one the author gives: I didn’t think we’d be using enough of its functionality to justify it. Our version might not be as robust, but it would be 10-100x less code, perfectly match our use case, and we could adjust the functionality in a single commit.
The real point isn’t to be pro- or anti-dependency, but to argue that you need to understand how the costs and benefits of any particular dependency will play out for your project.
> The tacit knowledge that current employees have is lost, and new employees will expect things to work like “standard” tools they’ve used elsewhere.

You meant this part as pro-dependency, but I think it can work equally well as an anti-dependency argument.
“Standard” carries a lot of weight here, and assumes that the bespoke tool is harder to learn than the standard one. It also assumes that every current dev, and every new one, is proficient in the standard tool. But if that “standard” tool is React or Angular, say, this often won’t be the case.
I have personally seen all of React brought in to add a little bit of dynamic functionality to one part of one page, on a team where most people had never used React, when about 50 lines of vanilla JS could have done the same job as the 50 lines of React did (and we ultimately did rewrite it to remove React).
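For illustration, the kind of feature I mean is roughly the following (a hypothetical sketch, not the actual feature from that project):

```javascript
// Hypothetical example of a "little bit of dynamic functionality":
// live-filtering a list of rows as the user types, in plain DOM APIs.
const input = document.querySelector('#filter');
const rows = Array.from(document.querySelectorAll('#results li'));

input.addEventListener('input', () => {
  const query = input.value.trim().toLowerCase();
  for (const row of rows) {
    // `hidden` toggles display without any framework or virtual DOM.
    row.hidden = query !== '' && !row.textContent.toLowerCase().includes(query);
  }
});
```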
It’s not just about the tradeoff between “re-implementation cost” and “learning cost.” It’s about accurately measuring both, and in my experience the full costs of dependencies are rarely measured correctly. That failure is magnified by the programmer’s love of new toys, and the rationalizations they’ll make to satisfy it.
> The tacit knowledge that current employees have is lost, and new employees will expect things to work like “standard” tools they’ve used elsewhere.

An organisation can try to do an industry-scale inverse Conway maneuver by pushing its internal solution as a “standard.” When it works, the organisation reduces its cost of onboarding!
After your first sentence, I was expecting a strong, non-nuanced opinion.
But your comment is nicely nuanced and balanced and drills right to the root issue. Thanks!
As an org gets big, it has more money to consider in-house solutions.
As for the inverse graph, would it apply to something like git or gcc? Once you get big enough, does the benefit of using gcc drop to zero (or go negative)?
That may be true in isolation, but you need to factor in two other things: competition and opportunity cost. A company the size of Apple or Google could easily afford to create an in-house GCC competitor, but the money and engineer time spent doing that is money and time spent not doing something else. Apple was very explicit when they started investing in LLVM that they didn’t consider a compiler to be a competitive advantage. Their IDE tooling was, but a toolchain is just table stakes. By investing in an open solution that other companies are also putting money into, they benefit in two ways:
1. They aren’t paying all of the costs (for a while Apple was paying for over 50% of LLVM development; that’s dropped off now as others contribute more), so their opportunity cost is lower.
2. They have the same baseline as their competitors in something they don’t consider a differentiating feature.
Addendum, since my previous post has a negative tone: I appreciate that the author asked how the cost/benefit analysis changes as a project scales. While I obviously disagree with the generic answer, it’s a good frame and I’ll keep it in mind whenever I try to talk about the subject.
This rings true in general.
But I can think of another aspect that dampens the effect: the farther the dependency is from your unique selling point, the worse cost/benefit you get from replacing it.
Should a tax accounting app have a custom font parser? Does displaying an icon require beating every existing image decoder? Do games need to maintain their own time zone databases?
Besides unique selling point, I’ve heard the same idea explained as core competencies.
Assume that products are made up of features, and that in order to add and support features an organization needs competencies for those features. Does a given feature speak to a “core” competency, one that gives your organization a unique advantage in the marketplace and reinforces it?
Competitive advantages are often described as moats, something difficult or time consuming for competitors to match. How do you deepen the moat in a sustainable way that is time and cost efficient over the lifetime of products and your organization?
This seems like an interesting idea with a lot of merit.
I think it also needs to consider the value of what you’re building, in particular its expected lifetime. If you want something to run for decades, depending on short-lifespan libraries is obviously problematic. Perhaps that’s another way of saying that the amount of engineering worth spending is a function of something’s expected lifetime, which is also probably true.
What’s strange to me about the web ecosystem is that people accept, and possibly desire, a thorough refresh every couple of years, so short-lifespan libraries have found a home with short-lifespan library users.
I tend to phrase this as “you should own your core competencies”. Over time, as you master a domain and your core broadens, other people’s solutions tend to fail you more and more frequently, and it makes more and more sense to write your own simplified replacements that fit your own well-understood needs much better, and at a cost that appears to be justified based on the longevity of the project.
I came to this conclusion too. But of course, this doesn’t exhaust the debate, because it assumes all dependencies have a benefit, when in reality many of them, sometimes most of them, are a net cost.
‘Dependencies’ are not fungible. Some dependencies have a well-defined scope, some dependencies (frameworks) claim to ‘do’ everything. Some dependencies have an API that works for almost any programming style. Other dependencies expect you to work a certain way and will break if you don’t. Some dependencies have stable interfaces. Other dependencies go through versions quickly.
While dependencies of the second kind might have their uses, dependencies of the first kind have the lowest maintenance costs for the people who use them.
Leftpad is actually a very good example of a low-cost dependency, which might be why it was so ubiquitous. The “fiasco” wasn’t due to leftpad itself; it was due to the fragility of the npm package manager and the build systems that depended on it.
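For a sense of scale, the whole job left-pad was doing is roughly this (a sketch from memory, not the exact published source):

```javascript
// Pad a string on the left to the requested length. This is essentially all
// the package did, which is why the dependency itself was cheap; the risk
// lived in the registry and the build pipelines around it.
function leftPad(str, len, ch = ' ') {
  str = String(str);
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

// Modern JavaScript has the same thing built in:
'5'.padStart(3, '0'); // "005"
```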
My rule is to always use something that exists if I can at all manage, but then not be afraid to fix and improve it. Improving a dependency helps me and the world much more than yet another library/service with different bugs.
Are databases considered dependencies? What about web servers, load balancers, reverse proxies, operating systems? It seems like there is a missing line in the graph: I think it should have “Benefit of a dependency” going downward over time vs “Benefit of rewriting from scratch” going upward over time.
It sounds as if there’s also a tightness-of-coupling argument. A web server is a great example here: it’s pretty trivial to swap out Nginx for Apache (or vice versa) without changing any of the rest of your stack. Reverse proxies, by design, are transparent, so this is even more true there. Databases are a lot more interesting: if you’re using some middleware or sticking to standard SQL then you can often swap them out quite easily, but if you’re using a lot of vendor-specific features then this is much more difficult.
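To illustrate the database end of that, compare a portable query with a vendor-specific one (a made-up example, not from the thread or the article):

```javascript
// Plain SQL that runs unchanged on Postgres, MySQL, SQLite, SQL Server, ...;
// swapping the database barely touches the code that sends it.
const portableQuery = `
  SELECT id, email FROM users WHERE id > 100
`;

// An upsert written with PostgreSQL's ON CONFLICT clause. Moving to MySQL
// means rewriting it (ON DUPLICATE KEY UPDATE there), and the calling code
// may have to change along with it.
const vendorSpecificQuery = `
  INSERT INTO users (id, email) VALUES ($1, $2)
  ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email
`;
```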