This is part of why I still like to outsource this job to Debian, as much as possible. As language-specific package managers have proliferated, that seems to be falling out of fashion. And I can see why people sometimes want faster updates than Debian provides, or a wider range of packages, or a different set of tooling that more closely integrates with a particular language’s ecosystem, etc. But as a solo developer, sorting out dependencies is a pretty big job for me to do alone, so I really appreciate being able to rely on Debian figuring out whether libfoo3, which transitively depends on libbar6, should be upgraded to depending on libbar7 when it comes out or not (and if yes, whether it needs any kind of adaptation).
I do still have to manage the “top-level” dependencies, which my project directly uses - deciding when to bump a dependency to a more recent version, and testing that the new version works. But once that’s chosen, deciding what to do about all the transitive dependencies, running tests, investigating bugs and incompatibilities, etc., is kindly done by the Debian maintainers, which makes the task on my side a lot more manageable.
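As a sketch of what that division of labour looks like day to day - the apt commands are real, but libfoo3 is the hypothetical package name from above:

```shell
# Inspect what a package pulls in transitively (libfoo3 is hypothetical):
apt-cache depends libfoo3

# A routine upgrade: the Debian maintainers have already decided whether
# libfoo3 should move from libbar6 to libbar7, and adapted it if needed,
# so this resolves the whole transitive tree for you.
sudo apt-get update && sudo apt-get upgrade
```

The point is that the second command is the entire job on my side; the per-package decisions happened upstream.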
100% with you on the Debian. I am not at all happy with the one package manager per programming language situation. I am old and crotchety and I’m tired of how every programming language keeps rediscovering just how fucking hard packaging and software distribution is. It really deserves to be elevated to hardest problem in computer science, ahead of cache invalidation and naming things.
Conan trusts nothing but cold, Cimmerian steel and Debian packages.
If you’re running production Node apps, you really should be using npm shrinkwrap to lock down your entire dependency tree.
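A minimal sketch of that workflow, assuming a project with a package.json in the current directory:

```shell
# From the project root (where package.json lives):
npm install      # resolve and install dependencies per the semver ranges
npm shrinkwrap   # write npm-shrinkwrap.json, pinning the exact installed tree

# Commit the lockfile so every later `npm install` - in CI and in
# production - reproduces the same dependency tree:
git add npm-shrinkwrap.json
git commit -m "Lock dependency tree"
```

After that, updating a dependency becomes a deliberate act: change the range, reinstall, re-run shrinkwrap, and commit the diff.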
I think that only updating dependencies when you have an explicit need is risky. That could leave you years behind, and then it’s much harder to upgrade because you have a far wider changeset to deal with - a common outcome is staying stuck on the old version forever.
Regularly updating dependencies (maybe every few months) keeps you on top of changes in manageable chunks. It also means you catch regressions soon enough that there’s a chance they’ll get fixed; wait too long and others will have come to depend on the regression’s behaviour. For example this caught me out on https://github.com/ansible/ansible/pull/9620, and that setup is now difficult to ever upgrade.
There is a difference between regularly (which you propose) and automatically (which Greenkeeper etc. do, afaik).
Greenkeeper does send you a pull request that you might or might not merge. It’s more of a fancy notification service that integrates into your toolchain through CI. You can still do a “regular” workflow with it.
It’s not so much to never do it, but to do it at the correct time in your release cycle, and to do it willfully and with attention.
This is why the Haskell community enforces upper and lower bounds. Much of the difficulty people had with Cabal is that they’re used to dependency management that doesn’t enforce anything. (Cf. Maven, Ivy)
There are easier ways these days in Haskell, what with snapshots (sets of packages known to build together properly) and all, but you still need to think about it at least for a moment.
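Concretely, those bounds live in the package’s .cabal file, in build-depends. A sketch with illustrative (not real-project) dependencies, following the usual PVP convention of a lower bound at the oldest version known to work and an upper bound excluding the next major version:

```cabal
library
  build-depends:
      base  >= 4.7 && < 5
    , text  >= 1.2 && < 1.3
  default-language: Haskell2010
```

With bounds like these, the solver will refuse a plan that pairs your package with a text release it was never tested against - which is exactly the enforcement the comment above is pointing at.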