What I am doing with my OCaml projects is:
I have a ‘pins’ repository that represents the exact versions of dependencies that my “organization” uses. Kind of like a monorepo, in the sense that everything uses the same set of deps.
Every repo I develop has two Jenkins jobs. One runs whenever either that repo’s code or the pins repo is modified, and it compiles and tests the code against the pins. The other builds and tests the code against the latest releases of all dependencies.
I do it this way so that I can see when upstream has changed in a way that affects me, without holding up development: I get an inventory of what’s broken and needs updating. I have been fairly pleased with this so far. It recently caught a case where the testing framework I use had a bug introduced upstream; my pinned jobs passed while my bleeding-edge ones failed.
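The two-job split above can be sketched roughly like this (package names, versions, and the pins-file format are all invented for illustration; the real setup would use opam pins, shown here only in comments):

```shell
# A "pins" repo is conceptually just a map from package -> exact version
# that the whole organization shares:
cat > pins.txt <<'EOF'
lwt 5.7.0
yojson 2.1.2
EOF

# "Pinned" CI job: install exactly what pins.txt says.
# (With opam this would be `opam pin add <pkg> <version>` per line.)
while read -r pkg ver; do
  echo "would run: opam install $pkg.$ver"   # pinned to the org-wide version
done < pins.txt

# "Bleeding edge" CI job: ignore pins.txt and let the solver take the
# latest releases, e.g.:
#   opam update && opam install --deps-only ./myproject.opam
```

When the pinned job passes and the bleeding-edge one fails, the diff between the two tells you which upstream change broke you.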
So this drives me nuts: if your Node project depends on dependency A, but dependency A lists a version range for dependency B, npm will update B to the latest matching version every time you rerun “npm install”, even if you have an exact version listed for A. You can solve this with shrinkwrap, but in some cases you want to be able to blow away your shrinkwrap, reinstall, and get the same file back out.
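Concretely, the situation looks something like this (A and B are placeholder package names, versions invented):

```shell
# Your app pins A exactly:
cat > package.json <<'EOF'
{ "name": "my-app", "dependencies": { "A": "1.2.3" } }
EOF

# ...but A's own manifest declares a *range* for B:
cat > A-package.json <<'EOF'
{ "name": "A", "version": "1.2.3", "dependencies": { "B": "^2.0.0" } }
EOF

# Every fresh `npm install` is free to resolve B to the newest 2.x
# published at that moment: pinning A does not transitively pin B.
grep '"B"' A-package.json   # the floating range lives here, out of your control
```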
We usually worked around this by forking A and locking down its dependencies.
My understanding is that yarn and its lock file do a better job with this than npm shrinkwrap.
That is actually the most compelling reason to use yarn: it produces a sane lock file.
For a long time, I thought the way rubygems/bundler did it (producing a lockfile with resolution results) was ideal.
I’ve since come around to the idea that it’s even better to do that and also vendor your source dependencies (like, e.g., gb for Go). The primary reason is that it shrinks your network dependency footprint: if you have the code checked out, you can run the app, which makes e.g. CI more reliable.
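One way to get that “lockfile plus vendored sources” combination, sketched with Bundler as the example (the bundler commands are shown only as comments, and the vendor step is mimicked with plain file operations so the sketch itself needs no network; the gem name is a placeholder):

```shell
# 1. Resolve once and commit the result:
#    bundle lock        # writes Gemfile.lock with exact resolved versions

# 2. Vendor the sources so CI never touches the network:
#    bundle package     # copies every resolved .gem into vendor/cache

# Minimal stand-in for the vendor step:
mkdir -p vendor/cache
touch vendor/cache/rack-3.0.0.gem   # placeholder for a resolved gem
ls vendor/cache                      # everything needed to run is in-tree
```

With both the lockfile and vendor/cache committed, a fresh checkout can build and run without reaching any package registry.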