1. 8
  1. 3

    You just outsource the looking and deciding-to-leap to something else. Classic moving-complexity-elsewhere.

    1. 12

      You’re outsourcing package management to a package manager, that seems a fair trade to me, personally

      1. 7

        Sure, though I’m not sure why this makes it sound like moving complexity around is a bad thing and not the goal?

        There’s a lot of intrinsic complexity that we just have to manage, and a fair bit of the ~progress in all of computing entails adding to the total amount of complexity in order to move intrinsic complexity around to make it easier to manage.

        Dependency checks make the script more complex, fail when something’s missing, and then outsource the complexity of obtaining the dependency onto the monkey. I think it’s a net win (for quite a few reasons) to move ~packaging/dependency complexity out of a tricky language and outsource it to software designed to manage that kind of complexity.
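
        For what it’s worth, the dependency checks I have in mind are just a few lines up front, something like this sketch (the names passed to `require` are stand-ins for whatever the script actually needs):

        ```shell
        # Fail fast, with a clear message, if a required external command is missing.
        require() {
          for cmd in "$@"; do
            command -v "$cmd" >/dev/null 2>&1 || {
              echo "error: missing dependency: $cmd" >&2
              return 1
            }
          done
        }

        require sh grep || exit 1   # e.g., whatever external commands the script uses
        ```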

        1. 2

          This is a huge complexity trade-off, though: orders of magnitude in size.

        2. 2

          It would be interesting to have a parser (Oil?) look at all the statically resolvable names and issue a failure at load time. That still has a racing bug, but it would work for 99% of scripts.
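
          Roughly the idea, sketched with grep/awk as a crude stand-in for a real parser (the keyword/assignment filtering here is nowhere near complete; a real implementation would use an actual parser like OSH’s):

          ```shell
          # Sketch: collect the first word of each simple command in a script and
          # verify each one resolves before running anything. Purely illustrative.
          check_names() {
            grep -vE '^[[:space:]]*(#|$)' "$1" |
              awk '{ print $1 }' | sort -u |
              while read -r name; do
                case $name in
                  # skip assignments and (some) shell keywords
                  *=*|if|then|else|elif|fi|for|in|do|done|while|until|case|esac|'{'|'}') continue ;;
                esac
                command -v "$name" >/dev/null 2>&1 || {
                  echo "unresolved command: $name" >&2
                  exit 1   # exits the pipeline subshell; check_names returns nonzero
                }
              done
          }
          ```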

          1. 1

            As a shopt or something? It’s definitely possible (at least with Oil/OSH, since this is more or less what resholve is already doing as a separate tool, with Oil’s OSH parser). Not sure if Andy would be open to it.

            99% is probably bullish, though. I haven’t really tried to reason about how to run a ~fair survey of wild shell, but there are plenty of common external commands that can run other executables. A shell-aware check can handle big builtins like type/command, but there are still biggies like sudo, xargs, find, and a surprising number of others.
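
            To illustrate the kind of thing that defeats a first-word scan (using `wc` as a stand-in for an arbitrary external command):

            ```shell
            # 'wc' never appears in command position here; it's only an argument
            # to find, so a check that resolves names in command position sees
            # 'mktemp', 'touch', 'find', and 'rm' and misses 'wc' entirely.
            dir=$(mktemp -d)
            touch "$dir/a.txt"
            find "$dir" -name '*.txt' -exec wc -c {} \;
            rm -r "$dir"
            ```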

            1. 2

              I’m saying 99% could avoid a false negative. You’re saying the rate of false positive would be higher. Different criteria.

          2. 2

            I’d actually prefer the simplicity of the first solution, if it weren’t for the nix shell performance hit.

            I’m not a fan of generating scripts, even if done automatically. While reproducible, embedding the store paths also brings its own maintenance questions.

            Nix has grown on me, and I finally see the light, but it still seems a heavy dependency for shell scripts. Lately I’ve been focused on limiting my shell scripts to POSIX capabilities, and for this I’d need Nix.

            1. 1

              > I actually prefer the simplicity of the first solution, if it weren’t for the nix shell performance hit.

              I do too (when the scripts are similarly simple).

              > embedding the store paths also brings its own maintenance questions.

              How do you mean?

              > Nix … seems a heavy dependency to me for shell scripts. For shell scripts I’ve been focused lately on limiting myself to posix capability, and for this I’d need nix.

              It is a heavy dependency. I waffle on how much to suggest it to projects (I’m more comfortable suggesting it to users), but it also isn’t unheard of for projects with portability issues to distribute a container (or, for example, to insist non-Linux users run it in a VM). Nix is perhaps more onerous than Docker plus a container, or a VM stack, since many people would have to install it, but I think it’s lighter than those in resource terms.

              1. 1

                The maintenance questions I see around embedding the store paths revolve around updating those store paths. I guess if you’re always generating the script before running it, that problem is moot at the cost of the generation step. If you’re committing the generated script and/or providing it to others, how do you handle updating the software for versions/security? I guess you just re-generate and maybe use flake.lock for dependency pinning?

                I’m not sure I consider nix lighter than a container. Processing a derivation and downloading all of the dependencies takes a non-trivial amount of resources. Clearly it’s lighter than running a whole VM.

                Personally I’m of the opinion that shell scripts should always be simple. So that’s why I favor the first solution as it’s lightweight and doesn’t add a hard nix dependency. For anything more complex I’d reach for a different language.

                1. 1

                  (Don’t feel obliged to read all of this :)

                  > Personally I’m of the opinion that shell scripts should always be simple. So that’s why I favor the first solution as it’s lightweight and doesn’t add a hard nix dependency. For anything more complex I’d reach for a different language.

                  Seems like a fair razor :)

                  Aside: in two of the other packaged Nix cases (the non-inline/interpolation versions) you could still have a “normal” script lying around that wouldn’t technically require Nix (and you don’t have to get rid of the dependency checks).

                  > The maintenance questions I see around embedding the store paths revolve around updating those store paths. I guess if you’re always generating the script before running it, that problem is moot at the cost of the generation step.

                  I skipped a lot of detail because there are several different ways we might use the Nix-packaged version. I’ll sketch out how you might use it as a system/user script just in case it helps?

                  You’d generally include the script package (or a broader package including the script) in your system or user profile, and then just invoke it from PATH. This keeps working because Nix holds on to the script’s old dependencies for as long as the script’s package has active references to them. Nix would re-generate the script whenever you rebuild that profile (whether just to update the dependencies, or because you’ve made changes), though only if its dependencies have actually changed. Some of the specifics shift with flakes, but the big concepts are the same. In this case, the script package probably won’t need much specific maintenance unless it needs distinct dependency versions.

                  Here’s a real-world version of this:

                  > If you’re committing the generated script

                  You wouldn’t commit the generated script (since others wouldn’t necessarily have these paths). Not yet, at least; the idea of a ~bundled version is on my mind (but there are bigger fish to fry for now, as they say).

                  > and/or providing it to others, how do you handle updating the software for versions/security? I guess you just re-generate and maybe use flake.lock for dependency pinning?

                  I haven’t actually started using flakes yet, but yes, I think you’d run nix flake update. Here’s an example of the ~older style: https://github.com/grahamc/nix-channel-monitor/blob/10ac2c5d12f11674ac583deea3d7fd971939a2d1/default.nix#L3.

                  FWIW, this same concept still applies to the nix-shell shebang approach (though the pin looks a little different, like https://github.com/functionally/pigy-genetics/blob/01092a73455663b0a4a64ecc43bc8689846ccc50/publish.sh#L2-L3)
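
                  For reference, the pinned-shebang pattern looks something like this sketch (the `-p` package names and the nixpkgs archive URL placeholder here are hypothetical, not a real pin):

                  ```shell
                  #!/usr/bin/env nix-shell
                  #!nix-shell -i bash -p curl jq
                  #!nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz
                  ```

                  The extra `#!nix-shell` lines are read by nix-shell itself: `-i bash` names the interpreter, `-p` lists packages to put on PATH, and `-I` pins the nixpkgs those packages come from.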

                  > I’m not sure I consider nix lighter than a container. Processing a derivation and downloading all of the dependencies is a non-trivial amount of resources. Clearly it’s lighter than running a whole VM.

                  That’s fair, too. Nix is resource intensive around building and downloading.

                  I’m a little more focused on runtime resource use. Since starting to use Nix on my main dev MacBook in early 2019, I’ve weeded out all of my active dependencies on VMs and containers for development work and replaced them with native software. (I still have some old projects, untouched in that timeframe, that I’ll have to convert if I get a chance to bring them back to the front burner…).