1. 1

    I have been doing this for a very long time now; all my work computers and personal machines are set up this way. The only problem I’ve encountered is that the kernel-install step doesn’t seem to work on Ubuntu, so if anyone knows how to automate this on Ubuntu, I’m curious to learn.

    1. 2

      One of the biggest problems with this stream-based composability is similar to another problem that’s also on the front page: it’s possible to conceive of edge cases in the exchange of these streams that cause catastrophic failure. I’m not sure what can be done from a GUI standpoint. I think a live visualization of changes would be a good first step, but I’m not sure things that are naturally code-related can ever be turned into a GUI. Maybe something like Node-RED sort of fits?

      Also, since no one else is saying it: you probably want to avoid linking to code that uses a blatantly antisemitic slur. Even the body of your article mentions an IRC bot named “ZyklonB.”

      1. 3

        I probably want to; on the other hand, it’s just the result of a joke about letter iteration. Run git filter-branch over something so silly? Shrug. Though I might have just come up with even stupider names for the bunch.

        1. 1

          I’m not sure what you mean, since those attacks appear HTTP/2 specific, and don’t relate to HTTP 1, which is also based on text streams. Example?

        1. 14

          What’s going on here? How did this get to the top of lobste.rs with 26 upvotes? I’m happy for the OP that they could get their system to work, but as far as I can tell, the story here is “package manager used to manage packages.” We have been doing that for decades. Is there any way the community can get a lever to push back on thin stories like this one?

          1. 25

            Would it change your opinion if the article mentioned that the nix shell being used here is entirely disposable and this process leaves no mark in your OS setup? Also that even if this required some obscure versions of common system dependencies you could drop into such a shell without worrying about version conflicts or messing up your conventional package manager?

            I agree that the article is thin on content, but I don’t think you can write this story off as “package manager used to manage packages.” I think nix shell is very magical in the package management world.

            1. 6

              I could do that with Docker too, and it would not leave a trace either.

              1. 17

                Yes, but then you’d be inside a container, so you’d have to deal with the complexities of that, like mounting drives, routing network traffic etc. With nix shell, you’re not really isolated, you’re just inside a shell session that has the necessary environment variables that provide just the packages you’ve asked for.

                Aside from the isolation, the nix shell is also much more composable. It can drop you into a shell that simultaneously has a strange Java, python and Erlang environment all compiled with your personal fork of GCC, and you’d just have to specify your GCC as an override for that to happen.
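
                As a rough illustration of that composability, a shell.nix along these lines would provide such an environment (this is my own sketch, not from the article; the override mechanism and package names are illustrative):

                ```nix
                # Hypothetical shell.nix: one shell with a JDK, Python and Erlang,
                # all built against a swapped-in compiler.
                { pkgs ? import <nixpkgs> {} }:

                let
                  # Stand-in for a personal GCC fork, e.g. via pkgs.gcc.overrideAttrs
                  # or an overlay pointing src at your repository.
                  myGcc = pkgs.gcc;
                in
                (pkgs.mkShell.override { stdenv = pkgs.overrideCC pkgs.stdenv myGcc; }) {
                  packages = [ pkgs.jdk pkgs.python3 pkgs.erlang ];
                }
                ```

                Entering the environment is then just running `nix-shell` in the same directory.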

                1. 4

                  I get that, but I would have to go through the learning curve of nix-shell, while I already know Docker, since I need it for my job anyway. I am saying that there are more ways to achieve what the article is talking about. It is fine that the author is happy with their choice of tools, but it is very unremarkable given the title and how many upvotes the article got.

                  1. 5

                    Why not learn nix and then use it at work as well :) Nix knows how to package up a nix-defined environment into a docker container and produce very small images, and you don’t even need docker itself to do that. That’s what we do at work. I’m happy because as far as I’m concerned Nix is all there is and the DevOps folks are also happy because they get their docker images.
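
                    For reference, the Nix-to-Docker packaging described here is exposed in nixpkgs as dockerTools; a minimal sketch (the image name and contents are made up) looks like:

                    ```nix
                    # Hypothetical image.nix: `nix-build` on this produces a tarball
                    # you can `docker load`, with no Docker daemon needed at build time.
                    { pkgs ? import <nixpkgs> {} }:

                    pkgs.dockerTools.buildLayeredImage {
                      name = "my-service";
                      tag = "latest";
                      contents = [ pkgs.python3 ];
                      config.Cmd = [ "${pkgs.python3}/bin/python3" ];
                    }
                    ```

                    Because only the closure of the listed packages ends up in the image, the results tend to be much smaller than distro-based images.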

                    1. 3

                      I work in a humongous company where we’re less free to choose our tools atm, so even if I learned Nix, it would be a very tough sell…

                2. 3

                  As someone who hasn’t used Docker, it would be nice to see what that looks like. I’m curious how the two approaches compare.

                  1. 6

                    I think that the key takeaway is that with Docker, you’re actually running a container with a full-blown OS inside. I have a bias against it, which is basically just my opinion, so take it with a grain of salt.

                    I think that once the answer to “I need to run some specific version of X” became “let’s just virtualize a whole computer and OS, because dependency handling is broken anyway,” we, as a field, simply gave up. It is side-stepping the problem.

                    Now, the approach with Nix is much more elegant. You have fully reproducible dependency graphs, and with nix-shell you can drop yourself in an environment that is suitable for whatever you need to run regardless of dependency conflicts. It is quite neat, and those shells are disposable. You’re not running in a container, you’re not virtualizing the OS, you’re just loading a different dependency graph in your context.

                    See, I don’t use Nix at all because I don’t have these needs, but I played with it and was impressed. I dislike our current approach of “just run a container”; it feels clunky to me. I think Docker has its place, especially in DevOps and such, but using it to solve “I need to run Python 2.x and it conflicts with my Python 3.x install” is not the way I’d like to see our ecosystem going.


                    In the end, from a very high-level, almost stratospheric, point of view: both the Docker and the nix-shell workflow come down to the developer typing some commands in a terminal and having what they need running. So from the mechanical standpoint of needing to run something, they both solve the problem. I just don’t like that the evergreen “just run a container” has become the preferred solution.

                    Just be aware that this is an opinion from someone heavily biased against containers. You should play with both of them and decide for yourself.

                    1. 3

                      This comment is a very good description of why I’ve never tried Docker (and – full disclosure – use Nix for things like this).

                      But what I’m really asking – although I didn’t make this explicit – is a comparison of the ergonomics. The original post shows the shell.nix file that does this (although as I point out in another comment, there’s a shell one-liner that gets you the same thing). Is there an equivalent Dockerfile?

                      I was surprised to see Docker brought up at all because my (uninformed) assumption is that making a Docker image would be prohibitively slow or difficult for a one-off like this. I assumed it would be clunky to start a VM just to run a single script with a couple dependencies. But the fact that that was offered as an alternative to nix-shell makes me think that I’m wrong, and that Docker might be appropriate for more ad-hoc things than I expected, which makes me curious what that looks like. It points out a gap in my understanding that I’d like to fill… with as little exertion of effort as possible. :)
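
                      For what it’s worth, a Dockerfile for this kind of one-off doesn’t have to be elaborate. Assuming the article’s Python 2.7 + psycopg2 + graphviz environment (package names guessed from the thread, not taken from the post), a sketch might be:

                      ```dockerfile
                      # Hypothetical rough equivalent of the article's shell.nix.
                      FROM python:2.7-slim
                      RUN apt-get update \
                          && apt-get install -y --no-install-recommends gcc libpq-dev graphviz \
                          && rm -rf /var/lib/apt/lists/*
                      RUN pip install psycopg2 graphviz
                      WORKDIR /work
                      ENTRYPOINT ["python"]
                      ```

                      Then something like `docker build -t py27-adhoc .` followed by `docker run --rm -it -v "$PWD:/work" py27-adhoc script.py` runs the script; whether that is more or less ergonomic than `nix-shell` is exactly the comparison in question.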

                      1. 4

                        But the fact that that was offered as an alternative to nix-shell makes me think that I’m wrong, and that Docker might be appropriate for more ad-hoc things than I expected, which makes me curious what that looks like. It points out a gap in my understanding that I’d like to fill… with as little exertion of effort as possible. :)

                        I think containers are a perfectly capable solution to this. The closest thing you could use is probably toolbox.

                        https://github.com/containers/toolbox

                        It would even allow you to provide a standardized environment that is decoupled from the deployment itself (if that makes sense). It also mounts $HOME.

                        1. 3

                          I use Nix, but also have experience with Toolbox.

                          I would recommend Toolbox over nix-shell to most people. With toolbox you can create one-off containers in literally seconds (it’s two commands). After entering the container you can just dnf install whatever you need. Your home directory gets mounted, so you do not have to juggle volumes, etc. If you need to create the same environment more often, you can write a Dockerfile and build your toolbox containers with podman. The upstream containers that Fedora provides are also just built from Dockerfiles.
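
                          The two-command flow mentioned above looks roughly like this (the container name is invented; `create` and `enter` are the documented toolbox subcommands):

                          ```
                          $ toolbox create old-ruby
                          $ toolbox enter old-ruby
                          # ...then inside the container, plain dnf:
                          $ sudo dnf install ruby
                          ```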

                          The post shows a simple use case, but if you want to do something less trivial, it often entails learning Nix the language and nixpkgs (and all its functions, idioms, etc.). And the Nix learning curve is steep (though it is much simpler if you are familiar with functional programming). This makes the toolbox approach orders of magnitude easier for most people - you basically need to know toolbox create and toolbox enter and you can use all the knowledge that you already have.

                          However, a very large shortcoming of toolbox/Dockerfiles/etc. is reproducibility. Sure, you can pass around an image and someone else will have the same environment. But Nix allows you to pin all dependencies plus the derivations (e.g. as a git SHA). You can give someone your Nix flake and they will have exactly the same dependency graph and build environment guaranteed.

                          Another difference is that once you know Nix, it is immensely powerful for defining packages. Nix is a Turing-complete functional language, so nixpkgs can provide a lot of powerful abstractions. I dread every time I have to create or modify an RPM spec file, because it is so primitive compared to writing a Nix derivation.

                          tl;dr: most people will want to use something like Toolbox. It is familiar and provides many of the same benefits as e.g. nix-shell (isolated, throw-away environments, with your home directory available). However, if you want strong reproducibility across systems and a more powerful packaging/configuration language, learning Nix is worth it.

                        2. 3

                          A cool aspect of Docker is that it has a gazillion images already built and available for it. So depending on what you need, you’ll find a ready-made image you can put to good use with a single command. If there are no images that fill your exact need, then you’ll probably find an image that is close enough and can be customised. You don’t need to create images from scratch. You can remix what is already available. In terms of ergonomics, it is friendly and easy to use (for these simple cases).

                          So, nixpkgs has a steeper learning curve than Dockerfiles, and it might be simpler to just run Docker. What I don’t like is what happens inside Docker, and how the solution to what look like simple problems involves running a whole OS.

                          I’m aware that you can have containers without a distro inside, as described elsewhere in this thread, but that is not something I often see people doing in the wild.

                        3. 1

                          Nit-pick: AFAIK one doesn’t really need Alpine or any other distro inside the container. It’s “merely” for convenience. AFAICT it’s entirely possible to e.g. run a Go application in a container without any distro. See e.g. https://www.cloudbees.com/blog/building-minimal-docker-containers-for-go-applications

                    2. 3

                      Let’s assume nix shell is actual magic — like sorcerer-level, wave-my-hand-and-airplanes-become-dragons (or vice versa) magic — well, this article just demonstrated that immense power by pulling a coin out of a deeply uncomfortable kid’s ear while pulling on her nose.

                      I can’t speak for the previous comment’s author, but those extra details, or indeed any meat on the bones, would definitely help justify this article’s otherwise nonsensical ranking.

                      1. 2

                        Yeah, I agree with your assessment. This article could just as well have the title “MacOS is so fragile, I consider this simple thing to be an issue”. The trouble with demonstrating nix shell’s power is that for all the common cases, you have a variety of ad-hoc solutions. And the truly complex cases appear contrived out of context (see my other comment, which you may or may not consider to be turning airplanes into dragons).

                    3. 19

                      nix is not the first thing most devs would think of when faced with that particular problem, so it’s interesting to see reasons to add it to your toolbox.

                      1. 9

                        Good, as it is not supposed to be the first thing. Learning a fringe system with a new syntax just to do something trivial is not supposed to be the first thing at all.

                      2. 4

                        I also find it baffling that this story has more upvotes than the excellent and original code-visualization article currently also very high. Probably some Nix upvote ring pushing this.

                        1. 12

                          Or folks just like Nix I guess? 🤷

                          1. 11

                            Nix is cool and people like it.

                            1. 5

                              I didn’t think this article was amazing, but I found it more interesting than the code visualization one, which lost me at the first, “From this picture, you can immediately see that X,” and I had to search around the picture for longer than it would have taken me to construct a find command to find the X it was talking about.

                              This article, at least, caused me to say, “Oh, that’s kind of neat, wouldn’t have thought of using that.”

                            2. 6

                              This article is useless. It is way simpler (and the python way) to just create a 2.7 virtualenv and run “pip install psycopg2 graphwiz”. No need to write a nix file, and then write a blog post to convince yourself you didn’t waste your time!

                              Considering all nix posts get upvoted regardless of content, it’s about time we have a “nix” tag added to the site.

                              1. 14

                                This article is not useless just because you don’t see its value.

                                I work mainly with Ruby and have to deal with old projects. There are multiple instances where the Ruby way (using a Ruby version manager) did not work because it was unable to install an old Ruby version or gem on my new development machine. Using a nix-shell did the job every time.

                                just create a 2.7 virtualenv and run “pip install psycopg2 graphwiz”

                                What do you do if this fails due to some obscure dependency problem?

                                1. 4

                                  What do you do if this fails due to some obscure dependency problem?

                                  Arguably you solve it by pinning dependency versions in the pip install invocation or requirements.txt, as any Python developer not already using Nix would do.
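
                                  For example, a fully pinned requirements.txt for the post’s environment might look like this (the exact version numbers here are made up for illustration):

                                  ```
                                  psycopg2==2.8.6
                                  graphviz==0.16
                                  ```

                                  installed with `pip install -r requirements.txt` inside the 2.7 virtualenv.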

                                  This article is not useless just because you don’t see its value.

                                  No, but it is fairly useless because it doesn’t do anything to establish that value, except to the choir.

                                  1. 2

                                    In my experience there comes a point where your dependencies fail due to mismatched OpenSSL or glibc versions and so on. No amount of pinning dependencies will protect you against that. The only way out is to update the dependencies and the version of your language, but that would detract from your goal of getting an old project to run, or is straight up impossible.

                                    Enter Nix: You pin the entire environment in which your program will run. In addition you don’t pollute your development machine with different versions of libraries.
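
                                    Pinning the whole environment can be as small as fixing the nixpkgs revision in shell.nix; the commit hash and sha256 below are placeholders, not real values:

                                    ```nix
                                    let
                                      pkgs = import (fetchTarball {
                                        url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
                                        sha256 = "<sha256>";
                                      }) {};
                                    in
                                    pkgs.mkShell {
                                      packages = [ pkgs.ruby ];
                                    }
                                    ```

                                    Everyone who evaluates this file gets the same dependency graph, glibc and OpenSSL included.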

                                    1. 3

                                      Arguably that’s just shifting the burden of effort based on a value judgement. If your goal is to get an old project to run while emphasizing the value of incurring zero effort in updating it, then obviously Nix is a solution for you and you’ll instead put the effort into pinning its entire runtime environment. If, however, your value to emphasize is getting the project to run then it may well be a more fruitful choice to put the effort into updating the project.

                                      The article doesn’t talk about any of the hairier details you’re speaking to, it just shows someone taking a slightly out of date Python project and not wanting to put any personal effort into updating it… but updating it by writing a (in this case relatively trivial) Python 3 version and making that publicly available to others would arguably be the “better” solution, at least in terms of the value of contributing back to the community whose work you’re using.

                                      But ultimately my argument isn’t with the idea that Nix is a good solution to a specific problem; it’s that this particular article doesn’t really make that point, and certainly doesn’t convincingly demonstrate the value of adding another complex bit of tooling to the toolkit. All the points you’ve raised would certainly help make that argument, but they’re sadly not present in this particular article.

                                  2. 1

                                    Just out of curiosity: I’m also dealing with ancient Ruby versions and use Nix at work, but I couldn’t figure out how to get old enough versions. Is there something that helps with that?

                                      1. 1

                                        Thank you, very helpful!

                                        1. 1

                                          Do note this method will get you a ruby linked to dependencies from the same checkout. In many cases this is what you want.

                                          If instead you want an older ruby but linked to newer libraries (eg, OpenSSL) there’s a few extra steps, but this is a great jumping off point to finding derivations to fork.

                                          1. 1

                                            Do note this method will get you a ruby linked to dependencies from the same checkout. In many cases this is what you want.

                                            Plus glibc, OpenSSL and other dependencies with many known vulnerabilities. This is fine for local stuff, but definitely not something you’d want to do for anything that is publicly visible.

                                            Also, note that mixing different nixpkgs versions does not work when an application uses OpenGL, Vulkan, or any GPU-related drivers/libraries. The graphics stack is global state in Nix/NixOS and mixing software with different glibc versions quickly goes awry.

                                      2. 2

                                        This comment mentions having done something similar with older versions by checking out an older version of the nixpkgs repo that had the version of the language that they needed.

                                        1. 2

                                          Like others already said, you can just pin nixpkgs. Sometimes there is more work involved. For example, here is the current shell.nix for a Ruby on Rails project that hasn’t been touched for 5 years. I’m in the process of setting up a reproducible development environment to get development going again. As you can see, I have to jump through hoops to get Nokogiri to play nicely.

                                          There is also a German blog post with shell.nix examples in case you need inspiration.

                                      3. 4

                                        This example, perhaps. I recently contributed to a Python 2 code base, and running it locally was very difficult due to C library dependencies. The best I could do at the time was a Dockerfile (which I contributed with my changes) to encapsulate the environment. However, even from the container standpoint, fetching dependencies is still just as nebulous as “just apt install xyz”: changes to the base image, to an ambiently available dependency, or simply the distro turning off package repositories for unsupported versions will break the container build. In the Nix case, the user is more or less forced to spell out completely what the code needs; combined with flakes, I have a lockfile not only for my Python dependencies but effectively for the entire shell environment.

                                        More concretely, at work, the powers that be wanted to deploy Python to an old armv7 SoC running on a device. Some of the Python code requires C dependencies like OpenSSL, the protobuf runtime and other things, and it was hard to cross-compile these for the target. Yes, for development it works as you describe: you just use a venv, pip install (pipenv, poetry, or whatever) and everything is peachy. Then comes deployment:

                                        1. First you need a cross-compiled Python interpreter, which involves building the interpreter for your host triple, then rebuilding the same source for the target triple while telling the build process where the host build is. This also ignores that some important interpreter components may not build, like ctypes.
                                        2. Learn every environment variable you need to expose to setup.py, or to the n-teenth build/packaging solution the Python project of your choice uses, and hope it generates a wheel. We will conveniently ignore that every C-dependent package may use cmake, or make, or meson, etc., etc…
                                        3. Make the wheels available to the image you actually ship.

                                        I was able to crap out a proof-of-concept in a small nix expression that made a shell that ran the python interpreter I wanted with the python dependencies needed on both the host and the target and didn’t even have to think. Nixpkgs even gives you cross compiling capabilities.

                                        1. 1

                                          Your suggested plan is two years out of date, because CPython 2.7 is officially past its end of life and Python 2 packages are generally no longer supported by upstream developers. This is the power of Nix: Old software continues to be available, as if bitrot were extremely delayed.

                                          1. 3

                                            CPython 2.7 is available in Debian stable (even testing and sid!), CentOS and RHEL. Even on macOS it is still the default Python that ships with the system. I don’t know why you think it is no longer available in any distro other than Nix.

                                      1. 4

                                        it seems paste has been forgotten.

                                        $ seq 10 | paste -d' ' - - -
                                        1 2 3
                                        4 5 6
                                        7 8 9
                                        10
                                        
                                        1. 7

                                          So I’m sticking to the official PyPI distribution wherever possible. However, compared to the Debian distribution it feels immature. In my opinion, there should be compiled wheels for all packages available that need it, built and provided by PyPI. Currently, the wheels provided are the ones uploaded by the upstream maintainers. This is not enough, as they usually build wheels only for one platform. Sometimes they don’t upload wheels in the first place, relying on the users to compile during install.

                                          For this and other reasons I prefer to use Nix as my Python toolchain. The packages are up to date, they all come with binaries, and they’re guaranteed to work no matter what distribution I’m running on top of. (Plus I don’t have to worry about the constantly changing “official” packaging workflow…)

                                          1. 1

                                            Adding onto this. I experimented with nix for this use as well. My problems are slightly different than most though. I need to ship python code (with C source dependencies) for an ARM SoC. I made a proof of concept that cross-compiled and ran on the target seamlessly.

                                            It’s a bit convoluted exporting derivations to non-nix systems, however.

                                          1. 15

                                            Hell, there are major applications that can’t even do XDG Base Directories (cough, cargo & rustup come to mind immediately, cough), and we expect a consensus on something that clearly doesn’t align with the goals of some of these projects?

                                            I respect the hustle. I tried coming up with all sorts of potential solutions to the whole dotfile mess too, but then realized it was mostly a losing war. Best of luck to this guy if he can really get DO_NOT_TRACK going, but I’d argue that by the time I’m opting out in such a fashion, shouldn’t tracking be opt-in in the first place?

                                            1. 19

                                              Multiple of these are just the standard “I don’t understand floating point” nonsense questions :-/

                                              1. 5

                                                 That doesn’t explain why a scripting language uses floating point as its default representation, let alone why that is its only numeric type.

                                                1. 4

                                                  JavaScript has decimal numbers now fwiw, though I agree. Honestly I’ve been convinced that IEEE floating point is just a bad choice as a default floating point representation too. I’d prefer arbitrary size rationals.

                                                  1. 2

                                                    Arbitrary size rationals have pretty terrible properties. A long chain of operations where the numerator and denominator are relatively prime will blow the representation up in size.
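
                                                    A quick way to see the blow-up is to iterate a simple map with exact rationals; a sketch in Python (my own example, not from the thread):

                                                    ```python
                                                    from fractions import Fraction

                                                    # Iterate x -> x^2 + 1/3 exactly: the denominator squares at
                                                    # each step, so its digit count grows roughly exponentially.
                                                    x = Fraction(1, 3)
                                                    sizes = []
                                                    for _ in range(6):
                                                        x = x * x + Fraction(1, 3)
                                                        sizes.append(len(str(x.denominator)))
                                                    print(sizes)  # digit counts of successive denominators
                                                    ```

                                                    Six iterations already take the denominator from one digit to over thirty; a fixed-width float just rounds instead.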

                                                2. 5

                                                  Indeed, same goes for the octal notation question (010 - 3 = ?)

                                                  1. 7

                                                    tbh the 010 octal format IS pretty awful. I don’t know what they were thinking putting that in C.

                                                    1. 5

                                                      well at least JS users have 0o10 - 0o5 now, if they find leading 0 octal notation to be confusing.

                                                      1. 3

                                                        Thanks for note, wasn’t aware of the ES2015 notation and MDN is helpful as always.

                                                      2. 4

                                                        I mean if you want fun 08 is valid JS, and that’s absurd :) (it falls back to decimal, nothing could go wrong with those semantics)
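
                                                        These quirks are easy to check directly in a non-strict Node script (under "use strict", the legacy 010/08 literals are SyntaxErrors):

                                                        ```javascript
                                                        // Legacy octal vs ES2015 octal vs the decimal fallback.
                                                        console.log(010 - 3);    // 010 is octal 8, so this prints 5
                                                        console.log(0o10 - 0o5); // ES2015 notation: 8 - 5 = 3
                                                        console.log(08 + 1);     // invalid octal digit, silently decimal: prints 9
                                                        ```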

                                                        1. 3

                                                          Amusing; I’ve seen people write PHP with leading 0s before. Newer PHP rejects them if there are invalid octal digits (fun!). Putting in leading zeroes is common for people used to, e.g., COBOL/RPG and SQL: business programming where they’ve never seen C.

                                                      3. 2

                                                      Really? Only ~5 of the 25 appeared to be floating-point related: 0.1 + 0.2, x/0 behavior, 0 === -0 and NaN !== NaN. Correct me if I’m wrong. Most of them seem to be about operators and what kind of valueOf/toString behavior you get when they’re involved. The only two I got wrong were because I forgot that +undefined is NaN, and I was a bit surprised that you can use postfix increment on NaN (and apparently undefined?).

                                                        1. 1

                                                          Any arithmetic operation can be performed on NaN, but it always yields another NaN.

                                                          The undefined one is a bit weird but kinda makes sense, it is indeed not a number.

                                                          I actually think what’s weirder is how javascript will sometimes give you basically an integer. x|0 for example. The behavior makes a lot of sense when you know what it is actually doing with floating point, but it is still just a little strange that it even offers these things.

                                                          But again I actually think it is OK. I’m a weirdo in that I don’t hate javascript or even php.
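
                                                        A quick check of the behaviors mentioned in this subthread (runnable as-is in Node):

                                                        ```javascript
                                                        console.log(0.1 + 0.2);                 // 0.30000000000000004
                                                        console.log(1 / 0);                     // Infinity, not an error
                                                        console.log(NaN === NaN);               // false: NaN compares unequal to itself
                                                        console.log(Number.isNaN(+undefined));  // true: +undefined coerces to NaN
                                                        console.log(0 === -0);                  // true, despite distinct bit patterns
                                                        console.log(3.7 | 0);                   // 3: bitwise ops truncate to int32
                                                        ```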

                                                        2. 1

                                                          I don’t see the contradiction there. JS numbers are IEEE 754 64-bit floating point numbers, so any weirdness/gotcha in IEEE floating point is a weirdness/gotcha in JS too.

                                                          I know that many (most) languages also use floating point numbers by default, but that doesn’t make floating point gotchas any less weird, maybe just more familiar to already-seasoned programmers :)

                                                        1. 35

                                                          I’m so glad we’ve moved past trying to do all of this with shell scripts.

                                                          1. 11

                                                            This can’t be stated strongly enough. I recently had to fix a vendor-provided init script (not from the particular software in question) that spawned a daemon which inherited file descriptors it shouldn’t have. There are simply too many ways to pile on spaghetti that can have strange effects. Other maintenance tasks required rewrites of most of these scripts just because critical software changed: going from udevd to eudev, for instance.

                                                            I cannot stand init scripts.

                                                            1. 9

                                                              I recently took a look at some software a customer wanted installed on all their Linux servers. It had an undocumented upgrade function that fetched and installed software without any checksums and without verifying HTTPS certificates, the upgrade function was not disabled even when explicitly using the somewhat documented flag to disallow upgrades. It dropped lines in /etc/rc.d/rc.local, it spawned itself during postinst scripts outside of the init system, and trying to upgrade the package resulted in the software deleting itself. The postinst script also looks for a line in /etc/rc.d/rc.local that suggests that they previously modified /etc/ssh/sshd_config on install…

                                                              Best part: that particular piece of software has some huge partners. To name a few: AWS, Capgemini, Microsoft Azure, Google Cloud, Infosys, Tata. Apparently none of the partners ever took a look at the package…

                                                          1. 4

                                                            Since it isn’t clear from the code alone: the main reason I started this project was to familiarize myself with all the warts and weird features that bash has to offer, and I occasionally come back to this code to refresh my memory on how to, say, tokenize a string safely by some delimiter. My first job out of college was basically modernizing and fixing up a collection of exactly this kind of shell script.

                                                            Hopefully people can read through this code and learn something from how I write bash scripts.
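                                                            As a taste of one of those warts, here’s a minimal sketch of splitting a string safely on a delimiter: IFS is scoped to the read builtin, -r disables backslash mangling, and quoting the expansion avoids globbing.

                                                            ```shell
                                                            #!/usr/bin/env bash
                                                            # Split a ':'-delimited string into an array without word-splitting
                                                            # or glob surprises: IFS only applies to this one `read` call.
                                                            input='foo:bar *:baz'
                                                            IFS=':' read -ra fields <<< "$input"
                                                            for f in "${fields[@]}"; do
                                                                printf '[%s]\n' "$f"
                                                            done
                                                            ```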

                                                            1. 2

                                                              There’s a lot of (understandable I guess, but also always a little surprising to me) animus toward Shell, which leads to a lot of (well-intended!) dogma about what can and can’t be a shell script, or what crutches must be used.

                                                              But I feel like it also leads to a kind of self-perpetuating problem. I don’t know exactly how to put it, but like: shell is complicated enough that you need to get off the beaten path and do too much with it to internalize those warts and weird features, but you have to be able to respect it or at least not loathe it to be able to make that investment?

                                                              1. 1

                                                                but you have to be able to respect it or at least not loathe it to be able to make that investment?

                                                                Hmm. I think you merely have to be unaware of the depth of suffering that awaits should you not deviate from the path early enough. Or get caught dealing with the aftermath when that’s happened to others. And then dwell there, down on the fringes of the Turing tar pit, for far too long.

                                                                I don’t write this out of any particular animus - nor have I delved deep enough, often enough, to have accumulated any real mastery. ~25 years after my first shell script I still have to look up conditional syntax and think too hard about quoting and routinely make the class of basic mistakes that shellcheck is a great help in avoiding. But I’ve suffered. I assure you I have suffered.

                                                                I see what you mean, though. Like a lot of dogma, the now-conventional wisdom can be quite misleading, but I think you probably have to have internalized the reasons it became dogma in order to understand why.

                                                            1. 3

                                                              You can skip the coproc if you use a global variable as your mechanism of returning values.

                                                              I usually use $REPLY since bash read uses that variable to store the result if no variable name was given.

                                                              So:

                                                              if ! emoji=$(short-code-emoji "$code_accum" "$cldr_file"); then 
                                                                  printf 'ERROR: Unable to get emoji :%s:\n' "$code_accum" >&2
                                                                  return 1
                                                              fi
                                                              printf '%s' "$emoji"
                                                              parsing_code='false'
                                                              continue
                                                              

                                                              Would become:

                                                              if ! short-code-emoji "$code_accum" "$cldr_file"; then 
                                                                  printf 'ERROR: Unable to get emoji :%s:\n' "$code_accum" >&2
                                                                  return 1
                                                              fi
                                                              printf '%s' "$REPLY"
                                                              parsing_code='false'
                                                              continue
                                                              

                                                              Then you could just modify your global hash map of memoized results.
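                                                              A hedged sketch of that pattern, with the expensive jq call replaced by a stand-in (the function name and map here are illustrative, not the post’s actual code):

                                                              ```shell
                                                              #!/usr/bin/env bash
                                                              # Return values through $REPLY, memoized in a global associative array.
                                                              declare -A memo=()

                                                              lookup() {
                                                                  local code=$1
                                                                  if [[ -v "memo[$code]" ]]; then
                                                                      REPLY=${memo[$code]}   # cache hit: no subshell, no fork
                                                                      return 0
                                                                  fi
                                                                  REPLY="result-for-$code"   # stand-in for the expensive jq lookup
                                                                  memo[$code]=$REPLY
                                                              }

                                                              lookup smile
                                                              printf '%s\n' "$REPLY"
                                                              ```

                                                              Because the caller reads $REPLY instead of a $(...) substitution, no subshell is involved, so writes to memo actually persist across calls.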

                                                              1. 2

                                                                I like your use of the REPLY variable to store the return value. This seems like a good approach if you are okay with storing the state in a global.

                                                                1. 3

                                                                  Well you are already storing the state of your coproc’s file descriptors in a global array including its pid. On top of that, you can have only one coproc at a time.

                                                                  I mean, if maximum performance and the need to bring back data from child processes is key, the only hope left is to just break out eval as shown in @abathur’s example, though I probably would have the caller run it; not that what was posted was unsound or anything. Bash and globals are life, much like awk and other ancient unixy things.

                                                                  In pure honesty, if maximum performance is the goal here, restructuring this whole snippet to basically be a jq/awk script would be the best way, or even batching the emojis you need to fetch so jq is invoked once.

                                                                  1. 2

                                                                    That is a good point about the coproc file descriptors being global, I wish that wasn’t necessary. I agree that for maximum performance there are better solutions than using a coproc, but I think they present a nice balance of allowing for memoization, encapsulation of logic, and good ergonomics with typical Bash programming which uses command substitutions. Thanks for reminding me about the single coproc limitation, I am going to add a note about that to the blog post. There was also some recent movement around possibly having that limitation removed: https://mail.gnu.org/archive/html/help-bash/2021-03/msg00207.html

                                                              1. 3

                                                                Spot instances are unstable by design, and can go down any time. However, in practice I have seen very few terminations. My longest uptime has been above 300 days (in region eu-west-1)

                                                                In my experience, at least on us-east-1, I’ve seen multiple spot instances get terminated every day, with “no capacity” errors when trying to submit new spot requests, so YMMV.

                                                                1. 1

                                                                  True, I’ve seen more terminations in other regions/for other instance types. However, even with a daily termination, this would mean ~5 minutes of downtime (as a new instance boots up) which for many personal projects should be sufficient. The key is to specify as many instance types as possible.

                                                                  1. 1

                                                                    Can confirm this as well (at least for us-east-1). We used to use spot instances for builds, and even though the builds were roughly 10-15 minutes, there were cases of intermittent spot instance termination.

                                                                    Easier to just eat the cost of on-demand in my experience. Maybe it is region specific.

                                                                  1. 2

                                                                    One of my first jobs involved maintaining (and ultimately replacing) a large amount of shell scripting, and I can say most everything here rings true:

                                                                    1. You generally do “outsource” a lot of platform specific stuff to some other utility.
                                                                    2. Modularizing shell scripting is difficult outside of inventing your own concatenation build tool so scripts usually do grow in size.
                                                                    3. Testing? I really don’t most of the time, and neither did anyone I worked with, outside of copy/pasting snippets into a terminal to see what happens.
                                                                    4. Portability is a lie; I think the dkms example demonstrates that clearly with the tens of different ways to even invoke kernel-install.
                                                                    5. Everyone, and I mean everyone, has their own particular shell scripting style; even in the dkms example linked in the article, I see usages of sed, grep, etc that I probably would have done in an inline awk instead.

                                                                    What really disturbs me about 4 is that I don’t have a good answer when people ask how to write shell scripts. Maybe that’s why everyone comes up with their own quirky way. Usually I just say: read man bash, or make a project with it. If it weren’t so embarrassing, maybe I’d point them to my ridiculous IRC bot written in bash.
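                                                                    As a small illustration of point 5, here’s the same extraction written in the two styles mentioned, a grep/sed pipeline versus a single inline awk program (the data is made up):

                                                                    ```shell
                                                                    #!/usr/bin/env bash
                                                                    # Style 1: filter with grep, then strip the value with sed.
                                                                    printf 'alpha=1\nbeta=2\naxiom=3\n' | grep '^a' | sed 's/=.*//'

                                                                    # Style 2: one awk process does the filtering and field splitting.
                                                                    printf 'alpha=1\nbeta=2\naxiom=3\n' | awk -F= '/^a/ { print $1 }'
                                                                    ```

                                                                    Both print the same key names; which one you reach for is largely a matter of the style you were raised on.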

                                                                    1. 3

                                                                      Interesting that the article mentions how Readable streams are async iterable but doesn’t show the use of pipeline(). In the example given, the await once(writable, 'drain') could probably be replaced by just yielding in that generator and having a pipeline such that:

                                                                      const p = util.promisify(pipeline);
                                                                      await p([readable, generator_code_here, writable]);
                                                                      

                                                                      EDIT: unrelated, but also interesting where iterators are concerned: hopefully soon we can have https://github.com/tc39/proposal-iterator-helpers as well.

                                                                      1. 3

                                                                        Thanks, this is a great point. I actually prefer the pipeline function most of the time when dealing with “piping” streams.

                                                                        I was not aware that you could use a generator in a pipeline though. I only knew you could create Readable streams from generators using stream.Readable.from (https://nodejs.org/api/stream.html#stream_stream_readable_from_iterable_options). I must try this one out. It might be a great addition to the post!
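                                                                        For anyone else trying it, a minimal runnable sketch of a generator stage inside pipeline() (names and data are illustrative; stream/promises needs Node 15+, older versions can util.promisify(stream.pipeline) instead):

                                                                        ```javascript
                                                                        const { Readable } = require('stream');
                                                                        const { pipeline } = require('stream/promises');

                                                                        async function main() {
                                                                            const out = [];
                                                                            await pipeline(
                                                                                Readable.from(['foo\n', 'bar\n']),  // any (async) iterable works as a source
                                                                                async function* (source) {          // transform stage: a plain async generator
                                                                                    for await (const chunk of source) {
                                                                                        yield chunk.toString().toUpperCase();
                                                                                    }
                                                                                },
                                                                                async function (source) {           // destination: consume the transform's output
                                                                                    for await (const chunk of source) {
                                                                                        out.push(chunk.trim());
                                                                                    }
                                                                                }
                                                                            );
                                                                            console.log(out); // [ 'FOO', 'BAR' ]
                                                                        }
                                                                        main().catch(console.error);
                                                                        ```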

                                                                      1. 3

                                                                        I implemented a build-log-parsing web app that would show which parts of the build failed, using AWK CGI. I’ll be really honest: I don’t recommend it. AWK is great if it is all you have, but compared to something like Python, Perl, or JavaScript, it isn’t very pleasant for building big parsers that correctly emit JSON or HTML. I can recommend CGI as a whole though; it’s great for quickly adding a hacky API to any service, especially if it is internal or low volume. A lot of the stuff at my work is implemented like this since the engineering org is maybe ~25 people.

                                                                        Also, why the chroot? Hopefully that behavior can be turned off. Trying to copy over the interpreter and hoping ldd catches all the linked components seems troublesome for no real benefit. What would the scripts even usefully do in such a limited context?
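                                                                        For context, the CGI contract itself is tiny: the server passes request data in environment variables and reads the response from the program’s stdout. A minimal sketch:

                                                                        ```shell
                                                                        #!/bin/sh
                                                                        # Minimal CGI program: print headers, a blank line, then the body.
                                                                        # The server sets REQUEST_METHOD, QUERY_STRING, etc. before exec'ing us.
                                                                        printf 'Content-Type: text/plain\r\n'
                                                                        printf '\r\n'
                                                                        printf 'method=%s query=%s\n' "${REQUEST_METHOD:-GET}" "${QUERY_STRING:-}"
                                                                        ```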

                                                                        1. 5

                                                                          httpd with chroot is the default on OpenBSD. The philosophy is: why give an attacker who manages to compromise your web app access to your entire system?

                                                                        1. 1

                                                                          So I thought more about this question, and I was curious what you are having trouble with in terms of scripting. Executing external programs, the way you would from bash? I honestly think Node has made that much easier recently with the introduction of async generator based streams, assuming you want to just execute stuff and process its stdout like a pipeline.

                                                                          I hacked some example code up to sort of show what I mean. It’s verbose, but I imagine you could turn this into a library once it is all set up:

                                                                          #!/usr/bin/env node
                                                                          const { spawn } = require('child_process');
                                                                          const { promisify } = require('util');
                                                                          const pipeline = promisify(require('stream').pipeline);
                                                                          
                                                                          const Truncated = Symbol('truncated');
                                                                          
                                                                          function* lineize(input) {
                                                                              let start = 0;
                                                                              let nextLine = input.indexOf('\n');
                                                                              while (nextLine !== -1) {
                                                                                  yield input.slice(start, nextLine);
                                                                                  start = nextLine + 1;
                                                                                  nextLine = input.indexOf('\n', start);
                                                                              }
                                                                              if (start < input.length) {
                                                                                  yield input.slice(start);
                                                                                  yield Truncated;
                                                                              }
                                                                          }
                                                                          
                                                                          async function main() {
                                                                              const ls = spawn('ls', ['-1'], { stdio: [ 'ignore', 'pipe', 'ignore' ]});
                                                                              const p = pipeline([
                                                                                  ls.stdout,
                                                                                  async function* (readable) {
                                                                                      let lbuf = '';
                                                                                      let lineno = 0;
                                                                                      for await (const data of readable) {
                                                                                          const pieces = [...lineize(lbuf + data.toString('utf-8'))];
                                                                                          lbuf = '';
                                                                                          if (pieces[pieces.length - 1] === Truncated) {
                                                                                              pieces.pop();        // drop the marker...
                                                                                              lbuf = pieces.pop(); // ...and carry the partial line into the next chunk
                                                                                          }
                                                                                          for (const line of pieces) console.log(`${++lineno}: ${line}`);
                                                                                      }
                                                                                      if (lbuf) console.log(`${++lineno}: ${lbuf}`); // flush a final unterminated line
                                                                                  },
                                                                              ]);
                                                                              await p;
                                                                          }
                                                                          main().then(() => console.log('done'))
                                                                                    .catch(e => console.error(e));
                                                                          
                                                                          1. 13

                                                                            Yes, all the time! #!/usr/bin/env node fits nicely at the top of a js file, and process args or stdin/stdout streams work great.

                                                                            Lots of the time I use it for data wrangling and json ETL jobs that I’m playing around with. Sometimes they graduate into being actual apps instead of throw away scripts.

                                                                            My latest was I wrote a page listener to book myself a vaccine appointment:

                                                                            #!/usr/bin/env node
                                                                            var page = "https://www.monroecounty.gov/health-covid19-vaccine";
                                                                            var selector = "#block-system-main";
                                                                            var seconds = 15;
                                                                            var warning = "It's ready"
                                                                            
                                                                            var child_process = require("child_process");
                                                                            var cheerio = require("cheerio");
                                                                            var fetchUrl = require("fetch").fetchUrl;
                                                                            
                                                                            var last = null;
                                                                            
                                                                            var check = function() {
                                                                                fetchUrl(page, function(error, meta, body){
                                                                                    if (error || !body) {
                                                                                        // Fetch failed; just try again next tick.
                                                                                        console.log("Fetch error", new Date());
                                                                                        setTimeout(check, seconds*1000);
                                                                                        return;
                                                                                    }
                                                                                    var $ = cheerio.load(body.toString());
                                                                                    var curr = $(selector).html();
                                                                                    if (last!=curr) {
                                                                                        //Page has changed!  Do something!
                                                                                        var command = `say "${warning}"`;
                                                                                        console.log("Changed!",new Date());
                                                                                        child_process.exec(command);
                                                                            
                                                                                    } else {
                                                                                        console.log("No change",new Date());
                                                                                    }
                                                                                    last=curr;
                                                                                    setTimeout(check,seconds*1000);
                                                                                });
                                                                            }
                                                                            
                                                                            check();
                                                                            
                                                                            1. 5

                                                                              At some point in time I was favoring Python over bash for scripts. But the gain in expressivity was counter-balanced by the strength of bash for piping things together, its shortness/simplicity for basic file operations, and the whole vocabulary of shell executables. In the end, I gained a lot by embracing bash.

                                                                              And when things go more complex I slowly change language: Bash -> Perl -> Python -> Compiled Language.

                                                                              How would you sell NodeJS for Linux scripting to others like me? What are the major pros? Is your main point (which would be totally understandable and acceptable) that you are fluent in JS and want to use it over harder-to-read bash?

                                                                              If you’re convincing, I’d gladly give NodeJS Linux scripting a try. Maybe there are fields where it shines, like retrieving JSON over the internet and parsing it far more easily than other languages, for example.

                                                                              1. 7

                                                                                Well it’s mostly preference and I’m a HUGE advocate of “use what you like and ignore the zealots”. It’s so hard to espouse pros and cons in a general sense for everyone, because everyone is doing different stuff in different styles.
                                                                                That being said, here’s why I personally like to use it as a shell scripting language:

                                                                                • I write python too, and use bash, and can sling perl if I have to, but js feels like the best balance between being concise and powerful (for my needs)
                                                                                • node is everywhere now (or really easy to get and I know how to get it in seconds on any OS).
                                                                                • Say what you will about node_modules, but node coupled with npm or yarn IMO set the standard in ease of using and writing packages. I also write python and writing a package for pip install is way way way more annoying compared to npm.
                                                                                • package.json is a single file that can do everything, and I loathe config split across files in other langs for tiny tiny scripts that do one thing only.
                                                                                • JSON is literally javascript, so messing with JSON in javascript is natural. Here’s how to pretty print a json file: process.stdout.write(JSON.stringify(JSON.parse(require('fs').readFileSync('myfile.json')),null,2))
                                                                                • Everyone complains about node_modules size but this ain’t webpack - it’s just a module or two. I’m a big fan of cheerio and lines. node-fetch is also quite small and very powerful

                                                                                Probably more reasons but that’s good enough :)

                                                                              2. 2

                                                                                Hello const and modules.

                                                                                1. 2

                                                                                  nah ES5 for life!

                                                                                2. 1

                                                                                  #!/usr/bin/env node fits nicely at the top of a js file

                                                                                  At which point it’s no longer a valid Javascript file? But I suppose nodejs allows it?

                                                                                  1. 4

                                                                                    https://github.com/tc39/proposal-hashbang

                                                                                    Stage 3 proposal. every engine I know of supports it.

                                                                                    EDIT: hell I just checked, even GNOME’s gjs supports it.

                                                                                    1. 2

                                                                                      Yes, node allows it, and I like this because it makes it clear that it’s meant to be a command line script and not a module to include, and allows for the familiar

                                                                                      $>./script.js

                                                                                      versus needing to invoke node explicitly:

                                                                                      $>node script.js

                                                                                  1. 2

                                                                                    For throwaway code, I almost always use python or perl, where throwaway = “quick hack to solve some problem.” For anything mildly complex I almost always use nodejs instead; I have a few reasons for this:

                                                                                    1. I like javascript’s more functional focus (vs other popular alternative scripting languages).

                                                                                    2. node_modules is a feature, no matter how hard people imply the opposite.

                                                                                      a. ^– Because of the above, I can go so far as to use the rich ecosystem of JavaScript parsers, written in JavaScript, to bundle my entire tool into one JavaScript source file.

                                                                                      b. Also, people can just git clone the software, much to their chagrin, run npm install, and it magically just works. Could you do it with Python? Probably, after you worry about what shell environment you’re executing in or which poetry/pipenv/venv/whatever-tool-of-the-week you use to manage said environment.

                                                                                    3. I can share the exact same code I use in a front ended web application into the backend. I do this often and thus avoid having a lot of DRY violations around constants.

                                                                                    4. Most of the slightly-more-than-quick-fixes are usually webapps in my experience as well.

                                                                                    1. 3

                                                                                      At what point do you end up just making specific views that effectively are your limited API? I can imagine someone with full read access could make some obnoxious DoS queries. The whole point of an API, at least in my opinion, is that you know relatively well what kinds of things are expected to come in and go out. Obviously, as with anything Turing-complete, all sorts of bad things can happen, but I can’t imagine a function that takes a user ID and returns specific data is less predictable than exposing some user table with a full-blown query language (even read-only).

                                                                                      1. 1

                                                                                        I don’t think the article is suggesting your user-facing API should just be ‘give us a SQL query and we’ll execute it’, but more that building an additional read layer on top of your database that your user-facing API has to go through is unnecessary.

                                                                                      1. 7

                                                                                        Except where I tell systemd-resolved to use my office’s DNS server and it doesn’t, or I tell it to not use the office’s DNS server and it does, and steadfastly refuses to change. Then I go through the whole previous flowchart trying to figure out how to actually tell it to change its behavior, because doing what it tells me to alter what servers it queries silently fails.

                                                                                        So really it’s just https://xkcd.com/927/

                                                                                        1. 4

                                                                                          That’s odd. What version of systemd-resolved do you have?

                                                                                          1. 3

                                                                                            The one included in Ubuntu 18.04: systemd 237 +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid

                                                                                            1. 2

                                                                                              In my experience the one in Ubuntu 20.04 does work for this: systemd 245 (245.4-4ubuntu3.6) +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid

                                                                                              It does have an out-of-tree patch applied that produces some logspam on NXDOMAIN though.

                                                                                              1. 1

                                                                                                Someday my office will upgrade to 20.04, I look forward to giving this a good hard try. Thanks.

                                                                                              2. 2

                                                                                                Apparently the version in 20.04 is a lot better. Let’s hope you can upgrade soon.

                                                                                            2. 2

                                                                                              The dot diagrams seem to imply they thought about and incorporated the other use cases and “standards”, so that xkcd is hardly relevant.

                                                                                            1. 1

                                                                                              Really like this breakdown. Usually when I encounter this need, I just use sqlite3; FTS5 generally just works. You could probably build a full-text search engine in less Python by punting everything to that.