1. 41
    1. 9

      Want to find the magical ffmpeg command that you used to transcode a video file two months ago?

      Just dig through your command history with Ctrl-R. Same key, more useful.

      (To be fair, you can do this in bash with history | grep ffmpeg, but it’s far fewer keystrokes in Elvish :)

      Sorry, what? Bash has this by default as well (at least in Ubuntu, and every other Linux distribution I’ve used). ^R gives incremental search over your history, matching the most recent command first.

      1. 10

        I hoped I had made it clear by saying “same key”. The use case is that you might have typed several ffmpeg commands, and with bash’s one-item-at-a-time ^R it is really hard to spot the interesting one. Maybe I should make this point clearer.

        1. 6

          That’s handy, but it is easy to add this to bash and zsh with fzf:

          https://github.com/junegunn/fzf#key-bindings-for-command-line
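
          If you install fzf from its repo, the integration boils down to a single line in your bash config. A minimal sketch, assuming fzf’s own install script put the file at the usual ~/.fzf.bash location (the exact path varies by distro/package):

          # in ~/.bashrc: wires up Ctrl-R (history), Ctrl-T (files) and Alt-C (cd)
          [ -f ~/.fzf.bash ] && . ~/.fzf.bash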

          With home-manager and nix, enabling this functionality is just a one-liner:

          https://github.com/danieldk/nix-home/blob/f6da4d02686224b3008489a743fbd558db689aef/cfg/fzf.nix#L6

          I like this approach, because it follows the Unix approach of using small orthogonal utilities. If something better than fzf comes out, I can replace it without replacing my shell.

          Structured data in pipelines seems very nice though!

          1. 1

            What exactly does programs.fzf.enableBashIntegration do? I just enabled it, and it seems to have made no difference.

            1. 2

              https://github.com/rycee/home-manager/blob/05c93ff3ae13f1a2d90a279a890534cda7dc8ad6/modules/programs/fzf.nix#L124

              So, it should add fzf keybindings and completions. Do you also have programs.bash.enable set to true so that home-manager gets to manage your bash configuration?

              1. 1

                programs.bash.enable

                Ah, enabling that did the trick (no need to set initExtra). Thanks!

                I did however have to get rid of my existing bashrc/profile. Looks like I need to port that over to home-manager …

                1. 2

                  Yeah, been there, done that. In the end it’s much nicer. Now when I install a new machine, I have everything set up with a single ‘home-manager switch’ :).

      2. 4

        I’ve always found bash’s Ctrl-R hard to use properly; in comparison, Elvish’s history (and location) matching is like a mini-fzf, and it’s very pleasant to use.

      3. 1

        I think the idea here is that it shows you more than one line of the list at once, while C-r is sometimes a bit fiddly to get to exactly the right command if there are multiple matches.

      4. 1

        For zsh try «bindkey '^R' history-incremental-pattern-search-backward» in .zshrc. Now you can type e.g. «^Rpy*http» to find «python -m http.server 1234» in your history. Still shows only one match, but it’s easier to find the right one.

      5. 1

        I use https://github.com/dvorka/hstr for history search on steroids and I am very happy with it.

    2. 8

      So much negativity. I’m trying it now, and I find it quite nice. Will give it some time and see if it manages to lure me away from fish. Great job!

    3. 4

      A Túrin Turambar turún’ ambartanen. Another shell that isn’t shell; shells that aren’t shells aren’t worth using, because shell’s value is its ubiquity. Still, interesting ideas.

      This brought to you with no small apology to Tolkien.

      1. 13

        I’ve used the Fish shell daily for 3-4 years and find it very much worth using, even though it isn’t POSIX compatible. I think there’s great value in alternative shells, even if you’re limited in copy/pasting shell snippets.

        1. 12

          So it really depends on the nature of your work. If you’re an individual contributor, NEVER have to do devops type work or actually operate a production service, you can absolutely roll this way and enjoy your highly customized awesomely powerful alternative shell experience.

          However, if you’re like me, and work in environments where being able to execute standardized runbooks is absolutely critical to getting the job done, running anything but bash is buying yourself a fairly steady diet of thankless, grinding, and ultimately pointless pain.

          I’ve thought about running an alternative shell at home on my systems that are totally unconnected with work, but the cognitive dissonance of using anything other than bash keeps me from going that way even though I’d love to be using Xonsh by the amazing Anthony Scopatz :)

          1. 5

            I’d definitely say so – I’d probably use something else if I were an IC – and ICs should! ICs should be in the habit of trying lots of things, even stuff they don’t necessarily like.

            I’m a big proponent of Design for Manufacturing, an idea I borrow from the widgety world of making actual things. The idea, as defined by an MFE I know, is that one should build things such that: “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.”

            For a delivery-ops guy like me, working in the tightly regulated, safety-critical world of Healthcare, having reproducible, reliable architecture that’s cheap to replace and repair is critical. Adding a new shell doesn’t move the needle towards reproducibility, so its value has to come from reliability or cheapness, and once you add the fact that most architectures are not totally homogeneous, the cost goes up even more.

            That’s the hill new shells have to climb: they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.

            1. 2

              “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.”

              “That’s the hill new shells have to climb,”

              Or, like with the similar problem posed by C compilers, they just provide a method to extract to whatever the legacy shell is for widespread, standard usage.

              EDIT: Just read comment by @ac which suggested same thing. He beat me to it. :)

              1. 2

                I’ve pondered transpilers a bit before. For me personally, I’ve learned enough shell that one doesn’t really provide much benefit, but I like that idea a lot more than a distinct, non-compatible shell.

                I very much prefer a two-way transpiler. Let me make my old code into new code, so I can run the new code everywhere and convert my existing stuff to the new thing, and let me go back to old code for the machines where I can’t afford to figure out how to get new thing working. That’s a really big ask though.

                The way we solve this at $work is basically by writing lots of very small amounts of shell, orchestrated by another tool (ansible and Ansible Tower, in our case). This covers about 90% of the infrastructure, with the remaining bits being so old and crufty (and so resource-poor from an organization perspective) that bugs are often tolerated rather than fixed.

          2. 4

            The counter to alternative shells sounds more like a reason to develop and use alternative shells that coexist with a standard shell. Maybe even with some state synchronized so your playbooks don’t cause effects the preferred shell can’t see and vice versa. I think a shell like newlisp supporting a powerful language with metaprogramming sounds way better than bash. Likewise, one that supports automated checking that it’s working correctly in isolation and/or how it uses the environment. Also maybe something on isolation for security, high availability, or extraction to C for optimization.

            There’s lots of possibilities. Needing to use stuff in a standard shell shouldn’t stop them. So, they should replace the standard shell somehow in a way that still lets it be used. I’m a GUI guy who’s been away from shell scripting for a long time. So, I can’t say if people can do this easily, already are, or whatever. I’m sure experts here can weigh in on that.

        2. 7

          I work primarily in devops/application architecture – having alternative shells is just a big ol’ no – tbh I’m trying to wean myself off bash 4 and onto pure sh because I have to deal with some pretty old machines for some of our legacy products. Alternative shells are cool, but don’t scale well. They also present increased attack surface for potential hackers to privesc through.

          I’m also an odd case: I think shell is a pretty okay language – wart-y, sure, but not as bad as people make it out to be. It’s nice having a tool that I can rely on being everywhere.

          1. 14

            I work primarily in devops/application architecture

            Alternative shells are cool, but don’t scale well.

            Non-ubiquitous shells are a little harder to scale, but the cost should be controllable. It depends on what kind of devops you are doing:

            • If you are dealing with a limited number of machines (machines that you probably name yourself), you can simply install Elvish on each of those machines. The website offers static binaries ready to download, and Elvish is packaged in a lot of Linux distributions. It is going to be a very small part of the process of provisioning a new machine.

            • If you are managing some kind of cluster, then you should already be doing most devops work via some kind of cluster management system (e.g. Kubernetes), instead of ssh’ing directly into the cluster nodes. Most of your job involves calling into some API of the cluster manager, from your local workstation. In this case, the number of Elvish instances you need to install is one: that on your workstation.

            • If you are running some script in a cluster, then again, your cluster management system should already have a way of pulling in external dependencies - for instance, a Python installation to run Python apps. Elvish has static binaries, which is the easiest kind of external dependency to deal with.

            Of course, these are ideal scenarios - maybe you are managing a cluster but it is painful to teach whatever cluster management system to pull in just a single static binary, or you are managing some old machines with an obscure CPU architecture that Elvish doesn’t even cross-compile to. However, those difficulties are by no means absolute, and when the benefit of using Elvish (or any other alternative shell) far outweighs the overheads, large-scale adoption is possible.

            Remember that bash – like every shell other than the original Bourne shell – also started out as an “alternative shell”, and it still hasn’t reached 100% adoption, but that doesn’t prevent people from using it on their workstation, servers, or whatever computer they work with.

            1. 4

              All good points. I operate on a couple different architectures at various scales (all relatively small, Xe3 or so). Most of the shell I write is traditional, POSIX-only Bourne shell, and that’s simply because it’s everywhere without any issue. I could certainly install fish or whatever, or even a standardized version of bash, but it’s an added dependency that only provides moderate convenience at the cost of another ansible script to maintain, and increased attack surface.

              The other issue is that ~1000 servers or so have very little in common with each other. About 300 of them support one application – that’s the biggest chunk: 4 environments of ~75 machines each, all more or less identical.

              The other 700 are a mishmash of versions of different distros, different OSes, different everything; that’s where /bin/sh comes in handy. These are all legacy applications, none of them get any money for new work, they’re all total maintenance mode, and any time I spend on them is basically time lost from the business perspective. I definitely don’t want to knock alternative shells as a tool for an individual contributor, but it’s ultimately a much simpler problem for me to say “I’m just going to write sh” rather than “I’m going to install elvish across a gagillion arches and hope I don’t break anything”.

              We drive most cross-cutting work with ansible (that Xe3 is all vms, basically – not quite all, but like 98%), bash really comes in as a tool for debugging more than managing/maintaining. If there is an issue across the infra – say like meltdown/spectre, and I want to see what hosts are vulnerable, it’s really fast for me (and I have to emphasize – for me – I’ve been writing shell for a lot of years, so that tweaks things a lot) to whip up a shell script that’ll send a ping to Prometheus with a 1 or 0 as to whether it’s vulnerable, deploy that across the infra with ansible and set a cronjob to run it. If I wanted to do that with elvish or w/e, I’d need to get that installed on that heterogeneous architecture, most of which my boss looks at as ‘why isn’t Joe working on something that makes us money.’
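
              To make that concrete, a rough sketch of that kind of one-off check (the Pushgateway address and job name here are made up, and the sysfs file assumes a reasonably recent kernel):

              #!/bin/sh
              # report 1 if the kernel still flags this host as vulnerable to Meltdown, else 0
              f=/sys/devices/system/cpu/vulnerabilities/meltdown
              if [ -r "$f" ] && grep -q '^Vulnerable' "$f"; then v=1; else v=0; fi
              # push the result to a Prometheus Pushgateway (hypothetical host), keyed by hostname
              printf 'meltdown_vulnerable %s\n' "$v" |
                curl -s --data-binary @- "http://pushgateway.example:9091/metrics/job/vuln_check/instance/$(hostname)"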

              I definitely wouldn’t mind a better sh becoming the norm, and I don’t want to knock elvish, but from my perspective, that ship has sailed till it ports, sh is ubiquitous, bash is functionally ubiquitous, trying to get other stuff working is just a time sink. In 10 years, if elvish or fish or whatever is the most common thing, I’ll probably use that.

              1. 1

                The other 700 are a mish mash of versions of different distros, different OSes, different everything, that’s where /bin/sh comes in handy.

                So, essentially, whatever alternative is built needs to use cross-platform design or techniques to run on about anything. Maybe using cross-platform libraries that facilitate that. That or extraction in my other comment should address this problem, eh?

                Far as debugging, alternative shells would bring both a cost and potential benefits. The cost is unfamiliarity might make you less productive since it doesn’t leverage your long experience with existing shell. The potential benefits are features that make debugging a lot easier. They could even outweigh cost depending on how much time they save you. Learning cost might also be minimized if the new shell is based on a language you already know. Maybe actually uses it or a subset of it that’s still better than bash.

          2. 6

            My only real beef with bash is its array syntax. Other than that, it’s pretty amazing actually, especially as compared with pre bash Bourne Shells.

          3. 4

            Would you use a better language that compiles to sh?

            1. 1

              Eh, maybe? Depends on your definition of ‘better.’ I don’t think bash or pure sh are all that bad, but I’ve also been using them for a very long time as a daily driver (I write more shell scripts than virtually anything else; ansible is maybe a close second), so I’m definitely not the target audience.

              I could see that if I wanted to do a bunch of math, I might need to use something else, but if I’m going to use something else, I’m probably jumping to a whole other language. Shell is in a weird place: if the complexity is high enough to need a transpiler, it’s probably high enough to warrant writing something else and installing dependencies.
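
              To make the math point concrete: POSIX shell arithmetic is integer-only, so anything beyond that already means reaching for another tool (awk here, but bc works too). A tiny illustration:

              $ echo $(( 22 / 7 ))              # built-in arithmetic truncates to an integer
              3
              $ awk 'BEGIN { printf "%.4f\n", 22/7 }'
              3.1429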

              I could see a transpiler being interesting for raising that ceiling, but I don’t know how much value it’d bring.

      2. 10

        Could not disagree more. POSIX shell is unpleasant to work with and crufty; my shell scripting went through the roof when I realized that nearly every script I write is designed to be launched by myself, that shebangs are a thing, and that therefore the specific language an executable file is written in is very, very often immaterial. I write all my shell scripts in es and I use them everywhere. Almost nothing in my system cares, because they’re executable files with the path to their interpreter baked in.

        I am really pleased to see alternative non-POSIX shells popping up. In my experience and I suspect the experience of many, the bulk of the sort of scripting that can make someone’s everyday usage smoother need not look anything like bash.

        1. 5

          Truth; limiting yourself to POSIX sh is a sure way to write terribly verbose and slow scripts. I’d rather put everything into a “POSIX awk” that generates shell code for eval when necessary than ever be forced to write semi-complex pure sh scripts.
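
          For the curious, a minimal sketch of that “awk generates shell code for eval” pattern (df -P is used purely as an example data source): awk does the parsing and then prints plain variable assignments for the calling shell to pick up.

          # awk parses the table and emits shell assignments; eval brings them into scope
          eval "$(df -P / | awk 'NR==2 { printf "used_kb=%s avail_kb=%s\n", $3, $4 }')"
          echo "used: ${used_kb} KB, available: ${avail_kb} KB"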

          bash is a godsend for so many reasons, one of the biggest being its process substitution feature.
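
          For readers who haven’t run into it: process substitution lets the output of a command stand in for a file name, so you can feed two command outputs to a program that expects files (bash/zsh, not plain POSIX sh). The file names below are placeholders:

          # compare two sorted listings without creating temporary files
          diff <(sort list-a.txt) <(sort list-b.txt)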

          1. 1

            For my part, I agree – I try to generally write “Mostly sh compatible bash” – defaulting to sh-compatible stuff until performance or maintainability warrant using the other thing. Most of the time this works.

            The other mitigation is that I write lots of very small scripts and really push the worse-is-better / lots of small tools approach. Lots of the scripting pain can be mitigated by progressively combining small scripts that abstract over all the details and just do a simple, logical thing.

            One of the other things we do to mitigate the slowness problem is to design for asynchrony – almost all of the scripts I write are not time-sensitive and run as crons or ats or whatever. We kick ‘em out to the servers and wait the X hours/days/whatever for them to all phone home w/ data about what they did, work on other stuff in the meantime. It really makes it more comfortable to be sh compatible if you can just build things in a way such that you don’t care if it takes a long time.

            All that said, most of my job has been “How do we get rid of the pile of ancient servers over there and get our asses to a disposable infrastructure?”, where I can just expect bash 4+ to be available and not have to worry about sh compatibility.

        2. 1

          A fair cop. I work on a pretty heterogeneous group of machines, and /bin/sh works consistently on all of them: AIX, IRIX, BSD, Linux, all basically the same.

          Despite our (perfectly reasonable) disagreement, I am also generally happy to see new shells pop up. I think they have a nearly impossible task of ousting sh and bash, but it’s still nice to see people playing in my backyard.

      3. 6

        I don’t think you can disqualify a shell just because it’s not POSIX (or “the same”, or whatever your definition of “shell” is). The shell is a tool, and like all tools, its value depends on the nature of your work and how you decide to use it.

        I’ve been using Elvish for more than a year now. I don’t directly manage large numbers of systems by logging into them, but I do interact quite a bit with services through their APIs. Elvish’s native support for complex data structures and its built-in ability to convert to/from JSON make it extremely easy to interact with them, and have allowed me to build very powerful toolkits for doing my work. Having a proper programming language in the shell is very handy for me.

        Also, Elvish’s interactive experience is very customizable and friendly. Not much that you cannot do with bash or zsh, but much cleaner/easier to set up.

        1. 4

          I’ve replied a bunch elsewhere; I don’t necessarily mean to disqualify the work – it definitely looks interesting for an individual contributor somewhere who doesn’t have to manage tools at scale. It’s when you do have to manage tools at scale, or interact with tools that don’t speak the JSON-y API it offers, etc., that it starts to get tricky.

          I said elsewhere in thread, “That’s [the ubiquity of sh-alikes] the hill new shells have to climb, they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.”

          I’d be much more interested if elvish was a superset of sh or bash. I think that part of the reason bash managed to work was that sh was embedded underneath, it was a drop-in replacement. If you’re a guy who, like me, uses a lot of shell to interact with systems, adding new features to that set is valuable, removing old ones is devastating. I’m really disqualifying (as much as I am) on that ground, not just that it’s not POSIX, but that it is less-than-POSIX with the same functionality. That keeps it out of my realm.

          Now this may be biased, but I think I’m the target audience in terms of adoption – you convince a guy like me that your shell is worth it, and I’m going to go drop it on my big pile of servers wherever I’m working. Convincing ICs who deal with their one machine gets you enough adoption to be a curiosity; convince a DevOps/Delivery guy and you get shoved out to every new machine I make, and suddenly you’ve got a lot of footprint that someone is going to have to deal with long after I’m gone and onto Johnny Appleshelling the thing at whatever poor schmuck hires me next.

          Here’s what I’d really like to see: a shell that offers some of these JSON features as an alternative pipe (maybe ||| is the operator, IDK), adds some better number-crunching support, and maybe some OO features, all while remaining a superset of POSIX. That’d make the cost of using it very low, which would make it easy to justify adding to my VM building scripts. And it’d make the value very high: not having to dip out to another tool to do some basic math would be fucking sweet, and having OO features so I could operate on real ‘shell objects’, plus JSON for easier IO, would be really nice as well. Ultimately though you’re fighting uphill against a lot of adoption and a lot of known solutions to these problems (there are patterns for writing shell to be OOish, and there’s awk for output processing; these are things which are unpleasant to learn, but once you do, the problem JSON solves drops to a pretty low priority).

          I’m really not trying to dismiss the work. Fixing POSIX shell is good work, it’s just not likely to be successful by replacing. Improving (like bash did) is a much better route, IMO.

      4. 2

        I’d say you’re half right. You’ll always need to use sh, or maybe bash; they’re unlikely to disappear anytime soon. However, why limit yourself to just sh when you’re working on your local machine? You could even take it a step further and ask why you’re using curl locally when you could use something like HTTPie instead, or any of the other “alternative tools” that make things easier but are hard to justify installing everywhere. Just because a tool is ubiquitous does not mean it’s actually good; it just means that it’s good enough.

        I personally enjoy using Elvish on my local machines; it makes me faster and more efficient at getting things done. When I have to log into a remote system, though, I’m forced to use bash. It’s fine and totally functional, but there are a lot of stupid things about it that I hate. For the most ridiculous and trivial example, bash doesn’t actually save its history until the user logs out, unlike Elvish (or even IPython) which saves it after each input. While it’s a really minor thing, it’s really, really, really useful when you’re testing low-level hardware things that might force an unexpected reboot or power cycle on a server.

        I can’t fault you if you want to stay POSIX, that’s a personal choice, but I don’t think it’s fair to write off something new just because there’s something old that works. With that mindset we’d still be smashing two rocks together and painting on cave walls.

    4. 1

      Looks great! Thank you for sharing! :)

    5. 1

      Impressed. So, a few questions. How did you come up with the name? Also, I think text is simple – did you run into a lot of areas where you thought, hey, wouldn’t it be nice if I could pipe defined objects? Do you think this targets “power users” the same way previous Unix shells do?

      1. 3

        I am glad you asked all these questions! Let me answer the easier two questions first:

        • The name comes from Roguelike games, where elven items are renowned for their high quality. You can read about the name here.

        • Elvish definitely targets power users. In fact, it aims to unleash even more power than traditional Unix shells - there are a lot more interesting things you can do with a powerful language, and an API for the line editor that takes advantage of advanced language features.

        Onto the hardest question, about pipes. Interestingly enough, the need to pipe objects actually arose from nothing more complex than trying to process text data - not just plain text, but a table of text. Such needs are surprisingly common: the outputs of ls -l and ps are both such tables.

        Now traditionally, to process tables, you assume a certain structure: each line represents a row in the table, and each whitespace-delimited field represents a column in a row. If you only care about entire rows, you can just use line-oriented commands like grep and sed; if you care about the columns, you have commands like cut and awk.

        This traditional Unix solution is famed for its simplicity. But it only works under two pretty strict assumptions: a) your rows never contain newlines, and b) your columns never contain whitespace. You can go pretty far with assumption a), but assumption b), not really. Let’s just look at the output of ls -l and ps:

        • Filenames containing spaces are not that uncommon, so the fields in the output of ls -l can embed spaces. Some people think that those filenames are the problem, but I strongly disagree: the only characters disallowed in filenames in Unix are / and \0, and if your tool cannot handle a valid filename, it’s the tool that is broken.

        • The output of ps contains the command line used to start each process, and they also very frequently contain spaces.

        There are ways to solve this, of course - for instance, by quoting the fields that contain whitespace. However, at this point you can no longer do a simple string split to determine the structure of your table, and that’s all awk and cut are trained to do. The simplicity is already lost. Everything should be made as simple as possible, but no simpler.
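
        A quick illustration of how assumption b) falls apart (the file name is hypothetical):

        $ touch 'my file.txt'
        $ ls -l 'my file.txt' | awk '{ print $NF }'
        file.txt

        The embedded space splits the name across two fields, so the “file name column” you get back is only half of the actual name.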

        Now, let’s take a step back and assume that we do have versions of cut and awk that understand quoted fields. Problem solved, right? No. This is still your typical awk program:

        { print "$5,$6"; count[$2]++ }
        

        What are $5, $6, and $2? The answer is that they are the 5th, 6th and 2nd fields of the input. That doesn’t tell you what they actually are - they could be filenames, usernames, PIDs, permission bits, anything. Now imagine that your program is full of those. It gets messy very fast. Worse, some developer might change the output format. Now all your awk programs are broken.

        The antidote to the problem is named fields. Imagine each field advertises its own name, like “pid”, “filename”, “username”. Your awk programs suddenly look like this:

        { print "$pid,$username"; count[$filename]++ }
        

        Isn’t that much easier to read?

        Now let’s take a step back. What have we done? We have reinvented two things - lists and maps. :)

        I hope I have convinced you of the necessity of passing objects in pipes - what I call “value pipes”. Still, there are multiple ways to implement it. You can still use the traditional byte-oriented pipe as transport, and encode all your data structures. After all, Tcl gets away with “everything is a string”, and so can everyone else. Elvish doesn’t use this approach; instead it passes those objects directly over a Go channel. This limits value pipes to in-process use, of course, but you can always do explicit serialization and deserialization. For instance, in Elvish the put command writes a value to the value pipe (think of it as “echo, but just for the value pipe”). Doing this won’t work:

        put [a list] [&a=map] | some-external
        

        However, you can simply add an additional serialization step that converts the values in the value pipe to JSON:

        put [a list] [&a=map] | to-json | some-external
        

        The deserialization command is, unsurprisingly, from-json. In fact, the first demo on the homepage shows how to deserialize the JSON obtained from a curl call.

        I hope that answers your questions! I’ve probably written too much in this thread :)

    6. -1

      It appears that, like every other Unix shell, it is next to impossible to pipe stdout and stderr independently of each other.

      1. 9

        ?

        bash:

        $ fn() { echo stdout; echo stderr >&2; }
        $ fn 2> >(tr a-z A-Z >&2) | sed 's/$/ was this/'
        STDERR
        stdout was this
        

        Perhaps one could argue the syntax is somewhat cumbersome, but far from impossible…

        1. 3

          dash / POSIX sh:

          $ fn() { printf 'stdout\n'; printf 'stderr\n' >&2; }        
          $ fn
          stdout
          stderr
          $ fn 2>/dev/null
          stdout
          $ fn >/dev/null
          stderr
          $ (fn 3>&1 1>&2 2>&3) | tr a-z A-Z  
          stdout
          STDERR
          $ ( ((fn 3>&1 1>&2 2>&3) | tr a-z A-Z) 3>&1 1>&2 2>&3 ) | sed -e 's/std//'
          STDERR
          out
          
          1. 1

            Yes, but I never understood the whole “shuffle file descriptors” thing in sh. I mean, why can’t I do:

            $ make |{ tr a-z A-Z > stdoutfile } 2| more

            What does “3>&1 1>&2 2>&3” even mean? That last example I can’t even make sense of.

            Then again, I don’t manage a fleet of machines—I’m primarily a developer (Unix is my IDE) and really, my only wish is a simple way to pipe stderr to a program like more. And maybe a sane syntax for looping (as I can never remember if it’s end or done and I get it wrong half the time).

            1. 1

              Think of it as variable assignment. Descriptor3 = Descriptor1; Descriptor2 = ..., so it’s just a three way swap of stderr and stdout.

              If you want to be strict about it, the second to last example is incomplete as “stdout” was printed on stderr and “STDERR” was printed on stdout. In the last example the swap is reversed, so that I can run sed on the “real” stdout.
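
              Spelled out step by step, since redirections are processed left to right (a sketch, reusing the fn from the examples above):

              fn 3>&1 1>&2 2>&3 | tr a-z A-Z
              # 3>&1  fd 3 := current fd 1 (saves the pipe going to tr)
              # 1>&2  fd 1 := current fd 2 (fn's stdout now goes to the real stderr)
              # 2>&3  fd 2 := saved fd 3   (fn's stderr now goes into the pipe, so tr upcases it)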

              If you’re wondering why the order of the two output lines changed: it was never guaranteed to be in any particular order.

              1. 1

                Why? It seems pointless. And that still doesn’t do what I would like to do—pipe stdout and stderr to separate programs per my made-up example.

        2. [Comment removed by author]

      2. 5

        Not only is it possible, but it’s also possible to send/receive data on multiple, arbitrary file descriptors, unlike with POSIX shell (dunno about bash). For example:

        pout = (pipe)
        perr = (pipe)
        run-parallel {
          some_command > $pout 2> $perr
          pwclose $pout
          pwclose $perr
        } {
          cat < $pout >&2
          prclose $pout
        } {
          cat < $perr
          prclose $perr
        }
        
        1. 3

          Just to complement what @nomto said, note that in Elvish this can be easily encapsulated in a function (see https://github.com/zzamboni/elvish-modules/blob/master/util.org#parallel-redirection-of-stdoutstderr-to-different-commands-1), so you can then do something like:

          > pipesplit { echo stdout-test; echo stderr-test >&2 } { echo STDOUT: (cat) } { echo STDERR: (cat) }
          STDOUT: stdout-test
          STDERR: stderr-test
          
        2. 1

          Bash can sorta do it. The pipes still need to be backed by “real” “files”, so you’d have to use mkfifo to get close to what your example does.
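
          A rough sketch of that mkfifo route (some_command stands in for whatever you are running): stderr is routed through a named pipe to one consumer while stdout goes down the ordinary pipe to another.

          fifo=$(mktemp -u) && mkfifo "$fifo"
          tr a-z A-Z < "$fifo" &                      # consumer for stderr, via the fifo
          some_command 2> "$fifo" | sed 's/^/out: /'  # stdout takes the normal pipe
          wait                                        # wait for the background reader
          rm -f "$fifo"

          This is roughly what the >(…) process substitution shown earlier in the thread arranges for you behind the scenes.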

          1. 2

            That’s rather clunky; having to create a fifo means that you may leak an implementation detail of a script. I was stumped by this when I wanted to use gpg --passphrase-fd to encrypt data from STDIN: having to go through a fifo is a security risk in that case.