1. 2

    i3wm user here. If I understand correctly, you basically want to hide windows that you don’t need at the moment. I don’t know any Xmonad-specific way, but hopefully it’s similar enough to i3.

    • I have an absurd number of workspaces: 30 on the primary and 10 on the secondary monitor, so 40 in total. The workspaces behind the shortcuts that are hardest for me to hit are typically used for long-running jobs that don’t need frequent checking.

    What currently troubles me the most is that sometimes it’s a little too hard to navigate through these workspaces, and I’m kind of looking for some “superworkspace” mechanism (inspired by KDE Activities) that would allow me to keep everything organized.

    • Sometimes, I switch to tabs. Tabs in i3 are basically what a taskbar is in traditional desktops, except you may run into problems when navigating “directionally” to other windows or monitors. You see, hitting something like $mod+left just moves focus to the left. Coincidentally, in a tabbed container it switches to the left tab. When I’m on the right tab and want to go to the left (window, container), but without first switching to the left tab, I’ve got no way to do that. I would need to disable this behavior and have specific shortcuts for switching between tabs (e.g. Alt+Tab).

    If I could somehow achieve this, it would be a perfect solution for what you describe. Perhaps it would be easier to achieve in Xmonad than in i3 due to the scriptability.

    1. 2

      When I’m on the right tab and want to go to the left (window, container), but without first switching to the left tab, I’ve got no way to do that.

      You can first move the focus to the parent (https://i3wm.org/docs/userguide.html#_focusing_moving_containers), and then move left.
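
      For reference, this is roughly what those bindings look like in an i3 config (the defaults from the userguide; adjust to your own $mod setup):

      # default-style i3 bindings: step up to the parent container, then move left
      bindsym $mod+a focus parent
      bindsym $mod+Left focus left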

      1. 2

        You’re right, thanks for the clarification. In any case, this is still one more step than I would want it to be.

      2. 2

        Damn, I completely forgot about the scratchpad workspace!

      1. 8

        Gawk has all of these. Don’t port anything.

        At some point, however, awk’s limitations start to show. It has no real concept of breaking files into modules, it lacks quality error reporting, and it’s missing other things that are now considered fundamentals of how a language works.

        1. 3

          GAWK is not portable. You could possibly say “neither is Python”, but I would bet that Python is more available than GAWK. And even if it isn’t, if you’re going to have to install a package anyway, wouldn’t a butcher knife be better than a plastic kids’ knife?

          I like AWK; I have used it for many years and would consider myself an AWK expert. But when the question is “what is the right tool for the job?”, the answer is rarely AWK or GAWK.

          1. 10

            The right tool is obviously Perl in this case.

            1. 7

              And the tool is a2p, since forever.

              1. 2

                Came here to say that very thing. The syntax maps more precisely and the idioms fit more completely, thanks to Perl’s history with almost exactly this purpose. The right tool for the right job.

              2. 6

                What do you mean, “gawk is not portable”? Name one platform that has awk and Python but does not have gawk.

                The point is you can either spend your time rewriting or you can just keep using the same code with extensions.

                And if you really really want to rewrite, Perl is a lot closer. This whole article just seems like someone who has arbitrarily decided that python is a “real” language so it’s inherently better to use it.

                1. 8

                  The author blurb has:

                  He has been programming Python since 1999

                  Looks like a case of hammer, nail to me.

                  (and the examples with the yields only convince me more that python is not the better choice)

                  1. 2

                    To be fair, I know far more carpenters with Python hammers than with Awk hammers.

                    I myself have but a ball peen Awk hammer, compared to my sledge Python hammer. So for really stubborn nails, Python is the better choice for me.

                    1. 1

                      I’ve been using Awk for even longer though.

                      The story in https://opensource.com/article/19/2/drinking-coffee-awk was in 1996.

                    2. 0

                      Um, Debian, BSD? Should I go on?

                      1. 3

                        I suppose you mean that gawk features are not portable among the default awk on different OSes, so you shouldn’t use them and pretend that the script will work on any awk. That is totally true.

                        But the OP likely means that you can use gawk explicitly, treating it as a separate language. Gawk is available on almost all Unix OSes, so it is portable.

                        1. 2

                          My point is that if you’re going to have to install a package, you might as well install $proper_programming_language instead of AWK. Unless what you need can be easily done with GAWK alone, it’s not really worth using.

                          Keep in mind that even with GAWK there is no proper support for indexed arrays, no first-class functions, private variables in AWK are footguns, there is no HTTP client, no JSON, etc.
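
                          To illustrate that footgun: awk’s only “private” variables are extra function parameters, so a sketch like the one below is the idiom — and forgetting to list a variable silently makes it global:

                          # i and sum are "locals" only because they appear in the parameter list
                          function total(arr, n,    i, sum) {
                              for (i = 1; i <= n; i++) sum += arr[i]
                              return sum
                          }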

                1. 12

                  NO tags, categories, taxonomy

                  I get that maybe you don’t like Hugo because it has too many features. In general I prefer smaller, faster software, often at the expense of features.

                  But to me taxonomy is essential; it’s the killer feature, it’s what separates a real “SSG” from the others. Jekyll’s lack of taxonomy is literally the one reason I stopped using Jekyll:

                  https://github.com/jekyll/jekyll/issues/6952

                  It was a pain to switch to Hugo, but that one feature made it worth it. Before, I would spend so much time deciding on a category for each post. Since the categories were assigned via folders, you could only choose one category per post. Now with Hugo, I can put one, two, or as many categories as I want on each post.
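
                  For illustration, a hypothetical Hugo post’s front matter with multiple categories (the names here are made up):

                  ---
                  title: "An example post"
                  categories: ["tools", "workflow"]
                  ---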

                  1. 7

                    In soupault, I went for allowing the user to extract and export any metadata from pages to JSON and make their own taxonomies from it (like microformats). But the reason it needs no front matter is that it can look inside HTML.

                    If “no front matter” is a hard design goal, it may be tricky to apply that idea to underblog. But maybe there are other “natural” metadata sources one can use instead? Random idea: put a page in a subdirectory to set its category, and symlink it to other subdirectories to add secondary categories.
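
                    A rough sketch of that idea on a Unix filesystem (the paths are hypothetical):

                    mkdir -p linux shells
                    mv my-post.html linux/               # primary category = directory
                    ln -s ../linux/my-post.html shells/  # secondary category = symlink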

                    In any case, the great thing about SSGs compared to server-side web software is that you can safely use an unmaintained SSG. Their proliferation is not harmful, and ceasing to update or develop one further is not an irresponsible act. If for some people underblog’s feature set and workflow is exactly what they want, nothing wrong with it.

                    1. 1

                      Random idea: put a page in a subdirectory to set its category, and symlink it to other subdirectories to add secondary categories.

                      This is interesting, thank you. I do believe this sort of thing fits nicely with the underlying philosophy, because it introduces no additional complexity and is absolutely transparent. Love it. Added an issue to discuss with other contributors.

                      Thank you!

                      1. 1

                        Good luck getting that working on Windows…

                    2. 3

                      I understand, taxonomy is essential to me too, in many cases. That’s why I’m using Hugo on three different projects.

                      But every once in a while I need to make a small blog. Or recommend a simple way to generate a blog from Markdown. And I can’t really recommend either Jekyll or Hugo when the person basically wants to make a site out of a bunch of md files. I don’t want to dump config files, front matter, layouts/themes and plugins/dependencies on them. At that point, I’d rather recommend they try Ghost or something.

                      The goal of underblog is to provide a way to do it without learning anything new. No new concepts, no multi-layered processing, etc.

                      So, while taxonomy is essential for many projects, the mental and technical overhead of Hugo/Jekyll is also a thing to consider. Underblog is not an alternative to Jekyll or Hugo, just as a bicycle is not an alternative to a container ship.

                      1. 1

                        Re Underblog. Love the name. :)

                        1. 2

                          Thanks :) My first idea was “shyblog”.

                      2. 2

                        Time to plug my static site generator :) https://github.com/xiaq/genblog

                        It is also in the realm of minimalist static site generators, but it generates an index page for each category. Besides this, there are two other differences from the tool in the post:

                        • It does not auto-discover posts, but always relies on an index.toml file, which is a “manifest” of everything. This is perhaps similar to many established generators, but it goes to the extreme of not assuming any filename (other than index.toml) and requiring every filename to be specified in index.toml.

                        • It does not compile Markdown; rather, it expects each post to be an HTML fragment and merely inserts it verbatim into the template. You are free to use whichever Markdown compiler you like. I am not aware of any established generators that do this, and IMO it is the correct design choice to decouple from Markdown compilation.

                        I haven’t written any docs for this tool yet, but if you are interested, take a look at https://github.com/elves/elvish/tree/master/_website, starting with Makefile (which does the Markdown compilation using a combination of pandoc and some ad-hoc macro stuff) and index.toml (the manifest file).

                        1. 1

                          Huh, I have tags and categories on my Jekyll blog. Although the tags I guess I wrote custom plugins for, ..and I also wrote a custom Generator for the category pages .. and I use jekyll-pagination. 😅 Jekyll is a bit more like Jenkins; it comes with little in core and you need plugins to make it really do anything. I don’t really have a problem with that architecture honestly.

                          To be fair to the author, it looks like this is a little experiment to get a custom generator up for some personal or professional site. Going back, I really wish I had written my own instead of using Jekyll. It gave me some things out of the box or with plugins, but I was wrestling a lot with it .. and liquid templates are garbage!

                          If this is a small personal project, I’m sure it will grow to gradually fulfill the author’s needs.

                          1. 0

                            This is pretty much what I wanted to say. Jekyll is minimalist, but it’s Ruby, and easy enough to extend.

                            For categories I’ve been using this plugin [0], last updated in 2012 but still working. It generates a page for each category. I also have a page with some Liquid I wrote that lists every category [1].

                        1. 10

                          I’d love to see some code snippets to help me understand what made Nim so effortless as compared to a ‘scripting’ (god I hate that term :) language.

                          1. 7

                            I can give a couple small examples of toy things I’ve done at work:

                            This is a program that prints out each directory in $PATH on its own line.

                            import os, strutils
                            
                            # Print each directory in $PATH on its own line (";" separates entries on Windows).
                            var path = getEnv("PATH")
                            echo path.replace(";", "\r\n")
                            

                            This is a program that spits out 5 lines of the character “-”, with a specified background color.

                            import terminal, strutils, os, tables
                            
                            var color = "green"
                            
                            let colorTable = newTable[string, BackgroundColor]([
                                ("black", bgBlack),
                                ("red", bgRed),
                                ("green", bgGreen),
                                ("yellow", bgYellow),
                                ("blue", bgBlue),
                                ("magenta", bgMagenta),
                                ("cyan", bgCyan),
                                ("white", bgWhite)
                            ])
                            
                            # Use the first command-line argument as the color name, if given.
                            if paramCount() > 0:
                                color = paramStr(1)
                            color = color.toLowerAscii
                            if color in colorTable:
                                setBackgroundColor(stdout, colorTable[color], false)
                            
                            # Print 5 full-width lines of "-".
                            for i in 1..5:
                                echo "-".repeat(terminalWidth() - 1)
                            resetAttributes(stdout)
                            

                            The key point as far as executables are concerned is that, unlike with Go, C#, or Java, both of these files are one invocation of nim c filename.nim away from being executables that I can then copy into my $PATH. Compilation times for me are under 10 seconds, which isn’t instant, but is faster than Go or C# in this case. VS Code gives live error updates as well.

                            Go, for comparison, requires that you have a func main(), and that there is only one instance of main in a given folder. C# requires either MSBuild or understanding how to pass dependencies to csc.exe, and then you usually have to have a lot of DLLs around. Python and Ruby can both use shebang lines to achieve similar concision, but if you have multiple files, shebang lines don’t bundle them up for you, and you’re outside of the easy/obvious path for executing code in those languages.

                            1. 7

                              Compilation times for me are under 10 seconds

                              Something is wrong here. If it takes multiple seconds to compile these examples (or any similarly simple “script”), can you check with nim -v whether your compiler is compiled in release mode? You should see the line active boot switches: -d:release in there.

                              I have a ~10-year-old CPU and I have never seen such slow compilation times.

                              1. 4

                                These compilation times are on Windows, running inside the VS Code PowerShell prompt.

                                I’ll check more next time I’m editing the programs, but like I said, they feel pretty zippy compared to Go’s compile times.

                                1. 1

                                  running inside the VS Code PowerShell prompt

                                  Isn’t that a Javascript application? Does the compiler run in Javascript as a result, or does it run as native code? I’m curious, since I figure Nim-to-native-code would be the best way to benchmark the compiler, on a hunch that they optimized the Nim-to-C part the most.

                                  1. 1

                                    It sounds like something is really wrong with your Go setup…

                                    1. 2

                                      Just fired up a Windows VM to see if Go compile times on it were awful, still well under a second for the 10 line example program (reading path, etc).

                                2. 3

                                  Compilation times for me are under 10 seconds, which isn’t instant, but is faster than Go or C# in this case.

                                  I don’t know about C#, but I’d be surprised if compiling a similar go program took 10s or more.

                                  That doesn’t take away from how good an alternative Nim can be for short programs like these.

                                  1. 2

                                    Go is one invocation of go build filename.go away from being an executable that you can copy into your path.

                                    package main
                                    
                                    import (
                                    	"fmt"
                                    	"os"
                                    	"strings"
                                    )
                                    
                                    func main() {
                                    	path := os.Getenv("PATH")
                                    	fmt.Println(strings.Replace(path, ":", "\r\n", -1))
                                    }
                                    

                                    Which compiled on my machine in 0.239s total. I have been a fan of/followed Nim since it was Nimrod; it is awesome, just wanted to clear up that point.

                                    1. 1

                                      For reference, the nim code

                                      import os, strutils
                                      
                                      var path = getEnv("PATH")
                                      echo path.replace(":", "\r\n")
                                      

                                      took 0.885s to compile (more than 3x slower than Go).

                                      1. 2

                                        Ah, my perception might be off. The Nim compiler is a lot noisier than the Go one.

                                        1. 2

                                          Could also be a Windows thing – I’ve never used the Go toolchain on Windows.

                                          1. 1

                                            Yeah. Well, I’ve almost never witnessed subsecond compiles for the Go or Nim toolchains. My powerful hardware is on Windows, where file access seems to be just slow enough to make things slower. My Linux hardware is an oldish VPS that I probably need to migrate to a newer machine, and which shares hardware with other VMs.

                                      2. 0

                                        Now, can you have 3 of those, all with “func main” in them, that can depend on other files in the same folder, without having to massage how you ask Go to build the files?

                                        It’s not that you can’t make small things in Go, just that it’s easier to do so in Nim. 10 lines vs 3 lines, in this case.

                                        At any rate, if you like Go, feel free to keep using it. I happen to like Nim’s ergonomics better.

                                        1. 1

                                          10 lines vs 3 lines, in this case.

                                          Of which I ended up writing 4 (could have been 3 if I used an auto-closer for {}). With my very basic tooling, in an esoteric editor (Kakoune), imports were added automatically, as was package main. So in terms of ergonomics, it was basically a wash.

                                          Now, can you have 3 of those, all with “func main” in them, that can depend on other files in the same folder, without having to massage how you ask Go to build the files?

                                          No… but why would I want to? I mean, as criticisms go, I don’t entirely understand this one. The use case is writing multiple files that act as both entry-points and libraries in the same directory with circular dependencies?

                                          1. 2

                                            It’s less “Go sucks” and more “I like how Nim structures its projects in the small better than Go”.

                                            I don’t think Nim makes it easy to get circular dependencies.

                                            For me, it’s like getting a /cmd directory without having to manage two levels of directory hierarchy to do so.

                                            My first real project in Nim was https://idea.junglecoder.com/view/idea/277, which was half the lines of the Go version. That was mostly down to the Nim standard library containing a type to do what I wanted (a key/value collection that remembers insertion order). In Go I had to roll my own.
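
                                            That type is presumably OrderedTable from Nim’s tables module; a minimal sketch of what it gives you:

                                            import tables
                                            
                                            var t = initOrderedTable[string, int]()
                                            t["first"] = 1
                                            t["second"] = 2
                                            for k, v in t:  # iterates in insertion order
                                                echo k, " = ", v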

                                            I will fully admit to not being the world’s most effective Go programmer. Nim fits how I think a bit more comfortably, and it puts enough less friction into building small things that I’ve made a lot more of them in Nim of late.

                                      3. 1

                                        The key point as far as executables are concerned is that, unlike with Go, C#, or Java, both of these files are one invocation of nim c filename.nim away from being executables that I can then copy into my $PATH. Compilation times for me are under 10 seconds, which isn’t instant, but is faster than Go or C# in this case. VS Code gives live error updates as well.

                                        OK, this makes sense to me, and since we’re talking about preferences and subjective, squishy things like what it ‘feels’ like to develop in a language, I won’t argue your point. But compare that experience to no compile step at all, and I see a fairly strong argument for continuing to love my Python, while respecting that these tools are all incredible in their own right and do indeed provide a considerably faster execution path for the majority of code cases.

                                        1. 1

                                          Indeed. If you want to compile and run at the same time, there is nim c -r filename.nim, which will run the program after compilation.

                                          If you’re already happy with Python, however, Nim may not do anything too amazing for you, other than potentially being a bit easier to distribute (that’s just a guess on my part, however; I’ve had issues trying to distribute Python programs in the past, specifically pygame, but it’s been a while since I’ve tried to distribute Python software).

                                          Nim also has static typing baked in, which I like. I know python has some static typing tools built in these days as well, but yeah.

                                          1. 2

                                            Python’s distribution story is a known issue. There are various efforts afoot to improve things, but none of them are in core.

                                            There’s no doubt that languages like Nim, Go, C/C++ and Rust have advantages in that department.

                                            Tools are all about trade-offs :)

                                    1. 4

                                      If you are looking for alternative interactive shells, also check out Elvish: https://elv.sh https://github.com/elves/elvish

                                      1. 7

                                        The article has a fair point. Sure, “naming things” is usually not about naming but abstracting, and once you have good abstractions, good names follow.

                                        But there are a few more cases where naming is genuinely hard. Sometimes there is a gap between the formal language (programming languages) and informal language (English). In well-established subfields like web programming, you have a term for almost anything you will need frequently - “validation”, “middleware”, “render”, “routing” - these all have well-defined meanings. Now imagine that you are doing web programming before someone has invented those terms. What do you do? You have to invent the terminology first and it’s not straightforward.

                                        Sometimes naming is hard because your program involves multiple domains, and terms have different meanings in those domains. For instance, “task manager” can either mean a GUI application to manage tasks on an OS, or a process-internal scheduler to manage the lifecycle of some repeating tasks. Now imagine you are writing a program that deals with both types of task managers. How do you disambiguate? It’s hard.

                                        Still, the point of the article is quite valid, and I would say it covers more than half of the cases where “naming is hard”. But there are times when naming is genuinely hard, because language is hard.

                                        1. 1

                                          Impressed. So, a few questions. How did you come up with the name? Also, I think text is simple; did you run into a lot of areas where you thought, hey, wouldn’t it be nice if I could pipe defined objects? Do you think this targets “power users” the same way previous Unix shells do?

                                          1. 3

                                            I am glad you asked all these questions! Let me answer the two easier questions first:

                                            • The name comes from roguelike games, where elven items are renowned for their high quality. You can read about the name here.

                                            • Elvish definitely targets power users. In fact, it aims to unleash even more power than traditional Unix shells - there are a lot more interesting things you can do with a powerful language, and an API for the line editor that takes advantage of advanced language features.

                                            Onto the hardest question, about pipes. Interestingly enough, the need to pipe objects actually arose from nothing more complex than trying to process text data - not just plain text, but a table of text. Such needs are surprisingly common: the outputs of ls -l and ps are both such tables.

                                            Now traditionally, to process tables, you assume a certain structure: each line represents a row in the table, and each whitespace-delimited field represents a column in a row. If you only care about entire rows, you can just use line-oriented commands like grep and sed; if you care about the columns, you have commands like cut and awk.
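
                                            For example, under those assumptions, pulling out a column is a one-liner (a sketch; skipping the “total” line):

                                            # print the owner column (3rd field) of ls -l
                                            ls -l | awk 'NR > 1 { print $3 }'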

                                            This traditional Unix solution is famed for its simplicity. But it only works under two pretty strict assumptions: a) your rows never contain newlines, and b) your columns never contain whitespace. You can go pretty far with assumption a), but assumption b), not really. Let’s just look at the output of ls -l and ps:

                                            • Filenames containing spaces are not that uncommon, so the fields in the output of ls -l can embed spaces. Some people think that those filenames are the problem, but I strongly disagree: the only characters disallowed in filenames in Unix are / and \0, and if your tool cannot handle a valid filename, it’s the tool that is broken.

                                            • The output of ps contains the command line used to start each process, and they also very frequently contain spaces.

                                            There are ways to solve it, of course. For instance, by quoting the fields containing whitespace. However, at this point, you can no longer do a simple string split to determine the structure of your table, and that’s all awk and cut are trained to do. The simplicity is already lost. Everything should be made as simple as possible, but no simpler.

                                            Now, let’s take a step back and assume that we do have versions of cut and awk that understand quoted fields. Problem solved, right? No. This is still your typical awk program:

                                            { print $5 "," $6; count[$2]++ }
                                            

                                            What are $5, $6, and $2? The answer is that they are the 5th, 6th and 2nd fields of the input. That doesn’t tell you what they actually are - they could be some kind of filenames, usernames, PIDs, permission bits, anything. Now imagine that your program is full of those. It gets messy very fast. Worse, some developers might make changes to the output format. Now all your awk programs are broken.

                                            The antidote to the problem is named fields. Imagine each field advertises its own name, like “pid”, “filename”, “username”. Your awk programs suddenly look like this:

                                            { print $pid "," $username; count[$filename]++ }
                                            

                                            Isn’t that much easier to read?

                                            Now let’s take a step back. What have we done? We have reinvented two things - lists and maps. :)

                                            I hope I have convinced you of the necessity of passing objects in pipes - what I call “value pipes”. Still, there are multiple ways to implement it. You can still use the traditional byte-oriented pipe as transport, and encode all your data structures. After all, Tcl gets away with “everything is a string”, and so can everyone else. Elvish doesn’t use this approach; instead it passes those objects directly in a Go channel. This limits the value pipes to being in-process, of course, but you can always do explicit serialization and deserialization. For instance, in Elvish the put command writes a value to the value pipe (think of it as “echo, but just for the value pipe”). Doing this won’t work:

                                            put [a list] [&a=map] | some-external
                                            

                                            However, you can simply add an additional serialization step that converts the values in value pipe to JSON:

                                            put [a list] [&a=map] | to-json | some-external
                                            

                                            The deserialization command is, unsurprisingly, from-json. In fact, the first demo on the homepage shows how to deserialize the JSON obtained from a curl call.
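
                                            A sketch of that kind of pipeline (the URL and field name here are hypothetical, not the exact homepage demo):

                                            curl -s https://api.example.com/users | from-json | each [u]{ echo $u[name] }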

                                            I hope that answers your questions! I’ve probably written too much in this thread :)

                                          1. 4

                                            A Túrin Turambar turún’ ambartanen. Another shell that isn’t shell; shells that aren’t shells aren’t worth using, because shell’s value is its ubiquity. Still, interesting ideas.

                                            This brought to you with no small apology to Tolkien.

                                            1. 13

                                              I’ve used the Fish shell daily for 3-4 years and find it very much worth using, even though it isn’t POSIX compatible. I think there’s great value in alternative shells, even if you’re limited in copy/pasting shell snippets.

                                              1. 12

                                                So it really depends on the nature of your work. If you’re an individual contributor who NEVER has to do devops-type work or actually operate a production service, you can absolutely roll this way and enjoy your highly customized, awesomely powerful alternative-shell experience.

                                                However, if you’re like me, and work in environments where being able to execute standardized runbooks is absolutely critical to getting the job done, running anything but bash is buying yourself a fairly steady diet of thankless, grinding, and ultimately pointless pain.

                                                I’ve thought about running an alternative shell at home on my systems that are totally unconnected with work, but the cognitive dissonance of using anything other than bash keeps me from going that way even though I’d love to be using Xonsh by the amazing Anthony Scopatz :)

                                                1. 5

                                                  I’d definitely say so – I’d probably use something else if I were an IC – and ICs should! ICs should be in the habit of trying lots of things, even stuff they don’t necessarily like.

                                                  I’m a big proponent of Design for Manufacturing, an idea I borrow from the widgety world of making actual things. The idea, as defined by an MFE I know, is that one should build things such that: “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.”

                                                  For a delivery-ops guy like me, working in the tightly regulated, safety-critical world of healthcare, having reproducible, reliable architecture that’s cheap to replace and repair is critical. Adding a new shell doesn’t move the needle towards reproducibility, so its value has to come from reliability or cheapness, and once you add the fact that most architectures are not totally homogeneous, the cost goes up even more.

                                                  That’s the hill new shells have to climb: they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.

                                                  1. 2

                                                    “The design lends itself to being easily reproduced identically in a reliable, cost-effective manner.” “That’s the hill new shells have to climb,”

                                                    Or, like with the similar problem posed by C compilers, they just provide a method to extract to whatever the legacy shell is for widespread, standard usage.

                                                    EDIT: Just read comment by @ac which suggested same thing. He beat me to it. :)

                                                    1. 2

                                                      I’ve pondered transpilers a bit before; for me personally, I’ve learned enough shell that one doesn’t really provide much benefit, but I like that idea a lot more than a distinct, non-compatible shell.

                                                      I very much prefer a two-way transpiler. Let me make my old code into new code, so I can run the new code everywhere and convert my existing stuff to the new thing, and let me go back to old code for the machines where I can’t afford to figure out how to get the new thing working. That’s a really big ask, though.

                                                      The way we solve this at $work is basically by writing lots of very small amounts of shell, orchestrated by another tool (ansible and Ansible Tower, in our case). This covers about 90% of the infrastructure, with the remaining bits being so old and crufty (and so resource-poor from an organization perspective) that bugs are often tolerated rather than fixed.

                                                  2. 4

                                                    The counter to alternative shells sounds more like a reason to develop and use alternative shells that coexist with a standard shell. Maybe even with some state synchronized so your playbooks don’t cause effects the preferred shell can’t see and vice versa. I think a shell like newlisp supporting a powerful language with metaprogramming sounds way better than bash. Likewise, one that supports automated checking that it’s working correctly in isolation and/or how it uses the environment. Also maybe something on isolation for security, high availability, or extraction to C for optimization.

                                                    There’s lots of possibilities. Needing to use stuff in a standard shell shouldn’t stop them. So, they should replace the standard shell somehow in a way that still lets it be used. I’m a GUI guy whose been away from shell scripting for a long time. So, I can’t say if people can do this easily, already are, or whatever. I’m sure experts here can weigh in on that.

                                                  3. 7

                                                    I work primarily in devops/application architecture – having alternative shells is just a big ol’ no – tbh I’m trying to wean myself off bash 4 and onto pure sh, because I have to deal with some pretty old machines for some of our legacy products. Alternative shells are cool, but don’t scale well. They also present increased attack surface for potential hackers to privesc through.

                                                    I’m also an odd case: I think shell is a pretty okay language; warty, sure, but not as bad as people make it out to be. It’s nice having a tool that I can rely on being everywhere.

                                                    1. 14

                                                      I work primarily in devops/application architecture

                                                      Alternative shells are cool, but don’t scale well.

                                                      Non-ubiquitous shells are a little harder to scale, but the cost should be controllable. It depends on what kind of devops you are doing:

                                                      • If you are dealing with a limited number of machines (machines whose names you probably pick yourself), you can simply install Elvish on each of those machines. The website offers static binaries ready to download, and Elvish is packaged in a lot of Linux distributions. It is going to be a very small part of the process of provisioning a new machine.

                                                      • If you are managing some kind of cluster, then you should already be doing most devops work via some kind of cluster management system (e.g. Kubernetes), instead of ssh’ing directly into the cluster nodes. Most of your job involves calling into some API of the cluster manager, from your local workstation. In this case, the number of Elvish instances you need to install is one: that on your workstation.

                                                      • If you are running some script in a cluster, then again, your cluster management system should already have a way of pulling in external dependencies - for instance, a Python installation to run Python apps. Elvish has static binaries, which is the easiest kind of external dependency to deal with.

                                                      Of course, these are ideal scenarios - maybe you are managing a cluster but it is painful to teach whatever cluster management system to pull in just a single static binary, or you are managing some old machines with an obscure CPU architecture that Elvish doesn’t even cross-compile to. However, those difficulties are by no means absolute, and when the benefit of using Elvish (or any other alternative shell) far outweighs the overheads, large-scale adoption is possible.

                                                      Remember that bash – like every shell other than the original Bourne shell – also started out as an “alternative shell”, and it still hasn’t reached 100% adoption, but that doesn’t prevent people from using it on their workstation, servers, or whatever computer they work with.

                                                      1. 4

                                                        All good points. I operate on a couple different architectures at various scales (all relatively small, Xe3 or so). Most of the shell I write is traditional, POSIX-only Bourne shell, and that’s simply because it’s everywhere without any issue. I could certainly install fish or whatever, or even a standardized version of bash, but it’s an added dependency that only provides moderate convenience, at the cost of another ansible script to maintain and increased attack surface.

                                                        The other issue is that ~1000 servers or so have very little in common with each other, about 300 of them support one application, that’s the biggest chunk, 4 environments of ~75 machines each, all more or less identical.

                                                        The other 700 are a mishmash of versions of different distros, different OSes, different everything; that’s where /bin/sh comes in handy. These are all legacy applications, none of them get any money for new work, they’re all total maintenance mode; any time I spend on them is basically time lost from the business perspective. I definitely don’t want to knock alternative shells as a tool for an individual contributor, but it’s ultimately a much simpler problem for me to say “I’m just going to write sh” than “I’m going to install elvish across a gagillion arches and hope I don’t break anything”.

                                                        We drive most cross-cutting work with ansible (that Xe3 is all vms, basically – not quite all, but like 98%), bash really comes in as a tool for debugging more than managing/maintaining. If there is an issue across the infra – say like meltdown/spectre, and I want to see what hosts are vulnerable, it’s really fast for me (and I have to emphasize – for me – I’ve been writing shell for a lot of years, so that tweaks things a lot) to whip up a shell script that’ll send a ping to Prometheus with a 1 or 0 as to whether it’s vulnerable, deploy that across the infra with ansible and set a cronjob to run it. If I wanted to do that with elvish or w/e, I’d need to get that installed on that heterogenous architecture, most of which my boss looks at as ‘why isn’t Joe working on something that makes us money.’
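
                                                        As a sketch of that kind of check (the Pushgateway address and metric name here are hypothetical):

                                                        #!/bin/sh
                                                        # report 1/0 depending on whether the kernel reports Meltdown vulnerability
                                                        v=0
                                                        grep -qi '^vulnerable' /sys/devices/system/cpu/vulnerabilities/meltdown 2>/dev/null && v=1
                                                        echo "meltdown_vulnerable $v" | curl -s --data-binary @- \
                                                            http://pushgateway.example:9091/metrics/job/meltdown/instance/$(hostname)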

                                                        I definitely wouldn’t mind a better sh becoming the norm, and I don’t want to knock elvish, but from my perspective, that ship has sailed till it ports, sh is ubiquitous, bash is functionally ubiquitous, trying to get other stuff working is just a time sink. In 10 years, if elvish or fish or whatever is the most common thing, I’ll probably use that.

                                                        1. 1

                                                          The other 700 are a mishmash of versions of different distros, different OSes, different everything; that’s where /bin/sh comes in handy.

                                                          So, essentially, whatever alternative is built needs to use cross-platform design or techniques to run on about anything, maybe using cross-platform libraries that facilitate that. That, or the extraction approach in my other comment, should address this problem, eh?

                                                          Far as debugging, alternative shells would bring both a cost and potential benefits. The cost is unfamiliarity might make you less productive since it doesn’t leverage your long experience with existing shell. The potential benefits are features that make debugging a lot easier. They could even outweigh cost depending on how much time they save you. Learning cost might also be minimized if the new shell is based on a language you already know. Maybe actually uses it or a subset of it that’s still better than bash.

                                                      2. 6

                                                        My only real beef with bash is its array syntax. Other than that, it’s pretty amazing actually, especially compared with pre-bash Bourne shells.

                                                        1. 4

                                                          Would you use a better language that compiles to sh?

                                                          1. 1

                                                            Eh, maybe? Depends on your definition of ‘better.’ I don’t think bash or pure sh are all that bad, but I’ve also been using them for a very long time as a daily driver (I write more shell scripts than virtually anything else; ansible is maybe a close second), so I’m definitely not the target audience.

                                                            I could see that if I wanted to do a bunch of math, I might need to use something else, but if I’m going to use something else, I’m probably jumping to a whole other language. Shell is in a weird place: if the complexity is high enough to need a transpiler, it’s probably high enough to warrant writing something else and installing dependencies.

                                                            I could see a transpiler being interesting for raising that ceiling, but I don’t know how much value it’d bring.

                                                      3. 10

                                                        Could not disagree more. POSIX shell is unpleasant to work with and crufty; my shell scripting went through the roof when I realized: nearly every script I write is designed to be launched by myself; shebangs are a thing; therefore, the specific language that an executable file is written in is very, very often immaterial. I write all my shell scripts in es and I use them everywhere. Almost nothing in my system cares, because they’re executable files with the path to their interpreter baked in.

                                                        I am really pleased to see alternative non-POSIX shells popping up. In my experience and I suspect the experience of many, the bulk of the sort of scripting that can make someone’s everyday usage smoother need not look anything like bash.

                                                        1. 5

                                                          Truth; limiting yourself to POSIX sh is a sure way to write terribly verbose and slow scripts. I’d rather put everything into a “POSIX awk” that generates shell code for eval when necessary than ever be forced to write semi-complex pure sh scripts.

                                                          bash is a godsend for so many reasons, one of the biggest being the process substitution feature.

                                                          1. 1

                                                            For my part, I agree – I try to generally write “Mostly sh compatible bash” – defaulting to sh-compatible stuff until performance or maintainability warrant using the other thing. Most of the time this works.

                                                            The other mitigation is that I write lots of very small scripts and really push the worse-is-better / lots of small tools approach. Lots of the scripting pain can be mitigated by progressively combining small scripts that abstract over all the details and just do a simple, logical thing.

                                                            One of the other things we do to mitigate the slowness problem is to design for asynchrony – almost all of the scripts I write are not time-sensitive and run as crons or ats or whatever. We kick ‘em out to the servers and wait the X hours/days/whatever for them to all phone home w/ data about what they did, work on other stuff in the meantime. It really makes it more comfortable to be sh compatible if you can just build things in a way such that you don’t care if it takes a long time.

                                                            All that said, most of my job has been “How do we get rid of the pile of ancient servers over there and get our asses to a disposable infrastructure?”, where I can just expect bash 4+ to be available and not have to worry about sh compatibility.

                                                          2. 1

                                                          A fair cop; I work on a pretty heterogeneous group of machines, and /bin/sh works consistently on all of them: AIX, IRIX, BSD, Linux, all basically the same.

                                                            Despite our (perfectly reasonable) disagreement, I am also generally happy to see new shells pop up. I think they have a nearly impossible task of ousting sh and bash, but it’s still nice to see people playing in my backyard.

                                                          3. 6

                                                            I don’t think you can disqualify a shell just because it’s not POSIX (or “the same”, or whatever your definition of “shell” is). The shell is a tool, and like all tools, its value depends on the nature of your work and how you decide to use it.

                                                            I’ve been using Elvish for more than a year now. I don’t directly manage large numbers of systems by logging into them, but I do interact quite a bit with services through their APIs. Elvish’s native support for complex data structures, and the built-in ability to convert to/from JSON, makes it extremely easy to interact with them, and has allowed me to build very powerful toolkits for doing my work. Having a proper programming language in the shell is very handy for me.

                                                            Also, Elvish’s interactive experience is very customizable and friendly. Not much that you cannot do with bash or zsh, but much cleaner/easier to set up.

                                                            1. 4

                                                              I’ve replied a bunch elsewhere; I don’t mean to necessarily disqualify the work – it definitely looks interesting for an individual contributor somewhere. It’s when you have to manage tools at scale, or interact with tools that don’t speak the JSON-y API it offers, etc., that it starts to get tricky.

                                                              I said elsewhere in thread, “That’s [the ubiquity of sh-alikes] the hill new shells have to climb: they have to get over ‘sh is just easier to use, it’s already there.’ That’s a very big hill.”

                                                              I’d be much more interested if elvish were a superset of sh or bash. I think that part of the reason bash managed to work was that sh was embedded underneath; it was a drop-in replacement. If you’re a guy who, like me, uses a lot of shell to interact with systems, adding new features to that set is valuable, but removing old ones is devastating. I’m really disqualifying it (as much as I am) on that ground: not just that it’s not POSIX, but that it is less-than-POSIX with the same functionality. That keeps it out of my realm.

                                                              Now this may be biased, but I think I’m the target audience in terms of adoption – you convince a guy like me that your shell is worth it, and I’m going to go drop it on my big pile of servers wherever I’m working. Convincing ICs who deal with their one machine gets you enough adoption to be a curiosity; convince a DevOps/delivery guy and you get shoved out to every new machine I make, and suddenly you’ve got a lot of footprint that someone is going to have to deal with long after I’m gone and onto Johnny Appleshelling the thing at whatever poor schmuck hires me next.

                                                              Here’s what I’d really like to see: a shell that offers some of these JSON features as an alternative pipe (maybe ||| is the operator, IDK), adds some better number-crunching support, and maybe some OO features, all while remaining a superset of POSIX. That’d make the cost of using it very low, which would make it easy to justify adding to my VM-building scripts. It’d make the value very high: not having to dip out to another tool to do some basic math would be fucking sweet, and having OO features so I could operate on real ‘shell objects’ and JSON for easier IO would be really nice as well. Ultimately, though, you’re fighting uphill against a lot of adoption and a lot of known solutions to these problems (there are patterns for writing shell to be OOish, there’s awk for output processing; these are things which are unpleasant to learn, but once you do, the problem JSON solves drops to a pretty low priority).

                                                              I’m really not trying to dismiss the work. Fixing POSIX shell is good work, it’s just not likely to be successful by replacing. Improving (like bash did) is a much better route, IMO.

                                                            2. 2

                                                              I’d say you’re half right. You’ll always need to use sh, or maybe bash; they’re unlikely to disappear anytime soon. However, why limit yourself to just sh when you’re working on your local machine? You could even take it a step further and ask why you are using curl locally when you could use something like HTTPie instead, or any of the other “alternative tools” that make things easier but are hard to justify installing everywhere. Just because a tool is ubiquitous does not mean it’s actually good; it just means that it’s good enough.

                                                              I personally enjoy using Elvish on my local machines; it makes me faster and more efficient at getting things done. When I have to log into a remote system I’m forced to use bash, which is fine and totally functional, but there are a lot of stupid things that I hate. For the most ridiculous and trivial example, bash doesn’t actually save its history until the user logs out, unlike Elvish (or even IPython), which saves it after each input. While it’s a really minor thing, it’s really, really, really useful when you’re testing low-level hardware things that might force an unexpected reboot or power cycle on a server.
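
                                                              (For what it’s worth, bash can be coaxed into roughly the same behavior with a common snippet - though that’s one more thing to configure on every box:)

                                                              # in ~/.bashrc: append to the history file after every command, not at logout
                                                              PROMPT_COMMAND='history -a'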

                                                              I can’t fault you if you want to stay POSIX; that’s a personal choice. But I don’t think it’s fair to write off something new just because there’s something old that works. With that mindset we’d still be smashing two rocks together and painting on cave walls.

                                                            1. 9

                                                              Want to find the magical ffmpeg command that you used to transcode a video file two months ago?

                                                              Just dig through your command history with Ctrl-R. Same key, more useful.

                                                              (To be fair, you can do this in bash with history | grep ffmpeg, but it’s far fewer keystrokes in Elvish :)

                                                                Sorry, what? Bash has this by default as well (at least in Ubuntu, and every other Linux distribution I’ve used). ^R gives autocomplete on history by the last matching command.

                                                              1. 10

                                                                  I hoped I had made that clear by saying “same key”. The use case is that you might have typed several ffmpeg commands, and with bash’s one-item-at-a-time ^R it is really hard to spot the interesting one. Maybe I should make this point clearer.

                                                                1. 6

                                                                  That’s handy, but it is easy to add this to bash and zsh with fzf:

                                                                  https://github.com/junegunn/fzf#key-bindings-for-command-line

                                                                  With home-manager and nix, enabling this functionality is just a one-liner:

                                                                  https://github.com/danieldk/nix-home/blob/f6da4d02686224b3008489a743fbd558db689aef/cfg/fzf.nix#L6

I like this approach, because it follows the Unix philosophy of using small orthogonal utilities. If something better than fzf comes out, I can replace it without replacing my shell.
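
(For reference, outside of nix the manual bash setup is roughly just sourcing the files fzf ships; the exact paths depend on how fzf was installed:)

    # The upstream install script generates ~/.fzf.bash, which wires up
    # the keybindings and completions:
    [ -f ~/.fzf.bash ] && source ~/.fzf.bash
    # With a distro package it may instead be something like:
    # source /usr/share/fzf/key-bindings.bash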

                                                                  Structured data in pipelines seems very nice though!

                                                                  1. 1

                                                                    What exactly does programs.fzf.enableBashIntegration do? I just enabled it, and it seems to have made no difference.

                                                                    1. 2

                                                                      https://github.com/rycee/home-manager/blob/05c93ff3ae13f1a2d90a279a890534cda7dc8ad6/modules/programs/fzf.nix#L124

So, it should add fzf keybindings and completions. Do you also have programs.bash.enable set to true so that home-manager gets to manage your bash configuration?

                                                                      1. 1

programs.bash.enable

                                                                        Ah, enabling that did the trick (no need to set initExtra). Thanks!

                                                                        I did however have to get rid of my existing bashrc/profile. Looks like I need to port that over to home-manager …

                                                                        1. 2

                                                                          Yeah, been there, done that. In the end it’s much nicer. Now when I install a new machine, I have everything set up with a single ‘home-manager switch’ :).

                                                                2. 4

I’ve always found bash’s Ctrl-R hard to use properly; in comparison, Elvish’s history (and location) matching is like a mini-fzf, and it’s very pleasant to use.

                                                                  1. 1

                                                                    I think the idea here is that it shows you more than one line of the list at once, while C-r is sometimes a bit fiddly to get to exactly the right command if there are multiple matches.

                                                                    1. 1

For zsh try «bindkey '^R' history-incremental-pattern-search-backward» in .zshrc. Now you can type e.g. «^Rpy*http» to find «python -m http.server 1234» in your history. Still shows only one match, but it’s easier to find the right one.

                                                                      1. 1

                                                                        I use https://github.com/dvorka/hstr for history search on steroids and I am very happy with it.

                                                                      1. 3

                                                                        Arc was arguably “Scheme, but with a quasiquoted macro system.” Which is still a very good idea. I wish Racket wouldn’t be so heavy-handed with syntax transformers. But the rigidity has some nice benefits, like Racket’s interactive macro expansion system.

                                                                        It’s interesting to diff the evaluation model of Scheme against other dialects of Lisp. I think Scheme got it right with a single shared namespace for both functions and variables, but in Emacs Lisp it’s sort of nice to be able to name a function and a variable the same thing. After all, you usually want to write functionality directly related to the state that it’s manipulating, so it’s a natural fit.

                                                                        I managed to port Arc to JS: https://github.com/lumen-language/lumen/blob/f7bfd4dca71ed1e4eb380e7e819a302825d37936/arc.l#L2234-L2245

                                                                        The code is mostly a copy-paste from the original arc3.1 sources. It’s kind of amusing to open it in a separate tab and flip between them. https://github.com/arclanguage/anarki/blob/f01d3f9c661eed05511711a0f3388ca2a1d34fa2/news.arc#L400-L411

In general, Scheme is really easy to implement, especially compared to CL and Elisp. I once wrote a Scheme macro in Elisp with a couple hours of effort.

One trouble is that it’s hard to find good, production-quality Lisp codebases. They exist, but you have to go digging for them. Abuse (a game engine) comes to mind: http://abuse.zoy.org/browser/abuse/trunk/data/lisp

                                                                        You can also add type inference with relatively little effort, which is just delightful. https://web.archive.org/web/20070610012057/http://www.cs.indiana.edu/classes/c311/

                                                                        https://web.archive.org/web/20070615124421fw_/http://www.cs.indiana.edu/classes/c311/a10.html

                                                                        (I’ve been studying that for a few weeks off and on and still don’t quite grok it, but that’s only due to my deficiencies rather than lisp’s.)

                                                                        1. 1

                                                                          Arc was arguably “Scheme, but with a quasiquoted macro system.” Which is still a very good idea. I wish Racket wouldn’t be so heavy-handed with syntax transformers. But the rigidity has some nice benefits, like Racket’s interactive macro expansion system.

                                                                          I am not familiar with the different flavors of macro systems (I am still learning Scheme’s); can you elaborate on the difference? By “heavy-handed” do you mean Racket’s macro system is too flexible or too inflexible?

                                                                          1. 1

                                                                            Whoa, I missed this reply. Sorry!

                                                                            Yes, Racket’s macro system is… well, you can do a lot with it, if you’re very smart and you have a lot of time.

                                                                            For example, here’s xdef from arc3.2, written in scheme: https://github.com/shawwn/arc3.2/blob/788c41f274116b276206475e521853d22657e195/ac.scm#L573-L579

                                                                            (define-syntax xdef
                                                                              (syntax-rules ()
                                                                                ((xxdef a b)
                                                                                 (let ((nm (ac-global-name 'a))
                                                                                       (a b))
                                                                                   (namespace-set-variable-value! nm a)
                                                                                   a))))
                                                                            

                                                                            Here it is in Lumen:

                                                                            (define-macro xdef (name value)
                                                                              (let-unique (nm)
                                                                                `(with ,nm (ac-global-name ',name)
                                                                                   (namespace-set-variable-value! ,nm ,value))))
                                                                            

There are a few reasons the Scheme-style version is longer (all of them valid), but there are effective techniques for sidestepping the problems that Scheme’s hygienic macros try to address, and Arc uses those techniques well.

                                                                            I wrote that xdef macro in Lumen’s interactive docs, if you want to play with it:

                                                                            https://docs.ycombinator.lol/tutorial/macros

                                                                        1. 1

                                                                          Is there a changelog?

                                                                          1. 1

                                                                            If you mean a list of changes from the previous edition, I don’t think there is.

                                                                          1. 1

I have the third edition on my bookshelf and it has been a good reference. I think this book may be directed more at people who have to implement Scheme than at people learning it; e.g. there are continual references to the differences between R5RS and ANSI Scheme [3rd edition]. At the end of the third edition there are solutions to selected exercises, so it is good for checking some of your knowledge right away.

                                                                            edit: Also, I’m waiting for “The Little Schemer” so I’ll be able to actually compare the two later

                                                                            1. 1

                                                                              I am not trying to implement Scheme but I still find this book quite useful.

I had some experience with functional programming, and was most curious about the Scheme-specific stuff, like continuations, Scheme’s flavor of macros, the use of tail recursion, etc. Scheme is also famous for its minimalism, so I also wanted to get a feel for how the builtin facilities are designed and structured.

                                                                              I once read R6RS but found it a bit too abstract. I think this book actually covers more or less the same material as R6RS, and it’s probably fair to describe it as a pedagogical version of R6RS.

Chapters 1-3 give you a very succinct yet deep overview of the language, and also provide exercises for you to check your understanding. I would say chapters 2 and 3 remind me of K&R, although I certainly wish there were more exercises.

Chapters 4-11 of this book cover the builtin facilities. Again, it is like R6RS, but feels more pedagogical. I have only just finished chapter 3, so this is a general impression.

                                                                              Chapter 12, “extended examples” has been lauded by many. I haven’t dived into that chapter yet, but again it looks quite educational.

In summary, if you are in a similar situation to mine - having some general experience with FP, wanting to learn Scheme thoroughly, but finding RnRS a bit too dense - this is the textbook I would recommend.

                                                                            1. 4

                                                                              Playing with Racket, and porting Git’s bash completion script to Elvish.

                                                                              1. 2

                                                                                Just played with Elvish. It crashed[1]; that’s what you get for using Go instead of something (with algebraic types) like Rust. :-P

                                                                                [1] runtime error: invalid memory address or nil pointer dereference

                                                                                1. 1

                                                                                  Can you file a bug on b.elv.sh about how to reproduce the crash? Thanks!

                                                                                2. 1

                                                                                  How is Elvish different to fish?

                                                                                  1. 2

                                                                                    It’s quite different. Fish has a clean non-POSIX syntax and sensible UI defaults, but in terms of programming capacity you are basically constrained to manipulating strings and (to some degree) arrays, which is roughly the same as bash.

                                                                                    Elvish has lexical scoping, namespaced modules, first-class function values (closures), nestable lists and maps, exceptions, etc. You may want to check out the page on its philosophy and maybe the language reference.

Elvish is also slightly more portable than Fish. It can run on Windows as a purely native executable, without Cygwin or msys2, although this support is still quite experimental at the moment.

                                                                                1. 4

                                                                                  MacBook Pro (Retina, 13-inch, Early 2015).

I bought a Mac almost exclusively for a better experience of reading text. The retina screen (not just the high resolution, but also its brightness) and OS X’s font rendering engine made it actually pleasant to read text on a screen, especially if you read a lot of Chinese text: with all those complex strokes in Chinese characters, reading Chinese on low-resolution screens was never a comfortable experience [1].

[1] Anti-aliasing exists, but often works miserably for small Chinese text. A common workaround on Linux was to configure fontconfig to fall back to a bitmap font for smaller text, but tweaking fontconfig is really a hassle; to make things worse, browsers often do their own font rendering, which means the workaround doesn’t work in browsers - which is where I happened to read most Chinese text.

                                                                                  1. 4

                                                                                    Anyone who has written or attempted to write a completion script would recognize this as a feat of engineering. Congrats!

                                                                                    1. 1

                                                                                      Thanks! It’s not done yet but I’m now confident it can be done, after plowing through those initial hurdles. It was fun at first, but after several weeks it became a chore.

I knew that completion was nasty, but it was even nastier than I expected. I should write about this in another blog post, but I learned that the bash-completion project actually disregards some of the bash API and re-parses bash in bash!!! Argh.

                                                                                      I need a break now and will return to completion later. If anyone wants to help let me know :) I think it would be a fun task for people who like to program in APL or brainfuck :)

                                                                                      1. 2

                                                                                        I learned that the bash-completion project actually disregards some of the bash API and re-parses bash in bash!!!

That is… interesting. But not very surprising, considering that the complete -F/-C API is quite barebones: it just gives you a list of words, without any information about the syntactic structure.

You asked about zsh’s completion system, but I am not sure you would really want to emulate it. Putting aside the complexity of zsh’s programmable completion API, you would also need to emulate the zsh language, which is probably only a “fun task” by your definition - and I think you already have enough of those from emulating bash :)

                                                                                        1. 1

Yeah, that’s a good point. I have seen some zsh constructs in the completion scripts that I’m totally unfamiliar with. And I learned there are 2 completion APIs in zsh – an old one and a new one – although I don’t know much more than that.

So yeah, it’s probably out of scope. But I got jealous of the superior completions during my research :)

                                                                                          On top of that, zsh also doesn’t reprint the prompt every time you hit TAB, which is starting to annoy me about bash, now that I know there’s a nicer way to do it! I’m relying on GNU readline, and I’m not sure it has a way to avoid that. I think zsh has its own non-reusable terminal code that does all this.

                                                                                          1. 2

And I learned there are 2 completion APIs in zsh – an old one and a new one – although I don’t know much more than that.

                                                                                            Virtually all completion scripts nowadays use the “new” system. The old system was deprecated some 20 years ago, IIRC :)

                                                                                            I think zsh has its own non-reusable terminal code that does all this.

                                                                                            Yes, it’s called ZLE (Zsh Line Editor). GNU readline is pretty limited; if you are serious about interactive experience then at some point you will either want to make your own thing, or use a more advanced library. The most popular Python library these days is Python Prompt Toolkit, although it’s probably not super relevant as IIUC you are going to remove the Python runtime at some point?

FWIW, Elvish (unsurprising surprise plug :) also implements its own terminal magic in the edit/tty package. Writing to the terminal is actually pretty easy: you need to know just a few VT100 sequences for moving the cursor and clearing an area (very useful for incremental updates). Reading is considerably harder, because the sequences are much more complicated, and different terminals send different sequences.
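
To give a flavor, here are the kinds of output sequences involved, written as plain printf calls (these CSI sequences are standard VT100/ANSI, though terminal support varies):

    printf '\033[2J'      # clear the whole screen
    printf '\033[H'       # move the cursor to row 1, column 1
    printf '\033[5;10H'   # move the cursor to row 5, column 10
    printf '\033[K'       # clear from the cursor to the end of the line
    printf '\033[1A'      # move the cursor up one line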

                                                                                            1. 1

The main reason I don’t want to write my own is because I need vi bindings (set -o vi), and I think other people want emacs bindings.

                                                                                              I googled and found this: https://github.com/elves/elvish/issues/728

                                                                                              If I weren’t a vi user I would have more flexibility :)

I feel like the editing mode is pretty closely coupled with the display issue? Or maybe readline has a hook where you can control how the completions are displayed below the prompt? I would like to reuse the keybindings but customize the display.

I looked at Python Prompt Toolkit about a year ago. I remember it being too “big” to reuse, but it looks like it’s about 30K lines of pure Python, and depends on the wcwidth package. So it might be possible. It’s really less about the code size and more about what “dialect” of Python it uses. I don’t use decorators, multiple inheritance, etc., and “OPy” will never support that.

                                                                                              Yeah I’m looking at the code right now, and it’s quite big and uses a lot of decorators. It’s unfortunate because the alternatives I looked at didn’t have vi bindings.

                                                                                              Nice post here:

                                                                                              http://ballingt.com/prompt-toolkit/

                                                                                              This whole blog from the author of the bpython shell has a lot of great stuff about terminals. I checked out the elvish code and will look more if I decide to do anything here (although I think I just have to live with readline for the time being).

                                                                                              1. 1

The main reason I don’t want to write my own is because I need vi bindings (set -o vi), and I think other people want emacs bindings.

I may be over-ambitious, but with Elvish my aim is to eventually make the editor programmable enough that users can implement whatever kind of bindings they like. Anyway, using readline as a starter is always a safe choice, and you can decide later which route you want to go :)

                                                                                                This whole blog from the author of the bpython shell has a lot of great stuff about terminals.

                                                                                                Thanks! I am reading through http://ballingt.com/blog/ now. FWIW, it is not accurate to say that ballingt is “the author” of bpython; he is one of its more prominent contributors.

                                                                                            2. 2

On a side note, I think that “boiling the ocean” and writing completion scripts for everything is also doable. The man pages and help messages of most commands are pretty regular; sure, they are not regular enough for one to write a generic parser for, but given a specific command (e.g. git), writing a parser for its dialect of manpage or help message sounds entirely feasible, especially so if you start from the troff sources of the manpage, which have semantic markup. Moreover, for things like the GNU utils, the help messages are so highly regular (they are generated programmatically) that you can virtually look at the source code and derive a CNF to parse them.
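
As a toy illustration of how regular GNU-style --help output is, a few lines of shell can already extract the flag names (a real parser would of course also need argument types and descriptions):

    # Pull out everything that looks like a short or long flag from --help output.
    ls --help 2>&1 |
      grep -oE '(^|[[:space:]])--?[A-Za-z0-9][A-Za-z0-9-]*' |
      tr -d ' ' |
      sort -u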

After writing several (I suspect 3-6) such parsers, I imagine you should be able to identify several parameters that fully describe the dialect, and then write a parameterized parser to handle virtually every manpage or help message. You can even come up with some heuristics for identifying the parameters automatically (machine learning, anyone?).

                                                                                              A side advantage is that you will also get updates for free. Bash, zsh and fish’s completion scripts all require manual updates when a new flag/subcommand/etc. is added.

                                                                                              1. 1

One thing I learned is that bash-completion already parses the help of ls and other GNU tools! It does it dynamically, every time you hit tab.

This is in contrast to zsh, which tends to bake the logic into scripts (I think in a semi-automated fashion as well). This seems to cause a version-skew issue, although I’m not sure how bad it is in practice.

I would be interested in trying to come up with better zsh-style completions to share between Oil, Elvish, and other shells. I talked with someone about that here:

                                                                                                https://news.ycombinator.com/item?id=18061851

                                                                                                I think the general idea you outline might work – it just depends on how much work it is. Bash completion scripts are nasty, but the API is actually quite small! It’s just the compgen/complete/compopt builtins, plus some special variables like COMP_LINE and reading COMPREPLY.
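
For illustration, a complete (if trivial) use of that API fits in a few lines; ‘mytool’ and its flags here are made up:

    # Completion function: bash calls this when you hit TAB after 'mytool'.
    _mytool() {
        local cur=${COMP_WORDS[COMP_CWORD]}
        # compgen filters the candidate word list against what's typed so far.
        COMPREPLY=( $(compgen -W "--help --verbose build test" -- "$cur") )
    }
    complete -F _mytool mytool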

Also, a zsh dev replied here, and he actually said he contributed the bash emulation to zsh! But it does have the problem I mentioned – bash completion is not up to par with zsh completion!

                                                                                                https://www.reddit.com/r/oilshell/comments/9n7taq/running_bash_completion_scripts_with_osh/

                                                                                                1. 1

                                                                                                  Actually the ZSH dev makes the same argument I would have made – “declarative” doesn’t quite cut it. I think that’s what you mean by “CNF”.

                                                                                                  https://www.reddit.com/r/oilshell/comments/9n7taq/running_bash_completion_scripts_with_osh/

                                                                                                  Declarative will work for common commands. But the thing I really care about is complex commands like git, and that’s where it falls down IMO. The git team is CONSTANTLY updating this script:

                                                                                                  https://github.com/git/git/blob/master/contrib/completion/git-completion.bash

It has 852 commits from 2006 to 2018. It is already somewhat “declarative” because they invoke git --list-cmds rather than duplicating information. But I think the problem is just inherently difficult.

git is the command I need the most help with! I can’t live without the prompt either, so we’re implementing the ugly (but thankfully simple) $PS1 language too.
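
(For anyone unfamiliar, the $PS1 “language” is a set of backslash escapes that the shell expands when printing the prompt, e.g. the classic:)

    # \u = user, \h = host, \w = working directory, \$ = '#' for root, '$' otherwise
    PS1='\u@\h:\w\$ '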

                                                                                                  1. 1

                                                                                                    Replying to both of your comments here:

One thing I learned is that bash-completion already parses the help of ls and other GNU tools! It does it dynamically, every time you hit tab.

                                                                                                    That’s new to me. It is closer to what I think is the most promising approach, i.e. “bespoke parsers” for different commands.

                                                                                                    I would be interested in trying to come up with better zsh-style completions to share between Oil, Elvish, and other shells.

                                                                                                    Yes, yes, yes :)

                                                                                                    Actually the ZSH dev makes the same argument I would have made – “declarative” doesn’t quite cut it. I think that’s what you mean by “CNF”.

                                                                                                    Ah, I typed “CNF” because I jumbled “CFG” (context-free grammar) and “BNF” (Backus Normal Form)…

                                                                                                    I am not sure about your use of “declarative”. What I have in mind is:

1. A parser reads the command’s help message and generates completion code in $SHELL’s language.

2. $SHELL runs the generated completion code.

                                                                                                    Maybe you are referring to the idea of writing a generic parser so that you only need to give it a few parameters? I do imagine that for complex commands like git some customization may be required and the generic parser may not be sufficient.

                                                                                                    Also, if we are going to have a common intermediate format, I envision the completion pipeline to look like this:

1. A parser reads the command’s help message and generates this intermediate format.

                                                                                                    2. A converter converts the intermediate format to $SHELL’s language.

3. $SHELL runs the generated completion code.

Step 1 is what can be shared, and I expect the bulk of the heavy lifting to live there. The converter in step 2 needs to be written for each different shell, but it only needs to be written once per shell.
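
To make the pipeline concrete, it could look like this from the outside (every tool name and file name here is hypothetical):

    # Step 1: parse the help output into the shared intermediate format.
    git --help | parse-help > git.completions.json
    # Step 2: a per-shell converter turns that into native completion code.
    convert-completions --shell=bash git.completions.json > git-completion.bash
    # Step 3: the shell just loads the generated code.
    source git-completion.bash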

Also, there is a distinction between what I am proposing and how bash-completion and git’s completion read help messages at runtime. I am proposing that we do this at build time; that makes the pipeline much simpler. The downsides are:

• You need to re-build every time a new version of the command comes out to pick up new flags.

• If the completion script is newer than the command on the user’s system, the user will see flags that are not actually supported.

                                                                                                    However, I feel these are relatively minor downsides. Maybe we can remedy them by monkey-patching lists of valid flags at runtime, but that obviously leads to duplicate work.

                                                                                                    1. 1

                                                                                                      The git team is CONSTANTLY updating this script

                                                                                                      I have also long suspected that such scripts are so long partially because the language (bash) is not expressive enough. I am now reading https://github.com/git/git/blob/master/contrib/completion/git-prompt.sh and trying to write an Elvish version with feature parity. Hopefully it will be < 1k lines of code :)

                                                                                          1. 6

I am on vacation and thus free from $DAYJOB; I am working on Elvish, trying to finish off some major refactoring efforts.

                                                                                            Other than that I am also reading the Perl 6 Language Documentation.