1. 6

    Working on a little toy game. I don’t know why, but it has been a lot of fun.

    Last weekend I wrote a little on-GPU raycaster thing in an attempt to render soft shadows from the game scenery (read: squares and circles). It’s a “stealth” game, so light and shadow are very important, and you can turn lights on/off or shoot them out. Anyway, it worked okay, but there were a lot of quality tradeoffs to get decent performance with multiple lights (I got up to 7 at 60fps). But I had never done graphics or written fragment shaders or anything before so I was pretty proud and pleased with how it looked.

    I showed it to a friend who does Actual Graphics Programming to ask about optimizing it. And he laughed, and held me in his arms, and whispered that it was going to be okay. And then he introduced me to signed distance fields. So yesterday I replaced my little hand-rolled crappy raycaster with something that uses 3D SDFs to cast shadows (even though it’s a 2D game… it makes sense) and the difference is unbelievable. Sharp, pixel-perfect shadows from arbitrary scene geometry, and I can do 17 lights at 60fps. But the quality difference is so large that it doesn’t even seem fair to compare – it’s not shadowing a low-resolution occlusion map; it’s shadowing the actual scene geometry.

    I found this article to be a very good overview of the general technique:


    With interactive demos that explain the general idea behind SDF raymarching.

    (Although I use Aaltonen’s “improved” soft shadows, and I came up with a better technique to mitigate banding (I should email the author about that…). Also I am doing it in 3D, so some objects cast longer shadows than others, but the conversion to 3D is completely trivial… SDFs are crazy.)
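    For anyone curious what the core of the technique looks like: the classic SDF soft-shadow trick (the basic version, not Aaltonen’s improved one) is to march a shadow ray from the shaded point toward the light, stepping by the scene distance, and track k·h/t as a penumbra estimate. A minimal 2D Python sketch — my shader is GLSL and the scene data here is made up purely for illustration:

```python
import math

# Sphere-trace a shadow ray from point p toward the light, stepping by the
# scene distance and tracking k*h/t as a penumbra estimate (the classic
# SDF soft-shadow trick; the circles are hypothetical scene geometry).

def scene_sdf(p, circles):
    """Signed distance from p to the nearest circle in a 2D scene."""
    return min(math.hypot(p[0] - cx, p[1] - cy) - r for cx, cy, r in circles)

def soft_shadow(p, light, circles, k=8.0):
    """Return 0.0 (fully shadowed) .. 1.0 (fully lit) for point p."""
    dx, dy = light[0] - p[0], light[1] - p[1]
    dist = math.hypot(dx, dy)
    dx, dy = dx / dist, dy / dist
    res, t = 1.0, 1e-3
    while t < dist:
        h = scene_sdf((p[0] + dx * t, p[1] + dy * t), circles)
        if h < 1e-4:
            return 0.0                 # ray hit geometry: fully occluded
        res = min(res, k * h / t)      # closest-approach penumbra estimate
        t += h                         # safe step: nothing is nearer than h
    return res
```

    Larger k gives sharper shadow edges; the banding the parent post mentions comes from this estimate jumping between discrete march steps.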

    Anyway, this weekend I’m going to integrate the new “lighting engine” into the actual game and then I can stop futzing with lights. And also tell everyone I meet how cool SDFs are. Then I can move on to procedural level generation. I found this paper:


    To be the most compelling procedural level generation technique that I’ve heard of so far. I’m very attached to the idea of generating a level using a graph with production grammars, and then realizing that graph into a game. The “easy” version of this is a rectilinear planar embedding, where each “room” in the level is a square and they’re connected by corridors. But that’s boring. Ma et al.‘s technique is the only interesting example I have seen of this approach. So I’m going to implement something similar, although I have a lot of ideas that I think might improve the final output.

    1. 1

      That sounds like so much fun. SDFs are freaking awesome. Valve used them for the custom sprays in Team Fortress 2, I think. More resources:

      Been struggling with the motivation to actually build something to use them in, however. Life, pandemic, etc., etc…

    1. 2

      A few years ago I made a rough first pass at a website to teach and play Hex, during a time when HexWiki was offline. I never really got it to a usable state, but recently a friend expressed interest in playing Hex-by-mail with me, which inspired me to return to it and get the server working.

      I haven’t written any OCaml since I left my job last year, so I’m looking forward to stretching out those muscles. If you ignore the cert error, you can see the very rough state of it here. Clearly the first thing to do is to get Let’s Encrypt set up. :)

      1. 14

        What’s going on here? How did this get to the top of lobste.rs with 26 upvotes? I’m happy for the OP that they could get their system to work, but as far as I can tell, the story here is “package manager used to manage packages.” We have been doing that for decades. Is there any way the community can get a lever to push back on thin stories like this one?

        1. 25

          Would it change your opinion if the article mentioned that the nix shell being used here is entirely disposable and this process leaves no mark in your OS setup? Also that even if this required some obscure versions of common system dependencies you could drop into such a shell without worrying about version conflicts or messing up your conventional package manager?

          I agree that the article is thin on content, but I don’t think you can write this story off as “package manager used to manage packages.” I think nix shell is very magical in the package management world.

          1. 6

            I could do that with Docker too, and it would not leave a trace either.

            1. 17

              Yes, but then you’d be inside a container, so you’d have to deal with the complexities of that, like mounting drives, routing network traffic, etc. With nix shell, you’re not really isolated; you’re just inside a shell session that has the necessary environment variables to provide just the packages you’ve asked for.

              Aside from the isolation, the nix shell is also much more composable. It can drop you into a shell that simultaneously has a strange Java, Python, and Erlang environment, all compiled with your personal fork of GCC, and you’d just have to specify your GCC as an override for that to happen.

              1. 4

                I get that, but I have to go through the learning curve of nix-shell, while I already know Docker, since I need it for my job anyway. I am saying that there are more ways to achieve what the article is talking about. It is fine that the author is happy with their choice of tools, but it is very unremarkable given the title and how many upvotes the article got.

                1. 5

                  Why not learn nix and then use it at work as well :) Nix knows how to package up a nix-defined environment into a docker container and produce very small images, and you don’t even need docker itself to do that. That’s what we do at work. I’m happy because as far as I’m concerned Nix is all there is and the DevOps folks are also happy because they get their docker images.

                  1. 3

                    I work in a humongous company where we are less free to choose our tools atm, so even if I learned Nix, it would be a very tough sell…

              2. 3

                As someone who hasn’t used Docker, it would be nice to see what that looks like. I’m curious how the two approaches compare.

                1. 6

                  I think that the key takeaway is that with Docker, you’re actually running a container with a full-blown OS inside. I have a bias against it, which is basically just my opinion, so take it with a grain of salt.

                  I think that once the way to solve the problem of “I need to run some specific version of X” becomes “let’s just virtualize a whole computer and OS, because dependency handling is broken anyway,” we, as a category, simply gave up. It is side-stepping the problem.

                  Now, the approach with Nix is much more elegant. You have fully reproducible dependency graphs, and with nix-shell you can drop yourself in an environment that is suitable for whatever you need to run regardless of dependency conflicts. It is quite neat, and those shells are disposable. You’re not running in a container, you’re not virtualizing the OS, you’re just loading a different dependency graph in your context.

                  See, I don’t use Nix at all because I don’t have these needs, but I played with it and was impressed. I dislike our current approach of “just run a container”; it feels clunky to me. I think Docker has its place, especially in DevOps and such, but using it to solve “I need to run Python 2.x and stuff conflicts with my Python 3.x install” is not the way I’d like to see our ecosystem going.

                  In the end, from a very high-level, almost stratospheric, point of view: both the Docker and nix-shell workflows come down to the developer typing some commands in the terminal and having what they need running. So from the mechanical standpoint of needing to run something, they’ll both solve the problem. I just don’t like that the evergreen “just run a container” is now the preferred solution.

                  Just be aware that this is an opinion from someone heavily biased against containers. You should play with both of them and decide for yourself.

                  1. 3

                    This comment is a very good description of why I’ve never tried Docker (and – full disclosure – use Nix for things like this).

                    But what I’m really asking – although I didn’t make this explicit – is a comparison of the ergonomics. The original post shows the shell.nix file that does this (although as I point out in another comment, there’s a shell one-liner that gets you the same thing). Is there an equivalent Dockerfile?

                    I was surprised to see Docker brought up at all because my (uninformed) assumption is that making a Docker image would be prohibitively slow or difficult for a one-off like this. I assumed it would be clunky to start a VM just to run a single script with a couple dependencies. But the fact that that was offered as an alternative to nix-shell makes me think that I’m wrong, and that Docker might be appropriate for more ad-hoc things than I expected, which makes me curious what that looks like. It points out a gap in my understanding that I’d like to fill… with as little exertion of effort as possible. :)

                    1. 4

                      But the fact that that was offered as an alternative to nix-shell makes me think that I’m wrong, and that Docker might be appropriate for more ad-hoc things than I expected, which makes me curious what that looks like. It points out a gap in my understanding that I’d like to fill… with as little exertion of effort as possible. :)

                      I think containers are a perfectly capable solution to this. The closest thing you can use would probably be toolbox.


                      It would allow you to even provide a standardized environment which would be decoupled from the deployment itself (if that makes sense). It also mounts $HOME.

                      1. 3

                        I use Nix, but also have experience with Toolbox.

                        I would recommend that most people use Toolbox over nix-shell. With toolbox you can create one-off containers in literally seconds (it’s two commands). After entering the container you can just dnf install whatever you need. Your home directory gets mounted, so you do not have to juggle volumes, etc. If you need to create the same environment more often, you can write a Dockerfile and build your toolbox containers with podman. The upstream containers that Fedora provides are also just built using Dockerfiles.

                        The post shows a simple use case, but if you want to do something less trivial, it often entails learning Nix the language and nixpkgs (and all its functions, idioms, etc.). And the Nix learning curve is steep (though it is much gentler if you are familiar with functional programming). This makes the toolbox approach orders of magnitude easier for most people: you basically need to know toolbox create and toolbox enter, and you can use all the knowledge you already have.

                        However, a very large shortcoming of toolbox/Dockerfiles/etc. is reproducibility. Sure, you can pass around an image and someone else will have the same environment. But Nix allows you to pin all dependencies plus the derivations (e.g. as a git SHA). You can give someone your Nix flake and they will have exactly the same dependency graph and build environment guaranteed.

                        Another difference is that once you know Nix, it is immensely powerful for defining packages. Nix is a Turing-complete functional language, so nixpkgs can provide a lot of powerful abstractions. I dread every time I have to create or modify an RPM spec file, because it is so primitive compared to writing a Nix derivation.

                        tl;dr: most people will want to use something like Toolbox; it is familiar and provides many of the same benefits as e.g. nix-shell (isolated, throw-away environments, with your home directory available). However, if you want strong reproducibility across systems and a more powerful packaging/configuration language, learning Nix is worth it.

                      2. 3

                        A cool aspect of Docker is that it has a gazillion images already built and available for it. So depending on what you need, you’ll find a ready-made image you can put to good use with a single command. If there are no images that fill your exact need, then you’ll probably find an image that is close enough and can be customised. You don’t need to create images from scratch. You can remix what is already available. In terms of ergonomics, it is friendly and easy to use (for these simple cases).

                        So, nixpkgs has a steeper learning curve in comparison to Dockerfiles. It might be simpler to just run Docker. What I don’t like is what is happening inside Docker, and how the solution to what looks like a simple problem involves running a whole OS.

                        I’m aware that you can have containers without an OS, as described in this thread, but that is not something I often see people using in the wild.

                      3. 1

                        Nit-pick: AFAIK one doesn’t really need Alpine or any other distro inside the container. It’s “merely” for convenience. AFAICT it’s entirely possible to e.g. run a Go application in a container without any distro. See e.g. https://www.cloudbees.com/blog/building-minimal-docker-containers-for-go-applications

                  2. 3

                    Let’s assume nix shell is actual magic — like sorcerer-level, wave-my-hand-and-airplanes-become-dragons (or vice versa) magic — well, this article just demonstrated that immense power by pulling a coin out of a deeply uncomfortable kid’s ear while pulling on her nose.

                    I can’t speak for the previous comment’s author, but those extra details, or indeed any meat on the bones, would definitely help justify this article’s otherwise nonsensical ranking.

                    1. 2

                      Yeah, I agree with your assessment. This article could just as well have the title “MacOS is so fragile, I consider this simple thing to be an issue”. The trouble with demonstrating nix shell’s power is that for all the common cases, you have a variety of ad-hoc solutions. And the truly complex cases appear contrived out of context (see my other comment, which you may or may not consider to be turning airplanes into dragons).

                  3. 19

                    nix is not the first thing most devs would think of when faced with that particular problem, so it’s interesting to see reasons to add it to your toolbox.

                    1. 9

                      Good, as it is not supposed to be the first thing. Learning a fringe system with a new syntax just to do something trivial is not supposed to be the first thing at all.

                    2. 4

                      I also find it baffling that this story has more upvotes than the excellent and original code-visualization article currently also very high. Probably some nix upvote ring pushing this.

                      1. 12

                        Or folks just like Nix I guess? 🤷

                        1. 11

                          Nix is cool and people like it.

                          1. 5

                            I didn’t think this article was amazing, but I found it more interesting than the code visualization one, which lost me at the first, “From this picture, you can immediately see that X,” and I had to search around the picture for longer than it would have taken me to construct a find command to find the X it was talking about.

                            This article, at least, caused me to say, “Oh, that’s kind of neat, wouldn’t have thought of using that.”

                          2. 6

                            This article is useless. It is way simpler (and the Python way) to just create a 2.7 virtualenv and run “pip install psycopg2 graphviz”. No need to write a nix file and then write a blog post to convince yourself you didn’t waste your time!

                            Considering all nix posts get upvoted regardless of content, it’s about time we have a “nix” tag added to the site.

                            1. 14

                              This article is not useless just because you don’t see its value.

                              I work mainly with Ruby and have to deal with old projects. There are multiple instances where the Ruby way (using a Ruby version manager) did not work because it was unable to install an old Ruby version or gem on my new development machine. Using a nix-shell did the job every time.

                                just create a 2.7 virtualenv and run “pip install psycopg2 graphviz”

                              What do you do if this fails due to some obscure dependency problem?

                              1. 4

                                What do you do if this fails due to some obscure dependency problem?

                                Arguably you solve it by pinning dependency versions in the pip install invocation or requirements.txt, as any Python developer not already using Nix would do.

                                This article is not useless just because you don’t see its value.

                                No, but it is fairly useless because it doesn’t do anything to establish that value, except to the choir.

                                1. 2

                                  In my experience there will be a point where your dependencies fail due to mismatched OpenSSL or glibc versions and so on. No amount of pinning dependencies will protect you against that. The only way out is to update your dependencies and the version of your language. But that would just detract from your goal of getting an old project to run, or is straight up impossible.

                                  Enter Nix: You pin the entire environment in which your program will run. In addition you don’t pollute your development machine with different versions of libraries.

                                  1. 3

                                    Arguably that’s just shifting the burden of effort based on a value judgement. If your goal is to get an old project to run while emphasizing the value of incurring zero effort in updating it, then obviously Nix is a solution for you and you’ll instead put the effort into pinning its entire runtime environment. If, however, your value to emphasize is getting the project to run then it may well be a more fruitful choice to put the effort into updating the project.

                                    The article doesn’t talk about any of the hairier details you’re speaking to, it just shows someone taking a slightly out of date Python project and not wanting to put any personal effort into updating it… but updating it by writing a (in this case relatively trivial) Python 3 version and making that publicly available to others would arguably be the “better” solution, at least in terms of the value of contributing back to the community whose work you’re using.

                                    But ultimately my argument isn’t with the idea that Nix is a good solution to a specific problem; it’s that this particular article doesn’t really make that point and certainly doesn’t convincingly demonstrate the value of adding another complex bit of tooling to the toolkit. All the points you’ve raised would certainly help make that argument, but they’re sadly not present in this particular article.

                                2. 1

                                  Just out of curiosity, I’m also dealing with ancient ruby versions and use nix at work but I couldn’t figure out how to get old enough versions, is there something that helps with that?

                                    1. 1

                                      Thank you, very helpful!

                                      1. 1

                                        Do note this method will get you a ruby linked to dependencies from the same checkout. In many cases this is what you want.

                                        If instead you want an older ruby but linked to newer libraries (eg, OpenSSL) there’s a few extra steps, but this is a great jumping off point to finding derivations to fork.

                                        1. 1

                                          Do note this method will get you a ruby linked to dependencies from the same checkout. In many cases this is what you want.

                                          Plus glibc, OpenSSL and other dependencies with many known vulnerabilities. This is fine for local stuff, but definitely not something you’d want to do for anything that is publicly visible.

                                          Also, note that mixing different nixpkgs versions does not work when an application uses OpenGL, Vulkan, or any GPU-related drivers/libraries. The graphics stack is global state in Nix/NixOS and mixing software with different glibc versions quickly goes awry.

                                    2. 2

                                      This comment mentions having done something similar with older versions by checking out an older version of the nixpkgs repo that had the version of the language that they needed.

                                      1. 2

                                        Like others already said you can just pin nixpkgs. Sometimes there is more work involved. For example this is the current shell.nix for a Ruby on Rails project that wasn’t touched for 5 years. I’m in the process of setting up a reproducible development environment to get development going again. As you can see I have to jump through hoops to get Nokogiri play nicely.

                                        There is also a German blog post with shell.nix examples in case you need inspiration.

                                    3. 4

                                      This example, perhaps. I recently contributed to a Python 2 code base, and running it locally was very difficult due to C library dependencies. The best I could do at the time was a Dockerfile (which I contributed with my changes) to encapsulate the environment. However, even from the container standpoint, fetching dependencies is still just as nebulous as “just apt install xyz.” Changes to the base image, to an ambiently available dependency, or the distro simply turning off package-manager services for unsupported versions will break the container build. In the nix case, the user is more or less forced to spell out completely what the code needs; combine that with flakes and I have a lockfile not only for my Python dependencies, but effectively for the entire shell environment.

                                      More concretely, at work, the powers that be wanted to deploy Python to an old armv7 SoC running on a device. Some of the Python code requires C dependencies like OpenSSL, the protobuf runtime, and other things, and it was hard to cross-compile all this for the target. Yes, for development it works as you describe: you just use a venv, pip install (pipenv, poetry, or whatever), and everything is peachy. Then comes deployment:

                                      1. First you need a cross-compiled Python interpreter, which involves building the interpreter for your host triple, then rebuilding the same source for the target triple, making sure to tell the build process where the host-triple build is. This also ignores that some important parts of the interpreter, like ctypes, may not build.
                                      2. Learn every environment variable you need to expose to setup.py or the n-teenth build/packaging solution for the Python project you want to deploy, and hope it generates a wheel. We will conveniently ignore how every C-depending package may use cmake, or make, or meson, etc., etc…
                                      3. Make the wheels available to the image you actually ship.

                                      I was able to crap out a proof-of-concept in a small nix expression that made a shell that ran the python interpreter I wanted with the python dependencies needed on both the host and the target and didn’t even have to think. Nixpkgs even gives you cross compiling capabilities.

                                      1. 1

                                        Your suggested plan is two years out of date, because CPython 2.7 is officially past its end of life and Python 2 packages are generally no longer supported by upstream developers. This is the power of Nix: Old software continues to be available, as if bitrot were extremely delayed.

                                        1. 3

                                          CPython 2.7 is available in Debian stable (even testing and sid!), CentOS, and RHEL. Even on macOS it is still the default python that ships with the system. I don’t know why you think it is no longer available in any distro other than Nix.

                                    1. 44

                                      I’m really surprised by the amount of negativity here. This post shows a problem and a solution. It’s short and sweet and demonstrates a concrete application of Nix.

                                      Saying that you can do this with Docker, you can do this with virtualenv, you can rewrite the entire script in Python 3 – what are those statements adding? The article does not claim this is the only way to do this or this is the single morally correct way. The article says “I had a problem and I solved it like this.” Of course there are other ways to solve the same problem. It’s worth discussing the relative merits of different approaches to the same problem, but remember what that problem is: I found a script on the internet and I want to run it.

                                      Anyway: great first post @maxdeviant; hope to see you around more. If the response here did not just turn you off to lobsters completely.

                                      Starting with the Python scaffold script makes this look, I think, a little more intimidating than it has to be. A shell.nix file is great if you’re sharing this with other people, but for just running a one-off script, you can get by with an even-more-ad-hoc approach:

                                      nix-shell -p graphviz 'python2.withPackages (pkgs: [ pkgs.psycopg2 ])'

                                      Since you don’t need all the rest of that Python scaffolding to run this script.

                                      You could also put the withPackages bit in the script’s shebang, if you were likely to run it multiple times, so you don’t have to remember all of its dependencies later if you come back to it. (Of course, a shell.nix file gets you the same thing.)

                                      #!/usr/bin/env nix-shell
                                      #!nix-shell -i python -p "python2.withPackages (pkgs: [ pkgs.psycopg2 ])"

                                      (Splitting this across two lines is required on macOS, but probably not on Linux. Also for some reason you have to use double quotes; nix-shell -i can’t deal with single quotes.)

                                      It’s also worth noting, maybe, that this doesn’t work with nixos-unstable right now. You can pretty cheaply pin by adding -I nixpkgs=https://nixos.org/channels/nixos-21.05/nixexprs.tar.xz to that nix-shell invocation (er, to the second one, in the shebanged version).

                                      1. 11

                                        Anyway: great first post @maxdeviant; hope to see you around more. If the response here did not just turn you off to lobsters completely.

                                        Thanks, I appreciate the kind words!

                                        And I haven’t been turned off completely. I’ll just remember to post articles with a bit more substance the next time around 😅

                                        In truth, this was primarily a way for me to document this for myself should I need to do something similar again. But I thought it was interesting enough that folks here on Lobsters might enjoy it.

                                        A shell.nix file is great if you’re sharing this with other people, but for just running a one-off script, you can get by with an even-more-ad-hoc approach:

                                        nix-shell -p graphviz 'python2.withPackages (pkgs: [ pkgs.psycopg2 ])'

                                        This would have been a much simpler solution! I sort of figured there was a more elegant way to do this, but the shell.nix from the NixOS Wiki was the first thing that turned up in my search.

                                        1. 10

                                          This post shows a problem and a solution. It’s short and sweet and demonstrates a concrete application of Nix.

                                          Unfortunately, it is a use case where it happens to be easy. In perfect circumstances, Nix can indeed be so simple. However, it paints an overly optimistic picture and is somewhat misleading.

                                          For example, what this post does not tell you is that nixpkgs only packages one version of each Python package (with a very small number of exceptions, such as Tensorflow). There are good reasons for this – Python cannot handle multiple versions in PYTHONPATH and having multiple versions in a transitive closure of dependencies could happen easily. To avoid this issue, nixpkgs only contains one version of each Python package.

                                          If you need a version of a Python package that is not in nixpkgs (which unfortunately happens often, since Python packages rarely conform to semver), then you are off in the woods. You have to package that particular version yourself. If other packages also need the Python package, you need to overlay your package definition and hope/pray that it is compatible with the other derivations in the dependency graph. Or you need to use tools like poetry2nix (hopefully the project uses Poetry) or mach-nix. Sometimes these tools work immediately, but as you can see in the Nix forums they often require a lot of manual overrides, etc. (which in turn requires that you know Nix and nixpkgs pretty well).

                                          If you had instead used a virtual environment + pip, you could have probably gotten the version you need with a single pip install.

                                          Disclaimer: I love Nix and I have contributed hundreds of commits to the nixpkgs tree, but I think it is good to manage expectations.

                                          1. 1

                                            Can’t you simply open up a shell with the desired python version, make a venv, then source activate and pip install?

                                            1. 2

                                              Not with NixOS. It does not have a traditional file system hierarchy, so Python packages that rely on traditional paths (e.g. compiled C modules that expect libc or other dynamic libraries to be in a location that FHS dictates) will not work.

                                              You can get fairly close by emulating an FHS using e.g. buildFHSUserEnv.

                                        1. 1

                                          Global warming should always cause the Earth’s rotation to slow down, due to the shift of mass from the poles to the equator. Whatever is causing the speedup, I don’t think it is global warming.

                                          1. 3

                                            I was curious and googled it. I found a Forbes article which links to this research article. I don’t quite comprehend the language in the research article, but Forbes’ dumbed-down version is that as glaciers on and near the poles melt, there is less weight on the ground, which makes the ground “rebound,” so the Earth becomes more spherical. No doubt the effect you describe also exists – mass trapped in the form of polar ice melts and moves towards the equator – but (according to my understanding of Forbes’ explanation of the paper) that effect is smaller than the counteracting effect from the rising ground.

                                            EDIT: Here’s a non-Forbes source, although it doesn’t contain an explanation of the mechanism: https://phys.org/news/2021-01-earth-faster.html

                                            They [planetary scientists] also have begun wondering if global warming might push the Earth to spin faster as the snow caps and high-altitude snows begin disappearing.

                                            1. 1

                                              In case you’re interested, Hudson Bay is one of the most well-known examples of the “rebound” effect… https://earthobservatory.nasa.gov/images/147405/rebounding-in-hudson-bay.

                                            2. 2

                                              I was also curious about this, as that matches my intuition as well.

                                              This article is about the shift in the axis of rotation, not the speed, but it outlines some unintuitive effects of climate change on the Earth’s distribution of mass:


                                              To summarize:

                                              • The Earth is not round; the poles are flattened because the weight of the glaciers squishes them down. When the glaciers melt, the earth beneath them rises, and the planet becomes more round.
                                              • Most of the melting glaciers are in Greenland, which is 45° from the North Pole, so the mass redistribution is not as clear cut as it seems.
                                              • The mantle might be doing convection stuff that I cannot begin to understand. Cursory research implies this is a recent theory and maybe not totally accepted or understood. (Certainly not by me. I have no idea here.)

                                              Note that the article doesn’t claim any of these affect rotation speed, but they’re interesting factors for my mental model of the changing Earth. Other sources I found cite the first reason (elongating Earth) as the primary driver for the speedup of the Earth’s rotation.

                                              Another possible factor I found is that high-altitude snow and ice is melting, distributing mass closer to the center of the Earth (which, you know, angular momentum or something). It seems incredible to me that that would have a measurable effect – and I didn’t look for evidence that it does – but it’s another interesting thing that I would not have guessed.
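
                                              For a sense of scale, here is a back-of-the-envelope estimate. Every number below is my own round guess, not a figure from any of the linked articles: treat a year’s worth of melted high-altitude ice as a point mass descending from altitude h to sea level, and apply conservation of angular momentum.

                                              ```python
                                              # Back-of-the-envelope only; every input is a round guess.
                                              R = 6.37e6        # Earth radius, m
                                              h = 5.0e3         # altitude the ice descends from, m
                                              m = 1.0e15        # melted ice per year, kg (~1000 Gt, generous)
                                              I_earth = 8.0e37  # Earth's moment of inertia, kg*m^2

                                              # Moving mass m from radius R+h to R (worst case: at the equator)
                                              # shrinks the moment of inertia by roughly m * ((R+h)^2 - R^2).
                                              dI = m * ((R + h) ** 2 - R ** 2)

                                              # Conservation of angular momentum: I*w = const, so dw/w = -dI/I,
                                              # and the length of the day shortens by the same fraction.
                                              day = 86400.0  # s
                                              shortening = day * dI / I_earth

                                              print(f"day shortens by ~{shortening:.1e} s")  # tens of nanoseconds
                                              ```

                                              Tens of nanoseconds – which fits the intuition above: on its own, this effect is far below the millisecond-scale day-length variations that are actually measured.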

                                              So yeah; it’s complicated. Some aspects might slow the Earth in isolation; some might speed it up. Taken together it seems like the consensus is a net speedup.

                                              Obligatory caveat: I probably spent more time writing this comment than actually researching this topic, so this information comes with no warranty express or implied.

                                              (I accidentally replied to mort’s sibling comment, which was posted about the same time; I deleted it and re-posted it here as I had not actually read it at the time I wrote this. Perils of composing in a separate editor and pasting it in.)

                                              1. 1

                                                I have to think that moving mass from the poles to the equator is a 6,000-mile journey away from the axis of rotation, which should overwhelm elevation changes and isostatic rebound. Also, the Hudson Bay area is still rebounding slowly from glaciation, maybe a few feet over ten thousand years. The rocks in Greenland haven’t had time to rebound from the last two decades of melting.

                                            1. 14

                                              I use tmux a lot. I like tmux’s splits and windows and sessions, but the main reason I use it is for keyboard text selection and scrolling. I wrote a little about this here, with a demo of what it looks like: tmux lets you select and copy text with your keyboard. I don’t really remember what life was like without it.

                                              I don’t really get the appeal of smart cd, but I have persistent tmux sessions set up for all the projects I work on, so switching directories is usually just switching my tmux session.

                                              I wrote a little thing to hook up fzf to my shell completion. I have different triggers, like .f for “select file” or .c for “select commit,” and when I type .c<TAB> I get a little fzf popup that lets me select a commit and insert it. I know you can just enable fzf integration with shell completion globally, but this has a few advantages:

                                              • it’s much, much, much faster than shell autocomplete (especially for selecting dirty git files)
                                              • it’s deterministic: when I activate completion, I know exactly what choices I’m going to get – I am not at the mercy of whatever shell autocomplete decides I want to do
                                              • I can add new triggers without having to modify a program’s shell completion (if you have not edited these scripts, they are… truly arcane), and I don’t have to worry about missing completion files for random programs.

                                              And I can still use the default shell completion as much as I want; this just adds another option. I did this for years with global shell aliases, which have a lot of problems, but a few months ago I dove in and figured out how to hook it into regular tab-completion. It’s a little complicated, but it works great.
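
                                              The core of the trick, minus all the completion-system plumbing, is a ZLE widget along these lines. This is a simplified sketch of the idea, not the actual implementation; it assumes fzf and git are available.

                                              ```zsh
                                              # ~/.zshrc sketch of the ".c<TAB>" / ".f<TAB>" triggers; the real
                                              # version hooks into the completion system instead of rebinding TAB.
                                              fzf-trigger() {
                                                if [[ $LBUFFER == *.c ]]; then
                                                  local commit
                                                  commit=$(git log --oneline -100 | fzf --height 40% | awk '{print $1}')
                                                  [[ -n $commit ]] && LBUFFER="${LBUFFER%.c}$commit"
                                                  zle reset-prompt
                                                elif [[ $LBUFFER == *.f ]]; then
                                                  local file
                                                  file=$(fzf --height 40%)
                                                  [[ -n $file ]] && LBUFFER="${LBUFFER%.f}$file"
                                                  zle reset-prompt
                                                else
                                                  zle expand-or-complete  # fall back to normal tab completion
                                                fi
                                              }
                                              zle -N fzf-trigger
                                              bindkey '^I' fzf-trigger
                                              ```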

                                              I really like delta; the goto-next-file-header thing is a brilliant hack. But I had to do a lot of work to get output that I liked.

                                              Having nix installed provides a cool super-power that I use a lot: temporary installation of a program. If I heard of something and want to try it out, I can run nix-shell -p fd (for example) and play around with it. And then it “disappears” as soon as I close that shell, so I don’t have to remember to uninstall it. Lowers the barrier of trying random stuff out.

                                              I made a hacky thing to organize all of the scripts in my ~/bin into a single hierarchical command that’s defined by its directory structure, and that automatically parses bash comments to display help lines in autocomplete. (The ability to easily add new subcommands makes nix’s CLI a lot more palatable…)

                                              ngrok is a really nice niche tool if you want to show your local test server to someone else (I’ve used it for sharing blog drafts, for example). There’s a paid version, mostly to combat spam, I think, but the free version is sufficient for anything I would ever do, and it doesn’t require an account or anything.

                                              terminal-notifier is nice if you’re on a Mac… often I’ll run some long command, ctrl-z, then fg ; terminal-notifier -message "done $?". I actually have an alias for that because I do it often enough.

                                              A little different than the rest, but I use iTerm’s “Hotkey Window” almost exclusively. I’m often tabbing between a web browser and a text editor, and keeping my terminal window out of the ⌘Tab history means I never switch to it by accident because I just happened to run git status.

                                              I have more, but I think that’s enough for now…

                                              1. 2

                                                I’m in a very similar boat regarding persistent tmux sessions; I find them to be an excellent way to maintain structured environments. I just have some simple scripts to set up a session, or attach to it if it already exists.

                                                I love shell completion for the things where it works, but it’s frustrating when it doesn’t. Your .c fzf commit shortcut sounds like something I could really use, for the reasons you state: I far prefer to be explicit about what I’m looking for, so having control over exactly what I’m completing would be perfect. Looking forward to playing with it, thanks!

                                                1. 3

                                                  I agree on the persistent sessions but it really bugs me that tmux conflates two sets of functionality:

                                                  • A pseudoterminal that I can disconnect from and reconnect to.
                                                  • A thing that takes over control of my terminal emulator and provides its own windowing system

                                                  My terminal emulator does a really good job at being a terminal emulator. I don’t need something else to replace that. I’ve switched to using abduco instead of tmux because subsequent versions of tmux made it harder and harder and eventually impossible to tell it not to break my terminal’s scrollback. I’m not completely convinced that abduco is an improvement: it doesn’t seem to properly buffer state when I disconnect and reconnect and it manages to break command-line editing in some terminals.

                                                  I’d love to have a reliable thing that just opened a pty and forwarded everything to it, buffering when necessary, without any feature creep.

                                                  The macOS Terminal provides an environment variable containing a UUID and if you force-quit it then it restarts with all of the windows in the same position and with the same UUID in that environment variable. I use a wrapper around autossh that drops a file in a known location named with this UUID and containing the remote host name. When the terminal restarts, it automatically reconnects my sessions, so I can reboot my local machine and not lose any state. The Windows terminal and konsole do not preserve the UUID across restarts so I have to run abduco -l on the remote machine after a restart to find the sessions I need to reconnect to. I think the macOS terminal copied this feature from iTerm; I wish other terminal emulators would copy it too.
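
                                                  Such a wrapper might look roughly like this. The state directory and file format are my guesses, not the actual script; TERM_SESSION_ID is the variable the macOS Terminal really does set.

                                                  ```shell
                                                  # Sketch of an autossh wrapper that records which host this
                                                  # terminal window is connected to, keyed by the window's UUID.
                                                  record_session() {
                                                      state_dir="${HOME}/.ssh-sessions"
                                                      mkdir -p "$state_dir"
                                                      # macOS Terminal sets TERM_SESSION_ID to a per-window UUID
                                                      # that survives a force-quit-and-restore cycle.
                                                      if [ -n "$TERM_SESSION_ID" ]; then
                                                          printf '%s\n' "$1" > "$state_dir/$TERM_SESSION_ID"
                                                      fi
                                                  }

                                                  # Usage: record the host, then hand off to autossh:
                                                  #   record_session "$1" && exec autossh "$@"
                                                  ```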

                                                  1. 2

                                                    I think the “buffering when necessary” would be kinda tricky. Like, you could run some graphical program like vim without switching to an alt screen, and it would have to buffer every intermediate drawing character in order to re-create the final state if you attached a new client. I think? Maybe there’s a smarter thing you could do.

                                                    I am also bothered by tmux – especially the copy mode that I love so much. I wish that I could compose my environment out of smaller building blocks – so a session manager, running a multiplexer, with each “window” running a scrollback-buffering-pty-with-text-selection. Then I could swap out one “layer” for a better alternative without needing to re-write all of tmux. ptys are cheap! I’m currently working on an experiment to write a new “bottom layer” – the text selection bit – as an excuse to learn Rust, and fix some of the ergonomic complaints I have with tmux.

                                                    1. 1

                                                      I am also bothered by tmux – especially the copy mode that I love so much. I wish that I could compose my environment out of smaller building blocks

                                                      Hmmm… tmux was originally written to replace the BSD program window(1), which just performed window management in a terminal. Separate tools could be used for copying and so on.

                                                      1. 1

                                                        Anything like this is doing some buffering: It maintains two connections to PTYs, one that talks to the clients (e.g. vim, ls, whatever) the other that talks to the real terminal emulator (or the next step in the chain if, for example, ssh is between it and the real terminal emulator). It reads from one PTY and writes to the other, sitting in a loop. The only extra thing necessary is that if the terminal disconnects then it needs to store any messages that it gets somewhere (ideally in an in-memory buffer that spills to the disk above a certain size).
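
                                                        The core of that loop might be sketched like this – simplified to one direction, with a plain in-memory buffer and no spilling to disk:

                                                        ```python
                                                        import os
                                                        import select

                                                        def pump(client_fd, terminal_fd, buffer, terminal_connected, timeout=0.0):
                                                            """Forward one chunk from the client pty toward the terminal.

                                                            If the terminal has disconnected, stash the bytes in `buffer`
                                                            instead (a real implementation would spill to disk past some
                                                            size threshold).
                                                            """
                                                            readable, _, _ = select.select([client_fd], [], [], timeout)
                                                            if client_fd not in readable:
                                                                return b""
                                                            data = os.read(client_fd, 4096)
                                                            if terminal_connected:
                                                                os.write(terminal_fd, data)
                                                            else:
                                                                buffer.extend(data)
                                                            return data
                                                        ```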

                                                  2. 1

                                                    tmux lets you select and copy text with your keyboard.

                                                    I love this feature – or at least I did until it stopped working for me on sway/wl-clipboard. I think it was a recent tmux update. My attempt to bisect tmux to find where it broke led to it mysteriously working for about a week, then breaking again. I still have no clue why it doesn’t work for me anymore :(

                                                    1. 1

                                                      tmux 3.3 added the copy-command option which makes it easier to hook into the system clipboard… but I don’t know why that would have broken anything. If you aren’t using a custom copy-pipe (or whatever) command, tmux prints escape codes if it detects that your terminal emulator supports them. Maybe something changed with that detection, or your terminal emulator? You can force it on by adding set -g set-clipboard on to your ~/.tmux.conf. But if you have 3.3 you could try set -g copy-command "wl-copy".

                                                  1. 3

                                                    Ha, I’m amused that my email to you sparked a whole blog post. Thanks for putting the work in.

                                                    Unfortunately I’m just finding that Nix has such a steep learning curve, with so much hidden complexity, that I just don’t want to learn it. The “Basic package management” section of the manual, which comes right after installation, outlines commands that apparently one should never use. This section is about as far as I read before thinking I had a good enough grasp on the tools to “get going” with Nix, and I can see many people doing similarly.

                                                    Some combination of the syntax, the conceptual complexity and the poor documentation is just telling me “This isn’t worth it. Just use Debian stable if you want stability, or Arch if you want cutting-edge.”

                                                    By the way, the reason I want to install Python 3 in my environment generally is that I use it quite often as a shell scripting language instead of Bash. I don’t manage my ad-hoc shell scripts via Nix; they just sit in ~/.local/bin, so there needs to be a Python in my environment.

                                                    1. 3

                                                      I think that your experience is a very common one.

                                                      Some people come to Nix via a mentor – a friend or colleague who understands it, and is able to offer helpful advice for new users. Especially: this is the subset of Nix you should actually use; don’t use nix-env -i or nix-env -u. Don’t read the manual: it is full of lies.

                                                      And some people come on their own, expecting to teach themselves Nix from scratch. This is a much more difficult journey, given the current state of the manual. And while there are hundreds of unofficial resources out there that explain how to use Nix “correctly,” some of which might be quite good, it’s hard to find those or know which are correct or complete or up-to-date. This is further complicated by the fact that many “real” Nix users are using the unstable 2.4 release of Nix, which adds lots of nice features and fixes lots of annoying problems – and many guides assume that you are also using a pre-release Nix. Which you aren’t, if you just followed the official installation instructions, and which you probably don’t want to use and probably don’t even know how to install.

                                                      Unfortunately a lot of this comes from the manual’s emphasis on using Nix like a traditional package manager. But Nix offers very little marginal improvement to that use case, and adds lots of rough edges, as you encountered. The actual value of Nix comes from the things it can do that other package managers cannot: per-project dependencies, ad-hoc environments, conflict-free installation of multiple versions, atomic rollbacks, blah blah blah. These are things you might not care much about until you try them, but they are the reason to put up with all of the complexity and rough edges.

                                                      But it takes a lot of knowledge and practice and frustration before you’re able to benefit from these weird features. And the manual is no help: it doesn’t really emphasize these use-cases, and instead (mostly) treats Nix like just another package manager.

                                                      All of that is to say: Debian and Arch are in fact valid alternatives to that part of Nix. But not to the Nix-unique features. But why would you stick around long enough to find out?

                                                      So, yeah, I’m just rambling now. Thank you for sharing your experience; it makes perfect sense to me why you would not want to continue learning Nix. Hopefully one day soon this will get better, and Nix will be easier to teach yourself, and the manual will not lead new users astray.

                                                      1. 2

                                                        Yes, I hope for those things too!

                                                        I really do think Nix has good intentions and, as you point out, solves a bunch of other non-package-manager-y things. I’m excited to use it down the line, when it is more usable.

                                                    1. 4

                                                      I feel like the author tends to panic and overcomplicate their justifications, rather than relaxing a little and modifying their desires. It is quite acceptable for a request for Python 3 to give the wrong minor version but the right major version, I think. Their prior posts suggest that they have all the knowledge required to incant:

                                                      $ nix-shell -p python38

                                                      This is mentioned in the Python section of the manual:

                                                      It is also possible to refer to specific versions, e.g. python38 refers to CPython 3.8, and pypy refers to the default PyPy interpreter.

                                                      The author is wrong in their claim that this will bite new users; new users are generally asked to use a stable release, where the CPython 3.x version is 3.8.8, and the author is only bitten by this because they’re using an unstable branch.

                                                      1. 2

                                                        I feel like the author tends to panic and overcomplicate their justifications, rather than relaxing a little and modifying their desires.

                                                        My desire is, well, to learn about Nix, and this was a valuable journey for me in learning more about package name disambiguation. I don’t use nix-env -u anymore, because I know that it doesn’t work, and I have no problems using a Nix-installed Python myself.

                                                        (I think there’s a very good argument to be made that package name-based operations are so broken that trying to smooth over papercuts like this is pointless, and we would be much better served by just getting rid of nix-env -u and unqualified nix-env -i altogether. But that ignores that the point of this post is to practice writing overrides :)

                                                        The author is wrong in their claim that this will bite new users

                                                        Well, this bit me when I was a new user, and I wrote this in response to an email from another new user who ran into the same problem (but did not yet understand how to fix it). So it’s so far bitten at least two new Nix users, and I think I would stand by that weaker paraphrased claim. My actual claim was very hyperbolic:

                                                        every single new Nix user is going to run that first thing because that’s the “obvious” command and it’s what the quick start guide told them to run and on and on and on.

                                                        Which I think is pretty easily refutable. (e.g. not all new Nix users will install python3, not all new Nix users read the Nix manual, etc)

                                                        the author is only bitten by this because they’re using an unstable branch

                                                        I’m not sure why you say that. This is the case in the latest stable NixOS channel:

                                                        $ nix-env -i python3 --file https://nixos.org/channels/nixos-21.05/nixexprs.tar.xz
                                                        installing 'python3-3.10.0a5'

                                                        (Although I may have misunderstood what you meant.)

                                                        1. 1

                                                          Your shell snippet is disingenuous; it is not loading the 21.05 release channel. Here’s a REPL session:

                                                          $ nix repl https://nixos.org/channels/nixos-21.05/nixexprs.tar.xz
                                                          Welcome to Nix version 2.3.10. Type :? for help.
                                                          Loading 'https://nixos.org/channels/nixos-21.05/nixexprs.tar.xz'...
                                                          Added 14198 variables.
                                                          nix-repl> python3.version
                                                          nix-repl> python38.version

                                                          My system channel is on 20.09 and retrieves CPython 3.8.8, explaining the discrepancy between my original post and this one. I don’t understand how your channels are configured, but the usage of nix-env is going to continue to confound your experiments.

                                                          1. 2

                                                            Can you explain what I got wrong? I cannot find the error in my shell snippet.

                                                            You are commenting on a post about the name-based disambiguation performed by nix-env -i and nix-env -u. I’m not sure what you mean by “the usage of nix-env is going to continue to confound your experiments.” No other Nix commands operate on package names, as far as I know.

                                                            I would paraphrase the post in this short way: “nix-env -i python3 is different from nix-env -iA nixpkgs.python3.” Your snippet demonstrates the second half of that statement: that nixpkgs.python3 is a stable release of CPython. The original post points this out in multiple places. My snippet in the GP demonstrates the first half: the package “named” python3 (following nix-env’s disambiguation rules) is a (broken) pre-release.

                                                        2. 1

                                                          Why is he using an unstable channel?

                                                          1. 5

                                                            Great question! nixpkgs-unstable is the default channel that you get when you install Nix. Although NixOS by default uses a “stable” release, if you just install nix on a non-NixOS machine, you’re living the unstable life.

                                                            It’s worth noting that the python3 versioning issue is present in the latest stable NixOS release as well; there is nothing in the source article that is dependent on using the (default) unstable channel, despite claims to the contrary.

                                                            1. 1

                                                              Without mind-reading, I can only guess that they wanted it and chose it. I’ve made the same choice before, and often run directly from the master branch of the main git repository, which is even more unstable. But we can’t ignore the choice; by default, new users are given a much more stable release branch which is vetted and tested, and the author chose a different experience.

                                                              1. 1

                                                                I thought maybe it was mentioned somewhere in the series and I just couldn’t find it.

                                                                It might also just be a lack of guidance in the installation documents, since, if you don’t really know what the channels mean, you might think unstable is like using a non-LTS Ubuntu.

                                                          1. 1

                                                            Awesome post, but frightening.

                                                            The code sample after the line “, which means that if you ever hit enter without typing a command:” confuses me, because for me it displays exactly correctly. Is that an HTML issue, or am I overlooking something?

                                                            ~/src ➜ echo hi
                                                            ~/src ➜
                                                            ~/src ➜
                                                            ~/src ➜ echo bye
                                                            1. 1

                                                              Er, yeah, this could be more clear in the post.

                                                              The issue is that, on the empty prompt lines, the newline occurs immediately after the arrow. On the lines with actual commands, there is a space after the arrow. The PS1 contains the space, but tmux (or maybe zsh! I don’t know) chomps it off when you hit enter, for some reason.

                                                              1. 1

                                                                Yeah, that’s what I got from the text; then I selected it and inspected the source, which confused me. Overthinking it… but maybe it would be helpful to add some sort of placeholder to show ‘SPACE HERE’ :)

                                                            1. 2

                                                              Funny, I built exactly the same thing using neovim’s built-in terminal emulator instead of tmux, and it suffers from exactly the same issues (except for multiline prompts, which work fine)!

                                                              The only thing I can think of that would allow making this feature without ugly hacks would be to have terminal escape sequences that let the shell notify the terminal about the different sections being printed. Making that work would certainly require immense amounts of synchronization between shell and terminal developers though, so I’m not surprised nobody’s done it yet.

                                                              1. 1

                                                                That’s what iTerm does, actually. I haven’t tried it, so I’m not sure how well it works, but it sounds like it can do a bunch of cool stuff:


                                                                I have another idea, which is to write my own shell TUI and run it as a full-screen program, instead of tmux + my shell’s native interactive mode. Run each command within a pty… it’s a bit of a nuclear option, but I’d finally get to make my tab completion work just the way I want it…

                                                              1. 2

                                                                Won’t this result in non-random grouping of results, because the 2D positions are static? E.g. if I execute this query once and observe rows A and B near each other in the results, then I would expect to see A and B near each other in another batch of results.

                                                                1. 1

                                                                  Yeah, I really feel like I’m missing something here. It seems you’d need to rebuild the index constantly to shuffle the points. This might be “random enough” for some applications but it’s very very different from the original inefficient query.

                                                                  Choosing three random points and unioning all of the single nearest neighbors to those points might give a uniform distribution, but intuitively that still seems incorrect to me — I would expect bias towards more “central” points. But someone who knows actual math might be able to explain if/why that would be a uniform distribution.

                                                                  It can’t be, though, right? A point near a corner is going to show up much less frequently than a point near the center. Each point has a probability equal to the area of its Voronoi cell, which given enough points will be approximately the same size, but not actually the same size.
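
                                                                  A quick simulation illustrates the bias (three hand-picked points, not anyone’s real data): with two corners and the center of the unit square, the center’s Voronoi cell covers 75% of the area, so it wins roughly six times as often as either corner.

                                                                  ```python
                                                                  import random

                                                                  # Three fixed "rows": two corners and the center of the unit square.
                                                                  # Picking the single nearest neighbor of a uniform random query point
                                                                  # selects each row with probability equal to its Voronoi cell's area.
                                                                  points = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]

                                                                  def nearest(q):
                                                                      return min(range(len(points)),
                                                                                 key=lambda i: (points[i][0] - q[0]) ** 2
                                                                                             + (points[i][1] - q[1]) ** 2)

                                                                  def sample_counts(trials, seed=0):
                                                                      rng = random.Random(seed)
                                                                      counts = [0] * len(points)
                                                                      for _ in range(trials):
                                                                          counts[nearest((rng.random(), rng.random()))] += 1
                                                                      return counts

                                                                  print(sample_counts(100_000))  # center ~75%, each corner ~12.5%
                                                                  ```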

                                                                1. 4

                                                                  I’m going to try writing my first Rust program. A sort of org-babel knockoff, but based around markdown info strings instead of #+BEGIN_SRC because I’m too old to change.

                                                                  This idea started as “re-implement cram, but markdown, so that you can group tests with ##headers and only re-run a subset of tests.” But the scope has sort of ballooned in my head, and I hope to get it to the point that it can ease the writing of my Nix nonsense, which currently involves a lot of copying and pasting from my terminal. Certainly won’t get that far this weekend. But hopefully I can learn what a str is. Baby steps.

                                                                  Also, I built a little lap desk this week out of cork+plywood+sheet steel+leather, to which my keyboard and wrist rests can attach magnetically. (I have a very lightweight split keyboard, and I was sick of it slipping around on my normal lap desk and messing up the spacing/angle, and now it stays exactly where I want it.) But this has opened up a pandora’s box of Fun Magnetic Accessories, so I expect to waste some time this weekend carving a little cable organizer, a mount for a task light, and probably other silly things I don’t really need.

                                                                  1. 9

                                                                    The README doesn’t really describe what makes this different from something like screen or tmux, although it implies that it is.

                                                                    The website describes what seems to be a difference:


                                                                    With an example of a plug-in. But it doesn’t make it clear what the difference is between running a plug-in and running a “full screen” executable.

                                                                    Zellij uses WebAssembly and WASI in order to load these panes and give them access to the host machine, so they can be written in any compiled language.

                                                                    That’s the only differentiator I can find, and it’s kind of a strange one, since you can already run programs written in any compiled language natively.

                                                                    I assume that plug-ins have some control over multiple panes/allow some kind of multi-process coordination that distinguishes them, but this isn’t touted as a feature anywhere. So I’m not sure. Is there a provided UI toolkit that makes it very easy to write terminal UIs, maybe?

                                                                    I am happy to see a new terminal multiplexer, but since this project doesn’t bill itself as such, I spent a few minutes trying to understand why that is, assuming I was missing something. But I just came up confused. Declarative pane setup is a nice feature for a terminal multiplexer: although tmux is scriptable imperatively, you need a plug-in if you want the “yaml layout” model. But it’s still a feature for a terminal multiplexer.


                                                                    If anyone from the project is reading this, a little section on “why it’s more than a multiplexer” or “differences between zellij and tmux/screen” (or whatever) would be very helpful for people encountering the project blind like this.

                                                                    1. 4

                                                                      That’s the only differentiator I can find, and it’s kind of a strange one, since you can already run programs written in any compiled language natively.

                                                                      What “pane access” means is that these programs get access to terminal state like scrollback while other programs are running as usual in the terminal. This can be used to extend the terminal to add features like:

                                                                      1. Copy selected terminal text to clipboard as HTML with color codes
                                                                      2. Extract all URLs from scrollback, save to clipboard

                                                                      It sounds very useful and hacker friendly to me. Kitty has had a similar feature: https://sw.kovidgoyal.net/kitty/kittens/custom.html

                                                                      When launching a kitten, kitty will open an overlay window over the current window and optionally pass the contents of the current window/scrollback to the kitten over its STDIN. The kitten can then perform whatever actions it likes, just as a normal terminal program. After execution of the kitten is complete, it has access to the running kitty instance so it can perform arbitrary actions such as closing windows, pasting text, etc.

                                                                      The differentiator here is that you can write Zellij plugins in any language that compiles to WASM, while Kittens are python scripts.

                                                                      1. 2

                                                                        That does sound very useful, but the things you described are also available in tmux and screen. They work by providing commands that output relevant pieces of scrollback or your selection to stdout, so you can compose tmux with standard filters or fzf or something like tmux thumbs (which, despite the name, operates on arbitrary text via stdin). You can do this from within tmux (via keybindings or the :command line), or from outside of a tmux session, by interacting with it as a server (which the tmux executable does by default, if you give it a command).

                                                                        This means you can “script” tmux in whatever language you want – rather than having a plugin architecture, it thinks in terms of stdin/stdout, which feels very hacker friendly to me – although I realize this is a personal preference.
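For a concrete instance of that composition style (the pane target and the URL regex are just illustrative, not anything tmux ships):

```shell
# Dump the last 200 lines of the current pane's scrollback to stdout,
# pull out anything URL-shaped, and de-duplicate -- plain pipes, no
# plugin machinery involved.
tmux capture-pane -p -S -200 | grep -oE 'https?://[^ ]+' | sort -u

# The same thing works from outside a session, because the tmux binary
# talks to the server: just name the target pane explicitly.
tmux capture-pane -p -t mysession:0.0 -S -200 | grep -oE 'https?://[^ ]+' | sort -u
```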

                                                                        (Actually, I’m lying: as far as I know tmux doesn’t have a way to extract the current selection while preserving color escape codes. If you want to preserve escape codes, I think you have to output entire lines of the scrollback buffer. This would make it very annoying to implement (1) in tmux as a simple series of pipes; you’d either need to patch tmux or write a script that queries the scrollback contents and selection indices and do the slicing yourself. Which would maybe end up looking similar to a zellij plugin?)

                                                                        I’m not trying to be a shill for tmux here – I really am happy to see a new terminal multiplexer. The reason I want to dig into it to see what makes it tick is that I’m currently working on my own “terminal workspace” program (which is not quite a multiplexer) and I’m looking for good ideas to steal. :)

                                                                      2. 2

                                                                        […] what makes this different from something like screen or tmux

                                                                        At a superficial layer, having the powerline-style cheat “menu” showing shortcuts, and sane shortcut defaults set up out of the box, is a big enough difference that a few people at my office have started using zellij regularly where their efforts to get into tmux/screen were tenuous at best.

                                                                        1. 1

                                                                          Does it support copy/paste yet? I don’t see that in the docs or the code so I’m assuming not.

                                                                          1. 1

                                                                            I couldn’t say for sure. In my limited testing of it thus far, I’ve only used the terminal emulator for copy/paste.

                                                                      1. 3

                                                                        I prefer tmux to screen also, but in fairness, screen has been able to copy/paste since I started using it, which wasn’t even in the current millennium, so it’s not that much of a differentiator.

                                                                        1. 1

                                                                          Oh yeah, for sure. I definitely didn’t mean to imply that this is an advantage of tmux over screen, or a reason to switch to tmux. A surprising number of people got very defensive about screen on the Other Link-Sharing Site. I just mention that because I was genuinely surprised: the article never mentions screen, and I don’t think that saying nice things about tmux is an implicit put-down of screen, but a lot of people seemed to take it that way. I think 80% of developers I’ve worked with have never used either, and I wanted to encourage them to give it a shot.

                                                                          1. 2

                                                                            Agreed! I guess I took it as implicitly “use this instead of screen” because I assumed everyone is using one or the other, but you’re right, there are probably a lot of people these days who have never heard of either. :)

                                                                        1. 8

                                                                          If anyone else was trying to follow along and couldn’t figure out how to actually copy the selected text: you press return.

                                                                          1. 4

                                                                            Oh wow, thanks. I updated the post to include this – I didn’t realize tmux didn’t bind y by default. That’s crazy town.

                                                                          1. 1

                                                                            What I really want with tmux is to go into a mode where I can interactively select one of the lines in my scrollback and then paste it. (Imagine that you’ve just run git status and you want to invoke your editor on one of the files that was listed.) It turns out that the OP wrote a separate blog post showing you how to do something similar, which I can probably adapt to get exactly the behavior I want. Awesome!

                                                                            1. 4

                                                                              You can get tmux to do that:

                                                                              bind -T copy-mode-vi Enter  if -F "#{selection_present}" { send -X copy-selection-and-cancel } { send -X copy-line }

                                                                              That will copy the selection if you have one, or copy the line your cursor is on if you don’t. So you enter copy-mode, move your cursor onto the line, and hit copy. It doesn’t highlight the entire line until you hit copy, but shrug.

                                                                              (This is similar to pressing V and then Enter with the default keybindings.)

                                                                            1. 3

                                                                              Author is clearly enlightened and has EDITOR or VISUAL set to vim, but has therefore deluded himself into thinking tmux is of equally respectable comportment: he suggests it is possible to manually opt into emacs keybindings for tmux, but that is actually the default unless a vim-compatible value is detected for one of those two variables.

                                                                              (It’s actually clever and I wish more software would detect default keybindings thusly, but obviously the right choice would be to default to vim-compatible unless emacs were detected!)

                                                                              1. 3

                                                                                Well, the example config I was discussing has an explicit set -g mode-keys vi call, so if you’re starting with that config (as the paragraph in question presumed) you do have to opt into emacs keys. An earlier draft of this post explained tmux’s default behavior, but I decided it was too much noise since “most” people want vi keys and some people don’t set their EDITOR.

                                                                              1. 2

                                                                                It’s interesting how the bottleneck of understanding seemed to be around the concept of relations. I’ve seen this when explaining relational logic to others, and I imagine that I must have experienced it myself at some point.

                                                                                1. 2

                                                                                  Yeah; it was interesting. I was familiar with the concept from my undergrad logic classes – I could have defined relations, going into this. But it was quite a leap to figuring out how to “wield” them in this context. It was hard to shift into that declarative mindset. It was like… I knew what an axe was, but I’d never actually had to fell a tree before. I feel like I must have had similar trouble the first time I tried to write complex SQL queries – how do I write a for loop here? – but I’ve mostly forgotten what that was like.

                                                                                  1. 1

                                                                                    Honestly, really gnarly SQL queries make my head swell in ways a lot of other code doesn’t. Set based thinking can be tricky. (For context, one of the largest stored procs at work is like 8k loc.)

                                                                                    Though that’s due partly to the relatively low level of abstraction in SQL and partly to how nasty some queries get.

                                                                                1. 3

                                                                                  I have an idea that I am constantly restarting, but never finishing in any useable state:

                                                                                  Local development reverse proxy that works with the *.localhost TLD and allows dynamic registration of services, with or without HTTPS support. I need to finally get it into a somewhat usable state one day.

                                                                                  The general concept is simple: provide an HTTP reverse proxy on port 80 and a TCP reverse proxy (with optional TLS termination) on port 443, so you can test your applications against a “real” domain. Right now I am simulating such a workflow via HAProxy, but it is very limited in capabilities (for example, no dynamic service registration, and a needlessly complicated configuration). Maybe next week (I am between two jobs) I will finally try to smash something together.
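Just to make the port-80 half of that concrete, here is a minimal sketch of dynamic registration plus host-based routing (the registry shape, names, and ports are all made up; the TCP/TLS side on 443 and ACME are the actual hard parts):

```python
# Sketch of a *.localhost reverse proxy: a dynamic registration table
# mapping service names to backends, plus a handler that routes by the
# Host header. All names and ports here are invented for illustration.
import http.server
import urllib.request

REGISTRY = {}  # "app.localhost" -> "http://127.0.0.1:3000"

def register(name, backend):
    """Register a service so that <name>.localhost routes to `backend`."""
    REGISTRY[name + ".localhost"] = backend

def resolve(host):
    """Strip any :port from the Host header and look up the backend."""
    return REGISTRY.get(host.split(":")[0])

class Proxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        backend = resolve(self.headers.get("Host", ""))
        if backend is None:
            self.send_error(502, "unknown *.localhost service")
            return
        # Forward the request and relay status, headers, and body.
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            for key, value in resp.getheaders():
                self.send_header(key, value)
            self.end_headers()
            self.wfile.write(body)

register("app", "http://127.0.0.1:3000")
print(resolve("app.localhost:8080"))  # -> http://127.0.0.1:3000

# To actually listen on port 80 you would run (needs privileges):
#   http.server.HTTPServer(("127.0.0.1", 80), Proxy).serve_forever()
```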

                                                                                  1. 3

                                                                                    That’s a cool project! In case you hadn’t heard of it, I wanted to mention https://ngrok.com/ – it sounds like an implementation of what you’re describing. It’s not open source anymore – although an early version is? – but it’s free (no signup or anything required) to use in modest capacity. Apparently it rate limits you to 40 connections/minute? I’ve used it for years as a way to share dumb side projects with friends. Super handy.

                                                                                    1. 3

                                                                                      Ngrok serves a different purpose: it is meant to be a reverse tunnel for exposing locally running services externally, while my idea is to work fully locally.

                                                                                      1. 2

                                                                                        I had tried building out something ngrok-y at one point, and during market research found it. It’s such a good idea, and deserves all the success it can get.

                                                                                      2. 1

                                                                                        Back in the day, I used to use http://pow.cx/manual.html for this when I was working on Rails projects. It’s not exactly what you’re describing, but there might be some inspiration to be had.

                                                                                        1. 1

                                                                                          It is my inspiration as well. I just want to extend it with TLS and use a proper domain instead of *.dev.

                                                                                        2. 1

                                                                                          Caddy does exactly what you have described as of version 2: it generates and registers a CA to sign certs locally for *.localhost domains, and supports dynamic service registration.
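For reference, the local-HTTPS part is just a couple of lines of Caddyfile (the backend address here is invented):

```caddyfile
app.localhost {
	reverse_proxy 127.0.0.1:3000
}
```

Caddy issues a certificate for app.localhost from its internal CA automatically, and the running config can be changed at runtime through the admin API (on localhost:2019 by default).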

                                                                                          1. 1

                                                                                            I was looking at Caddy 2, but I haven’t found a way to do dynamic registration/deregistration of services. Additionally, Caddy cannot work as a TCP proxy and will always act as a TLS-terminating proxy, which I want to avoid if possible, as I want my application to be fully self-contained (so it manages TLS termination and certificate generation via ACME on its own). That is why I am using HAProxy with a manual config instead of Caddy right now.

                                                                                          2. 1

                                                                                            Sorry if I’m wrong, but wouldn’t this be as simple as adding your chosen domain name to your /etc/hosts file? I already do this myself, naming my server IPs so I can easily access them without remembering the whole address.

                                                                                            However, I don’t know if that would work with a *.localhost TLD…

                                                                                            1. 2

                                                                                              There are a few points:

                                                                                              • on systemd-enabled systems, *.localhost already resolves to loopback
                                                                                              • it fails when you want to test custom subdomains, for example for multi-tenant applications (or you need to create a new entry for each domain)
                                                                                              • you still need to run some additional proxy/web server to be able to listen on 80/443, instead of always specifying a new port (or remembering which port belongs to which application)
                                                                                              • this still requires me to somehow handle creating and maintaining TLS certificates, which I wanted to handle “automatically” within the tool itself

                                                                                              For servers I am running DNS in my lab, so I can access all of them without remembering IPs as well.

                                                                                              1. 2

                                                                                                Alright, a bit more difficult than I imagined.

                                                                                          1. 3

                                                                                            I tried to build a little “IoT” device: a physical knob that would just POST its value (0 to 1) to a little web endpoint whenever you turned the knob. Then anyone could view the value of the knob by going to that website. Mostly for the juvenile wordplay that this enabled (“knob” being a mild vulgarism, at least in the US).

                                                                                            I didn’t know anything about making physical things, so I got something that I thought would make it as “easy” as possible: a Spark Core (since discontinued, google tells me), because I assumed it would be really hard to get Wi-Fi working by myself.

                                                                                            But I didn’t realize, before I bought it, that the only way to load code onto the Spark Core was to use their weird online compiler thing, and the device would actually download the compiled image over Wi-Fi and like flash itself (??). There was no way to just flash from my computer, even though it had a USB thing. That sounds insane, and my memory might be tricking me, but I definitely remember having to use a little web-based “IDE” thing and I could not find a local alternative.

                                                                                            Anyway, if you were writing your own program for this thing – and you didn’t know what you were doing – it was very easy to like lock the device by doing your own I/O in a tight loop, which would prevent you from loading new code onto it – a bug that would prevent you from fixing the bug. So you had to do a hard reset, and for some reason that I can’t remember anymore, that was extremely painful…

                                                                                            Saying this out loud, this doesn’t sound that bad, but I remember being so frustrated by the experience that I haven’t tried to do anything in the physical world since. This was probably… six years ago now? I know that it was really just the fault of the device I was using, and googling this now tells me that its successor does have a way to load code directly onto it (using… Node.js??) so maybe this isn’t so hard anymore.

                                                                                            I’ve always wanted to go back and finish this, because I still giggle at the domain I had registered, but I got so discouraged that I gave away my breadboard/accessories in a recent move. Any pointers, if I were going to pick this back up? (Alternatives that don’t require me to think about Wi-Fi or use JavaScript?)

                                                                                            1. 3

                                                                                              The ESP8266 is a reasonable little bugger with just enough connectivity to do this.

                                                                                              1. 3

                                                                                                MicroPython might be a good match, or Espressif’s SDK is pretty easy to work with if you like C.