1. 3

    I don’t think this proposal makes a useful improvement on SQL personally. But I do desperately want a well-adopted SQL shorthand.

    Here are some of the projects I’m watching in that vein:

    1. 2

      I’ve been looking at Morel (talk here)

      1. 1

        Ah yeah that’s a cool one too. Though as much as I love SML I wouldn’t watch that project in the same way until he spins it out into a standalone thing that can be implemented as a query-language library for different languages.

        1. 1

          Malloy is as evil-looking to me as the OP proposal is. EdgeDB is pretty interesting now that I look at it but still seems more complicated than SQL.


            Curious: what makes you feel like EdgeQL is more complicated?


              If it’s not fair to say it’s more complicated (since I’m biased by already knowing SQL), it’s at least no less complicated or verbose than SQL, judging from the examples in your tutorial.

              I’m most likely not your audience anyway. The particular thing I’m excited about in a shorthand SQL is more easily defining application auth policies.


                Tutorial examples are necessarily trivial, so the difference in verbosity is not that visible. It’s when your queries get more complex that EdgeQL starts to shine. We’ll soon post a more elaborate “EdgeQL vs SQL” page to illustrate this better. Also, check out this talk by Yury with some good examples [1].

                The particular thing I’m excited about in a shorthand SQL is more easily defining application auth policies.

                We are working on just that right now [2] (though note it’s an early draft and things will likely change a lot).

                [1] https://youtu.be/CmXB5xqEENs?t=437 [2] https://github.com/edgedb/rfcs/blob/865bc48f4050ced99447bd77a5039f5d34fcb8b2/text/1011-query-rewrite.rst


                  Ah sorry I don’t mean postgres-level policies. If you look at https://www.dbcore.org/ and scroll down to the Authorization section you’ll see declarative policies for API endpoints that are backed by a combination of database queries and injected request variables.

                  I’m not actively working on that anymore and I wasn’t going to write my own SQL shorthand (well I thought about it for a second and decided against it). But I’d still like to see that sort of declarative application-level auth policies happen with a language that can combine SQL queries and injected request context.


                    Ah sorry I don’t mean postgres-level policies

                    Neither did I :-)

                    combine SQL queries and injected request context.

                    This is exactly what we are looking to implement with the “Query rewrite” proposal. First of all, access rules are conditional, and, more importantly, you can combine them with globals [1] to get that “request context dependent” filtering.

                    [1] https://github.com/edgedb/rfcs/blob/043acb494ed8e1c4f72dbd0fcf7c00a0b8439624/text/1010-global-vars.rst


                      Gotcha, then I guess I didn’t understand from that first RFC you shared how you’re thinking it would be used by application developers.


                        The idea is that you set your access policies in the schema (because, like with everything else, you want correctness and migration support). These policies may depend on context variables. For example:

                        type Post {
                            link author -> User;
                            access policy permit read using (.author.id = global current_user_id);
                        }

                        Then, in application code (likely in some middleware), you configure the database client to send the appropriate globals:

                        client = client.with_globals(current_user_id = user_id_from_jwt)
                        # Only Posts authored by `current_user_id` are visible
                        result = client.query('SELECT Post')

                        Something like that.


                          Gotcha! Yeah that looks nice.

                          But my desire for a SQL shorthand also extends to wanting to be able to use it selectively. i.e. I just want to write vanilla SQL in my CRUD app for everything except the policies. But again I think I’m not your audience.


                            EdgeQL is a lot more than an SQL shorthand for some things. If you are a heavy SQL user and you like it, but perhaps wish for the query language to be a tad less verbose and a bit more modern, then maybe you’ll find reasons to love EdgeQL too :-)


                              Is there a library to translate EdgeQL into SQL?


                                Well, that’s what EdgeDB does. It takes your schema and your EdgeQL, and produces PostgreSQL queries in a 1:1 fashion. That is, every EdgeQL query is exactly one SQL query regardless of complexity (including things you can’t express in SQL easily, like correlated nested DML). Theoretically you can call the query compiler directly and execute the returned SQL manually, however it’s not clear how that would be of benefit as opposed to using the EdgeDB query I/O.


          Thanks for these links. Lately I’ve also been thinking about how I can optimize my work, as I need to write a lot of exploratory SQL queries involving a lot of joins, in a system without any foreign keys defined (!). Most of the join conditions are fairly static, but there should be a more succinct way of expressing joins.

          HTSQL looks quite promising. I was also looking into Cypher and cytosm, but that project looks unmaintained…


            Yeah. I’m watching in vain. And hoping for something to take off…

        1. 24

          This is a pretty strange article. I like and use FreeBSD, but I couldn’t find their own motivation for the change in the text. All these lists are pretty generic – what I was expecting was to see where Linux didn’t fit their use case and why FreeBSD does.

          1. 4

            I agree. It’s not very technical either and it would have been nice if there were actual, relevant comparisons.

            Something I’ve seen is actually not really OS-related - at least not that I know of - but for some reason FreeBSD does an excellent job of providing the latest software packages while keeping stability.

            In Linux land you usually have to choose. Do you want an old package, do you want to add some official repository, do you want to have some less stable rolling-release thing? And it gets even harder when combining packages (PostgreSQL and PostGIS being a famous example), which is why Docker is much more needed, in my opinion.

            On FreeBSD I can say I want Postgres 11 with PostGIS 3.2 and I get it, without self-compiling, without third party packages. Or let’s say I want nginx with certain compile options or third party modules. I can just pkg install it and it works.

            That’s something that feels very strange to me, because one would imagine that their user count really would make a difference and strongly favor Linux. But in reality you have something Debian- or RedHat-based that’s usually very out of date, unless you add third-party repositories which bring in their own problems, or you have something like Arch, whose official repositories are small and whose AUR is large but highly unstable, which doesn’t provide configuration options and forces whatever is the latest release onto you. So PostgreSQL 11 with PostGIS 3 isn’t an option.

            This tends to really baffle me. I know it’s part of why Docker picked up, but it feels like an oversized hack for something that obviously can be solved, even with a comparatively small number of developers. I would really love to see something comparable in the Linux world. And no, snap and flatpak aren’t really solutions here.


              It’s also easy in FreeBSD to build a custom package set if you do want to build from source (for example, to enable extra security mitigations that aren’t the default or disable an optional dependency that increases your attack surface but doesn’t add features that you’re using). The tool that builds the entire ports tree to produce packages is open source and it can also be used to create VM disk images that contain freshly-built packages (and, optionally, a freshly built source tree if you want some custom options there) and other things that you’ve built separately. If you’re deploying VMs, then it’s easy to have a ‘git ops’ workflow where pushes to your repo trigger Poudriere to build a new VM image with the latest package versions and so on and to aggressively customise this (for example, excluding bits of the base system that you don’t want, such as the toolchain).

          1. 2

            This is a bit older, from 2021. Looks like this was suggested because it trended on the orange website?

            1. 3

              That’s where I found it, yes, but I suggested it because of the content. Pretty good overview of the steps they take to understand where the leaks are.

            1. 4

              Really great approach! The whole post showcases a quite pragmatic approach to the problem they had and how they optimized for it. My favorite sentence is at the end:

              The library works for our target platforms, and we don’t wish to take on extra complexity that is of no benefit to us.

              A truly pragmatic approach: it works for us; if you want, fork and modify to oblivion, we give you a starting point. Wish more software were handled like this and didn’t include everything but the kitchen sink.

              1. 0

                More and more people are fortunately adopting the suckless philosophy.

                1. 15

                  Unfortunately, the suckless philosophy leads to software that doesn’t do what software is meant to do - alleviate human work by making the machine do it instead. The suckless philosophy of “simplicity” translates to “simplistic software that doesn’t do what you would need it to do, so either you waste time re-implementing features that other people have already written (or would have written), or you do the task by hand instead”.

                  If, when required to choose exactly one of “change the software to reduce user effort” and “make the code prettier”, you consistently choose the latter, you’re probably not a software engineer - you’re an artist, making beautiful code art, for viewing purposes only. You definitely aren’t writing programs for other people to use, whether you think you are or not.

                  This philosophy (and the results) are why I switched away from dmenu to rofi - because rofi provided effort-saving features that were valuable to me out-of-the-box, while dmenu did not. (I used dmenu for probably a year - but it only took me a few minutes with rofi to realize its value and switch.) rofi is more valuable as a tool, as an effort-saving device, than dmenu is, or ever will be.

                  In other words - the suckless philosophy, when followed, actively makes computers less useful as tools (because, among other things, computers are communication and collaboration devices, and the suckless philosophy excludes large collaborative projects that meet the needs of many users). This is fine if your purpose is to make art, or to only fulfill your own needs - just make sure that you clearly state to other potential users that your software is not meant for them to make use of.

                  Also note that the most-used (and useful) tools follow exactly the opposite of the suckless philosophy - Firefox, Chromium, Windows, Libreoffice, Emacs, VSCode, Syncthing, qmk, fish, gcc, llvm, rustc/cargo, Blender, Krita, GIMP, Audacity, OBS, VLC, KiCad, JetBrains stuff, and more - not just the ones that are easy to use, but also ones that are hard to use (Emacs, VSCode, Audacity, Blender, JetBrains) but meet everyone’s needs, and are extensible by themselves (as opposed to requiring integration with other simple CLI programs).

                  There’s a reason for this - these programs are more useful to more people than anything suckless-like (or built using the Unix philosophy, which shares many of the same weaknesses). So, if you’re writing software exclusively for yourself, suckless is great - but if you’re writing software for other people (either as an open-source project, or as a commercial tool), it sucks.

                  To top it off, people writing open-source software using the suckless philosophy aren’t contributing to non-suckless projects that are useful to other people - so the rest of the open-source community loses out, too.

                  1. 2

                    It’s fine to disagree with “the suckless philosophy”, but do please try to be nice to people gifting their work as FLOSS.

                    1. 11

                      The tone is not dissimilar from the language on the page it is replying to: https://suckless.org/philosophy/

                      Many (open source) hackers are proud if they achieve large amounts of code, because they believe the more lines of code they’ve written, the more progress they have made. The more progress they have made, the more skilled they are. This is simply a delusion.

                      Most hackers actually don’t care much about code quality. Thus, if they get something working which seems to solve a problem, they stick with it. If this kind of software development is applied to the same source code throughout its entire life-cycle, we’re left with large amounts of code, a totally screwed code structure, and a flawed system design. This is because of a lack of conceptual clarity and integrity in the development process.

                      1. 1

                        I wasn’t actually trying to copy the style of that page, but I guess that I kind of did anyway.

                        @JoachimSchipper (does this work on Lobsters?) My reply wasn’t meant to be unkind to the developer. I’m frustrated, in the same sense that you might be frustrated with a fellow computer scientist who thought that requiring all functions to have a number of lines of code that was divisible by three would improve performance, an idea which is both obviously wrong, and very harmful if spread.

                        …but I wasn’t trying to impart any ill will, malice, or personal attack toward FRIGN, just give my arguments for why the suckless philosophy is counter-productive.

                    2. 1

                      Some people insist on understanding how their tool works.

                      I have a few friends like that - they are profoundly bothered by functionality they don’t understand and go to lengths to avoid using anything they deem too complex to learn.

                      The rest of open source isn’t ‘losing out’ when they work elsewhere - they weren’t going to contribute to a ‘complex’ project anyways, because that’s not what they enjoy doing.

                      1. -4

                        Nice rant

                        1. 5

                          You throw advertising for suckless in the room as the true way, link to your website with the tone described here and then complain about responses? That doesn’t reflect well on the project.

                          1. 0

                            What are you talking about? There is no one single true way, as it heavily depends on what your goals are (as always). If your goals align with the suckless philosophy, it is useful to apply it, however, if your goals differ, feel free to do whatever you want.

                            I don’t feel like replying exhaustively to posts that are ad hominem, which is why I only (justifiably) marked it as a rant. We can discuss suckless.org’s projects in another context, but here it’s only about the suckless philosophy of simplicity and minimalism. I am actually surprised that people even argue for more software complexity, and SLOC as a measure of achievement is problematic. In no way does the suckless philosophy exclude you from submitting to other projects, and it has nothing to do with pretty code. You can do pretty complex things with well-structured and readable code.

                            1. 4

                              While you may find it insulting, I don’t think the comment is ad hominem at all. It’s not saying that the suckless philosophy is defective because of who created/adopts/argues for it. That would be an attack ad hominem. The comment is saying the suckless philosophy is defective because software that adheres to it is less useful than other software, some of which was then listed in the comment.

                              While it’s (obviously) perfectly fine to feel insulted by a comment that says your philosophy produces less-than-useful software, that does not make the comment either a rant or an ad hominem attack.

                              It’s a valid criticism to say that over-simplifying software creates more human work, and a valid observation that software which creates more human work doesn’t do what software is meant to do. And you dismissed that valid criticism with “Nice rant.” If you don’t feel like responding exhaustively, no one can blame you. But I view “nice rant” as a less-than-constructive contribution to the conversation, and it appears that other participants do as well.

                              1. 2

                                My post was neither a rant, nor ad-hominem. You should be a bit more thoughtful before applying those labels to posts.

                                It’s clearly not an ad-hominem because nowhere in my post did I attack your character or background, or use personal insults. My post exclusively addressed issues with the suckless philosophy, which is not a person, and is definitely not you.

                                It’s also (more controversially) probably not a rant, because it relies on some actual empirical evidence (the list of extremely popular programs that do the opposite of the suckless philosophy, and the lack of popularity among programs that adhere to it), is well-structured, is broken up into several points that you can refute or accept, had parts of it re-written several times to be more logical and avoid the appearance of attacks on your person, and takes care to define terminology and context and not just say things like “suckless is bad” - for instance, the sentence “So, if you’re writing software exclusively for yourself, suckless is great - but if you’re writing software for other people (either as an open-source project, or as a commercial tool), it sucks” specifically describes the purpose for which the philosophy is suboptimal.

                                It also had at least three orders of magnitude more effort put into it than your two-word dismissal.

                            2. 4

                              Rather than labeling it a rant, wouldn’t it be more persuasive to address some of the points raised? Can you offer a counterpoint to the points raised in the first or last paragraphs?

                              1. 2

                                It is definitely a rant, as the points addressed are usually based on the question of which norms you apply. Is your goal to serve as many people as possible, or is your goal to provide tools that are easy to combine for those willing to learn how to?

                                In an ideal world computer users would, considering the time they spend with them, strive to understand the computer as a tool to be learnt. However, it is becoming more and more prevalent that people don’t want to, and expect to be supported along the way for everything non-trivial. We all know many cases where OSS developers have quit over this, because the time demand of responding to half-assed PRs and feature requests is too high.

                                @fouric’s post went a bit ad-hominem regarding “software engineers” vs. “artists”, but I smiled about it as my post probably hit a nerve somewhere. I would never call myself a “software engineer”.

                                Hundreds of thousands of people are using suckless software, knowingly or unknowingly. Calling us “artists” makes it sound like we were an esolang-community. We aren’t. We are hackers striving for simplicity.

                                I could also go into the security tangent, but that should already be obvious.

                                1. 1

                                  It is definitely a rant, as the points addressed are usually based on the question of which norms you apply.

                                  …norms which I addressed - I was very clear to describe what purposes the suckless philosophy is bad at fulfilling. That is - my comment explicitly describes the “norms” it is concerned with, so by your own logic, it is not a rant - and certainly contains more discussion than any of your further replies, which do nothing to refute any of its points.

                                  @fouric’s post went a bit ad-hominem regarding “software engineers” vs. “artists”

                                  Nor is there any ad-hominem contained in the comment, and certainly nothing around my differentiation between “software engineers” and “artists” - nowhere did I make any personal attack on your “character or motivations”, or “appeal to the emotions rather than to logic or reason”, as a quick online search reveals for the phrase “ad-hominem”.

                                  I smiled about it as my post probably hit a nerve somewhere.

                                  If you want to refute any of the points I made, you’re welcome to do it. However, if you just want to try to arouse anger and smile when you do so, I’ll ask you to stop responding to my posts, especially with a two-word dismissal that doesn’t address any of my arguments.

                                  I would never call myself a “software engineer”

                                  …then it should be even more clear that my post was not a personal attack or ad-hominem against you. “You keep using that word…”

                          2. 4

                            How does the suckless philosophy interact with feature flags?

                            1. 2

                              Suckless would say you should only have features you use, and instead of feature flags you have optionally-applied patches

                        1. 22

                          alternatively: branches and commits are cheap… make a new branch, and commit often. then go back and clean things up before submitting a patch or merging to some ‘main’ branch…
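                          e.g., the whole flow is just a handful of commands - sketched here with a scratch repo so it can be run as-is (the repo setup and branch names are only illustrative):

                          ```shell
                          # scratch repo so the sketch is self-contained
                          cd "$(mktemp -d)" && git init -q -b main
                          git config user.email you@example.com && git config user.name you
                          git commit -q --allow-empty -m "init"

                          git switch -c wip                     # cheap throwaway branch
                          echo "step 1" > notes.txt
                          git add notes.txt && git commit -qm "wip: step 1"   # commit whenever something works
                          echo "step 2" >> notes.txt
                          git commit -aqm "wip: step 2"
                          # ...later, clean up before merging:
                          #   git rebase -i main                # squash/reword the wip commits
                          #   git switch main && git merge wip
                          git rev-list --count main..HEAD       # -> 2 wip commits waiting to be cleaned up
                          ```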

                          1. 21

                            Branching and committing often is tedious, which this tool automates.

                            1. 18

                              I kinda disagree. It’s tedious in the same way that organizing your life in general is tedious… you’re moving the cost up front instead of having to deal with the cost of disorganization later

                              1. 21

                                Any advice that reduces to “just use willpower/discipline” is essentially non-advice for many people.

                                1. 1

                                  The tool in the article also needs some commands issued beforehand for it to work, is that willpower/discipline too? I guess it’s useless for most people then.

                                  1. 1

                                    Once it’s set up, it’s set it and forget it. Very different.

                                  2. 1

                                    Okay, but I’m talking about the GP’s concrete advice. There are a lot of small things you can do that aren’t “willpower”-related to improve your organization, including getting and using a planner and putting things in your calendar. If you have a lot of paper, get a file drawer or switch to a reMarkable and make sure to put documents in the right place. Don’t put all your files on your desktop; use an organizational structure. Close a tab when you have over 5 open, or use tab groups. All of these things have very much improved my organization and helped with my rather severe ADHD. It’s not about willpower, it’s about finding concrete little things that are easy to do and improve the structure of your life.

                                    1. 8

                                      How is using a calendar or a planner not something that requires willpower? Do you have any idea how many times, and how many different strategies for managing notes/appointments/tasks, I have tried?

                                      I should have finished reading before commenting. Literally everything you mention requires willpower to work. Yes, once habits are formed, it gets easy, but it’s never cost-free.

                                      1. 3

                                        including getting and using a planner, putting things in your calendar.

                                        This is the definition of something that requires willpower.

                                        1. 1

                                          I definitely agree that small steps make things that require discipline manageable! I have ADHD too, and my planner, calendar, and todo tracker are invaluable.

                                          But the practices of GTD and Building A Second Brain both support the idea of one big inbox that you sort through later. In this case, dura is the inbox.

                                          Another tool of ADHD management is a notebook that serves as your short-term memory. Dura is the notebook.

                                          1. 2

                                            yes! i have ADHD too, which is a big reason why dura seemed so reasonable to sink a few days into building

                                      2. 3

                                        I’m not sure the comparison is apt. Organizing one’s life requires a completely different set of skills than switching to a terminal and typing some commands to select all updated files and commit them with(out) a meaningful message. Also, I’m failing to see how having a tool commit for me, vs. manually committing changes often, moves any cost. In the end, I need to clean up a sequence of intermediate commits into a good, submittable change request (here, I’m referring to the original poster’s suggestion to commit often).

                                      3. 3

                                        Here is how I do it:

                                        1. Commit everything on a branch dev/andy. Whenever I get something working, i.e. make a test pass, I commit. Then git diff becomes a useful aid – what’s changed since something last worked? – and I can experiment without worrying.

                                        2. Then git rebase -i master when I have a logical piece of work that should land on the main branch.

                                        3. Then ./local.sh git-merge-to-master which does this:

                                        git-merge-to-master() {
                                          local branch=$(git rev-parse --abbrev-ref HEAD)  # find current branch
                                          git checkout master
                                          git merge $branch  # merge my work into master
                                          git push
                                          git checkout $branch  # so I continue working on the branch
                                        }
                                        Is there an advantage to using this dura tool? git can be tedious but it’s also very easily automated with shell.

                                        1. 5

                                          dura is invisible (as long as you don’t look at the massive number of branches it creates). It makes the commits without touching any existing files (the only exception is that it updates the dura-* reference). I’m unsure if your solution would cause friction with existing tools, but I like my backup tools to be reliable. Hard to say if dura is better than your solution, but it seems to me like it is.

                                          The reason I made it is because I forget to commit. Or rather, I don’t want to mess up my pretty lineage of commits so I avoid committing (yes, I know it’s irrational, but it’s what I do). I wanted something 100% automated so that I don’t have to contend with my own psychology.
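                                          For the curious, one way to get the same effect by hand (dura’s actual implementation may differ) is `git stash create`, which builds a commit of the dirty working tree without touching HEAD, the index, or any files; scratch repo below only so the sketch runs as-is:

                                          ```shell
                                          # scratch repo so the sketch is self-contained
                                          cd "$(mktemp -d)" && git init -q -b main
                                          git config user.email you@example.com && git config user.name you
                                          echo "v1" > file.txt && git add file.txt && git commit -qm "init"

                                          echo "v2" > file.txt                  # uncommitted work we'd hate to lose
                                          snap=$(git stash create "auto snapshot")   # commit object; nothing else moves
                                          [ -n "$snap" ] && git update-ref refs/heads/dura-backup "$snap"

                                          git show dura-backup:file.txt         # prints "v2" - safely committed
                                          git status --porcelain                # file.txt still shows as modified
                                          ```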

                                          I see this potentially being installed by IT departments on fresh laptop images. It needs some work to get it to that point, but from a company’s perspective it’s a no-brainer. Save lost time, save money.

                                        2. 1

                                          git branch foo-changes

                                          … then just:

                                          git add . && git commit -m "boop $(date +'%Y%m%d-%H%M')"


                                      1. 8

                                        Each key except h, j, k, and l is layered. The mappings are here:


                                        i.e. My w layer will open apps

                                        w+k = open safari (browser)
                                        w+l = open vs code (code)
                                        w+j = open iTerm (terminal)

                                        My e layer is CMD key

                                        e+k is CMD+k
                                        e+w is CMD+w

                                        My . key will insert code fast for me, mostly logging

                                        So in JS mode, .+a will insert console.log()

                                        This is just a glimpse of course; there are 800+ lines of these kinds of configs. Most keys are mapped in this way.

                                        If you want to read the config, it uses Goku which has a Tutorial.

                                        1. 2

                                          It’s pretty amazing that you built everything in Karabiner. I’ve only dipped my toes in it, and I remap caps-lock to “escape when pressed, hyper when held”.

                                        1. 11

                                          Do we have a tag to filter out web3/blockchain/nft/crypto bullshit and suggest it to posts like these? “web” does not quite cover this…

                                          1. 7

                                            merkle-trees was made exactly for that

                                            1. 2

                                              Well, if someone’s overly specific, that’ll filter out git posts too…

                                              1. 2

                                                That would be an incorrect tag, because discussions related to git should be tagged with vcs - “Git and other version control systems”.

                                          1. 4

                                            Been thinking about standardizing on asdf+direnv. Could anyone offer a quick comparison?

                                            It sounds like Nix can also build your containers for you based on your project definition?

                                            1. 6

                                              asdf works fine for pinning runtimes until you have system libraries, etc. that extensions link against which aren’t versioned with asdf. Then you’re back in the same boat as you are with brew, etc., where upgrading might break existing workdirs.
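                                              For comparison, the asdf+direnv setup in question is just two small per-project files; the versions here are examples, and `use asdf` comes from the asdf-direnv plugin:

                                              ```
                                              # .tool-versions - asdf pins language runtimes per directory
                                              python 3.11.4
                                              nodejs 20.9.0

                                              # .envrc - direnv activates them (plus any env vars) on cd
                                              use asdf
                                              export DATABASE_URL=postgres://localhost/dev
                                              ```

                                              Note that nothing in these files pins the system libraries those runtimes link against, which is exactly the gap described above and where Nix goes further.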

                                              1. 3

                                                It sounds like Nix can also build your containers for you based on your project definition?

                                                Yep, basically just something like this. Lots of assumptions baked in, including that you “want” to containerize the hello world program GNU ships, but eh, it’s an example:

                                                $ cat default.nix
                                                { pkgs ? import <nixpkgs> { system = "x86_64-linux"; } }:
                                                pkgs.dockerTools.buildImage {
                                                  name = "containername";
                                                  config = {
                                                    Cmd = [ "${pkgs.hello}/bin/hello" ];
                                                  };
                                                }
                                                $ nix-build default.nix
                                                # lengthy output omitted
                                                $ docker load < result
                                                259994eca12e: Loading layer [==================================================>]  34.04MB/34.04MB
                                                Loaded image: containername:zvrzzl5vlbjdbjz8wmy8w4dv905zra1j
                                                $ docker run containername:zvrzzl5vlbjdbjz8wmy8w4dv905zra1j
                                                Hello, world!

                                                There are caveats to using the Docker builds (you can’t build on macOS), and you’ll need to learn the Nix programming language at some point, but IME it’s a rather droll affair once you get that it’s all just data and functions. And before you ask why the image is so big: the short answer is that everything that hello defined it depends on is included, which includes jq/pigz/jshon/perl/moreutils etc… for some reason. But it’s basically lifted straight out of the Nix store verbatim.

                                                1. 1

                                                  everything that hello defined it depends on is included, which includes jq/pigz/jshon/perl/moreutils etc… for some reason

                                                  I recognise this list. These are the dependencies used in the shell scripts which build the Docker image. They shouldn’t be included in the image itself.

                                                  1. 2

                                                    They won’t be included in the image if unused.

                                                    1. 2

                                                      Have I just been building docker images wrong then this whole time?

                                                      1. 2

                                                        Yup. Nix is a fantastic way to build Docker images. For example https://gitlab.com/kevincox/dontsayit-api/-/blob/46cbc50038dfd3d76fee2e458a4503c646b8ff2c/default.nix#L23-35 (an older project, but a good example because it has more than just a single binary) creates an image with:


                                                        Of course, if I used musl libc, then glibc and its dependencies would go away automatically.

                                                        What’s better is that if you use buildLayeredImage, each of these is a separate layer, so rebuilding, for example, the word list or the binary doesn’t require rebuilding the other layers. (This is actually better than Docker itself, because Docker only supports linear layering: you would have to decide whether the word list or the binary is the top layer, and rebuilding the lower one would force a rebuild of the higher one.)
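
                                                        A minimal sketch of what that looks like (the image name and contents here are my own placeholders, not from the project linked above):

                                                        ```nix
                                                        # Hypothetical example: with buildLayeredImage each store path
                                                        # can land in its own layer, so rebuilding one input doesn't
                                                        # invalidate the layers for the others.
                                                        { pkgs ? import <nixpkgs> {} }:
                                                        pkgs.dockerTools.buildLayeredImage {
                                                          name = "myapp";                      # placeholder image name
                                                          contents = [ pkgs.hello ];           # placeholder contents
                                                          config.Cmd = [ "${pkgs.hello}/bin/hello" ];
                                                        }
                                                        ```

                                                        Built with nix-build and loaded with docker load, same as the buildImage example upthread.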

                                                2. 2

                                                  It sounds like Nix can also build your containers for you based on your project definition?

                                                  There is also https://nixery.dev/, which lets you build a container with the necessary tools simply by naming them in the image path. For example:

                                                  docker run -ti nixery.dev/shell/git/htop bash

                                                  This will drop you into a container that has a shell, git, and htop.

                                                1. 5

                                                  Looks like nix-shell is a handy tool to provide project-level reproducibility, and I think it’s much more useful compared to NixOS, which seems to offer workstation-level reproducibility. Most people replace their workstations very infrequently, and when they do, chances are that there exists some migration assistant (e.g. Time Machine or perhaps dd(1)). I don’t think I need to “deploy” my workstation-level configuration anywhere; it’s only meant for me to begin with.

                                                  Tangentially, as a research assistant, I need to share a gigantic computing cluster with everyone affiliated with the university, and I don’t think I can convince the sysadmin into installing Nix on it (especially when the installer is a scary curl -L https://nixos.org/nix/install | sh). I know a root-level /nix directory is required to make use of the binary cache since the absolute path to dynamic libraries is embedded in the cached binaries, but there must be some workaround. Like, why not just scan the relocation table of each cached binary and replace the prefix of the paths?

                                                  1. 12

                                                    I’m currently using NixOS as my main daily driver. The advantage for me isn’t workspace migrations, but workspace versioning. There is a lot of cruft that builds up on workspaces over time: packages that you download to try out, work-arounds that stick, etc. These are all versioned & documented in the git log. It also lets me do stupid things without having to really worry about the consequences on my machine, as I can roll back the changes easily.

                                                    1. 1

                                                      I tried to configure the whole workspace with Nix on macOS, but it turns out that I cannot even install Firefox. This gives me the impression that nixpkgs is currently not mature/popular enough (at least on macOS), and that at some point I will be forced to install Homebrew/MacPorts/Anaconda or run curl shiny.tool/install.sh | sh to get some niche package, and suddenly I have packages outside of version control.

                                                      Also, nix-env -qaP some_package is already ridiculously slow, and with more packages in the repository, it will probably become even slower. More importantly, even a huge package repository cannot include everything, so from time to time users must write Nix expressions themselves, which I don’t think would be trivial (if it was, then Nix would have already automated that).

                                                      I’m not complaining, but that’s the reason I’m not bold enough to use Nix as my daily driver. I guess I should donate some money to Nix to facilitate its growth.

                                                      1. 5

                                                        I don’t disagree with the facts that you wrote, but I thought I’d comment since I cash some of them out differently… For a little context, I first took the dive into NixOS in early 2018, when my Windows desktop’s motherboard flamed out. It was rocky (a mix of Nix, plus it being my first desktop Linux), but I started using nix and nix-darwin when I replaced my macbook air in early 2019.

                                                        I tried to configure the whole workspace with Nix on macOS, but it turns out that I cannot even install Firefox. This gives me the impression that nixpkgs is currently not mature/popular enough (at least on macOS), and that at some point I will be forced to install Homebrew

                                                        1. A linux-first package manager’s ability to manage all software including desktop apps on macOS is a very stringent ruler. (Don’t get me wrong–it’ll be good if it works some day–but it’s a high bar, and Nix can be very useful without clearing it.)

                                                        2. Yes–keep homebrew. TBH, I think forcing desktop apps, many of which already want to auto-update, into the Nix paradigm is a bit weird. I, to be frank, find Nix on macOS, with nix-darwin, to be a very nice compromise over the purity of these apps on NixOS.

                                                          I actually find it more ergonomic to let these apps update as they have new security patches, and update my Nixpkgs channel less frequently. Since early 2019, I think I’ve only twice used Homebrew to install a package–I’ve used it almost exclusively for casks, and the packages were really just to play with something that was in Homebrew to decide if it was worth porting to Nix (neither was). Once again, I’d say the freedom to do this is a really nice compromise over purity in NixOS.

                                                          suddenly I have packages outside of version control.

                                                          You can still version-control a .Brewfile if it is for apps. It’s obviously not the same level of reproducibility, but if I’m trying to rebuild the software I had 3 years ago I’m generally not doing it for Chrome, Firefox, Steam, etc. I added a section to my backup script to barf out a report on whether I have installed anything with brew that isn’t in my brewfile. If I really cared, I think I could script it to uninstall those packages every day to force the issue.

                                                          If it’s for a smaller package and you care about version-controlled reproducibility this much, you’ll generally have enough motivation to port it to Nix. (In my experience it has been true, but I recognize that the practicality of this will depend on the scope of the package in question…)

                                                        More importantly, even a huge package repository cannot include everything, so from time to time users must write Nix expressions themselves, which I don’t think would be trivial

                                                        This is the proverbial two-edged sword. So, yes, yes, this can happen. I am personally very conservative when it comes to recommending Nix to anyone who isn’t open to learning the language. I think it can be okay, narrowly, as a tool with no knowledge of the language. Learning it can be frustrating. But:

                                                        • Nix can be a really big lever. I’m not sure if this is of much use to people who don’t program, but I feel like my time was well spent (even if it was a bigger investment than it needed to be).
                                                        • A lot of the difficulty of learning to write Nix packages has honestly just been my near-complete lack of experience with the processes for going from source to installed software in Unix-alikes. If you already know a lot about this, it’ll be mostly about the language.
                                                        • Packages aren’t all hard to write. They certainly can be nightmarish, but packages for “well-behaved” software can be fairly simple. Consider something like https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/misc/smenu/default.nix which is almost entirely metadata by volume.
                                                        • It is fairly easy to manage/integrate private packages (whether that’s from scratch, or just overrides to use PRs I took the time to submit that the upstream is ignoring). I end up writing a Nix package for most of the unreleased/private software on my system (even when I could use it without doing so).

                                                        (if it was, then Nix would have already automated that).

                                                        Have any other package managers automated the generation of package expressions? I’m not terribly knowledgeable on prior art, here. If so, you’ve probably got a point. IME most of the work of writing package expressions is human knowledge going into understanding the software’s own packaging assumptions and squaring them with Nix. I’d be a little surprised if this is highly automatable. I can imagine translation of existing packages from other languages being more tractable, but IDK.

                                                        1. 2

                                                          A lot of things that are using standard frameworks can be mostly automated. Debian/Ubuntu are doing pretty well with dh. Nix already has templates https://github.com/jonringer/nix-template

                                                          It’s not perfect, but as long as you’re using common rolling, packaging is not terribly hard.

                                                        2. 4

                                                          As to -qaP, unfortunately you learn to not do it; instead I’d recommend to use Nix 2.0’s nix search, as it has some caching (and was in fact introduced primarily to solve the problem with -qaP); and then e.g. nix-env -iA nixpkgs.some_package. Or, alternatively, maybe nix profile install, though I haven’t experimented with it yet myself.

                                                          As to getting some niche package: yes, at some point you’ll either have to do this, or write your own Nix expression to wrap it. FWIW, not every byte in the world is wrapped by Nix, and This Is Just A FOSS Project Run By Good-Willing Volunteers In Their Spare Time, and You Can (Try To) Contribute. Going down another rabbit hole trying to wrap Your Favourite Package™ in a Nix expression is probably something of a rite of passage.

                                                          I like to draw a parallel between Nix and the earlier days of Linux, before Ubuntu, when you had to put a lot of time in to have some things. A.k.a. your typical day at the bleeding edge of technology (though Nix is actually already not as bleeding as it was just a few years ago). And, actually, people say Nixpkgs is kinda at the top on https://repology.org.

                                                          But you know, at least when a package is broken on Nixpkgs, it doesn’t send your whole OS crashing & burning into some glitchy state from which you’ll not recover for the next couple of years… because as soon as you manage to get back to some working terminal & disk access (assuming things went really bad on NixOS), or in the worst case restart to GRUB, you’re just one nix-rebuild generation away from your last known good state. And with flakes, you nearly certainly even have the source of the last known good generation in a git repo.

                                                          1. 1

                                                            I think the biggest problem is that for Nix to be useful, we must achieve nearly complete package coverage, i.e. almost all packages must be installable via Nix. Covering 90% of the most popular packages is still not good enough, because even a single non-reproducible package in the whole dependency graph will ruin everything. It’s an all-or-nothing deal, and assuming package usage follows a power-law distribution, we will have a very hard time covering the last few bits. This is very different from Linux, where implementing 90% of the functionality makes a pretty useful system.

                                                            Since you mentioned Linux, I’d like to note that packages from the system repository of most distributions are outdated, and users are encouraged to install from source or download the latest release from elsewhere (e.g. on Ubuntu 20.04, you must use the project-specific “PostgreSQL Apt Repository” to install PostgreSQL 13, which was released over a year ago). I guess some people took the effort to package something they want to use, but lack the incentive to keep maintaining it. While it’s perfectly fine to sidestep apt or yum and run make && make install instead, you can never sidestep Nix because otherwise you would lose reproducibility. How can the Nix community keep nearly all packages roughly up to date? I have no clue.

                                                            1. 5

                                                              What I’m trying to answer to this is: think small and “egoistically”. Instead of thinking about how Nix is doomed from a “whole world” perspective, focus just on your own localised use case and how much reproducibility you need yourself. If you must have 100% reproducibility, it means you have enough motivation and resources to wrap the (finite number of) specific dependencies that you need and that are not yet wrapped; otherwise it’s apparently more of a would like to than a must have, i.e. you have other, higher priorities overriding it.

                                                              If the latter, you can still use Nix for those packages it provides, and add a few custom scripts over that doing a bit of curl+make or whatsit, as you’ve been doing all along until now (though once you learn to write Nix expressions for packages, you may realize they’re not really much different from that). Unless you go full NixOS (which I don’t recommend for starters), your base distro stays the same as it was (you said you don’t have root access anyway, right?) and you can still do all of what you did before.

                                                              If some parts of your system are reproducible and some are not (yet), is that worse than if none are? Or maybe it is actually an improvement? And with some luck and/or persistence, eventually others may start helping you with wrapping the “last mile” of their personal pet packages (ideally, when they start noticing the benefits, i.e. their Nix-wrapped colleagues’ projects always building successfully and reproducibly and not breaking, and thus being able to “just focus on the science/whatsit-they’re-paid-for”).

                                                              1. 2

                                                                That makes sense. IMHO Nix can and should convince package authors to wrap their own stuff in Nix. It can because the Nix language is cross-platform (this is not the case for apt/yum/brew/pkg); it should because only authors can make sure the Nix derivations are always up-to-date (with a CI/CD pipeline or something) while minimizing the risk of a supply chain attack.

                                                                1. 4

                                                                  That is not how the open-source movement works. You don’t get to tell people what they should do; rather, you take with humbleness and gratitude what they created, and try to help by contributing back (yet humbly enough to accept and respect that they might not take your contribution for some reason, while knowing you’re also free to fork). And yes, this means possibly also contributing back ideas, but with the same caveat that they may not be taken, even more often in this case, given that ideas are a dime a dozen. Notably, by contributing back some high-quality code, you might earn recognition that gives you a tiny bit more attention when sharing ideas. Ah, and/or you can also follow up on your ideas with ownership and action; this tends to have the highest chance of success (though still not 100% guaranteed).

                                                                  That said, I see this thread now as veering off on a tangent from the original topic, and as such I think I will take a break and refrain from contributing to making it a digression train (however I love digressions and however tempting this is), whether I agree or not with any further replies :) thanks, cheers and wish you great Holidays! :)

                                                              2. 2

                                                                I think the biggest problem is that for Nix to be useful, we must achieve nearly complete package coverage, i.e. almost all packages must be installable via Nix.

                                                                Why do you think so? I’m wondering why this applies to Nix but not to Homebrew or apt or yum or the likes? One can still build a package manually by setting up the dependencies in a nix shell – that’s no different from building something that package managers of other systems still don’t have.

                                                                1. 3

                                                                  From my understanding, Nix aims to provide a reproducible environment/build, so it must exhaustively know about every piece of dependency. Homebrew, apt, and yum don’t have such an ambition; they just install packages, and can thus happily co-exist with other package managers and user-installed binaries.

                                                                  1. 6

                                                                    Nix-build, yes; nix-shell, no. In a nix-shell env, you still see all of your pre-existing system, plus what nix-shell provides as an “overlay” (not in docker filesystem sense, just extra entries in PATH etc.). It reproducibly provides you the dependencies you asked it to provide, but it doesn’t guarantee reproducibility of what you do over that. So you could start with nix-shell (or nix develop IIRC in case of flakes).

                                                            2. 3

                                                              This gives me the impression that nixpkgs is currently not mature/popular enough (at least on macOS)

                                                              I have no idea about Mac, but my understanding is that on Linux the amount of packaged stuff for NixOS is just ridiculously high: https://repology.org/repositories/graphs. Anecdotally, everything I need on a daily basis is there.

                                                              Still, there are cases when you do want to try a random binary from the internet (happened to me last time when I wanted to try JetBrains Fleet first public release), and yeah, NixOS does make those cases painful.

                                                              1. 1

                                                                Nix on macOS should not even be a thing.

                                                            3. 7

                                                              NixOS […] seems to offer workstation-level reproducibility.

                                                              Don’t forget about server reproducibility! This is especially nice when you need to spin up multiple servers for a single project that need similar configuration.

                                                              1. 2

                                                                In which case I either have docker/kubernetes/podman, or I’m using ansible to be fast and productive. Sure, you may get some stuff done much more precisely in nixos, but that’s definitely not worth the hassle. That said: best of luck to NixOS, hopefully it’ll be stable enough one day.

                                                                1. 8

                                                                  Wait, what hassle? And what about it isn’t “stable?” Citation needed? It’s stable enough for every NixOS user I know on server and desktop and it’s stable enough for me. Hassle is a very vague word that could mean a lot of things, so if not for the fact that exactly zero of those possible meanings make what you said a true statement, I wouldn’t know how to respond to this. What is it about NixOS you think is a hassle?

                                                                  1. 8

                                                                    Heh, my previous company went from NixOS to “let’s deploy using ansible on Ubuntu boxes, everyone knows those”. Productivity and velocity just went down the drain. Oh, the horrors, oh the PTSD… But everyone has different experiences, sometimes some tools work, sometimes they don’t.

                                                                    1. 3

                                                                      much more precisely in nixos

                                                                      I don’t know how you could get any more precise than NixOS. It specifies everything by hash, all the way down to a linker. I’ve never seen anybody do anything like that with any other system.

                                                                  2. 5

                                                                    Looks like nix-shell is a handy tool to provide project-level reproducibility,

                                                                    Definitely. Every project I use now has a shell.nix file that pins the tools I use. I switched to that workflow after being bitten several times by brew replacing python (so virtual environments stopped working), or completely forgetting what tools I needed in a project after returning to it a year later. shell.nix acts both as a bill of materials and as a recipe for fetching the right tools.
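
                                                                    For reference, a minimal shell.nix of that shape might look like this (the pinned tools here are just placeholders; pick what your project actually needs):

                                                                    ```nix
                                                                    # shell.nix — a sketch of a per-project tool pin
                                                                    { pkgs ? import <nixpkgs> {} }:
                                                                    pkgs.mkShell {
                                                                      buildInputs = [
                                                                        pkgs.nodejs
                                                                        pkgs.python3
                                                                      ];
                                                                    }
                                                                    ```

                                                                    Running nix-shell in the project directory then drops you into an environment with exactly those tools on PATH.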

                                                                    1. 4

                                                                      For me that work-around is ‘containers’. As in nix generates the containers (reproducible!), the clusters run the containers, and in the containers it’s /nix.

                                                                      1. 2

                                                                        Check if you can get the following to print “YES”:

                                                                        $ unshare --user --pid echo YES

                                                                        or any of the following to print CONFIG_USER_NS=y (they all check the same thing IIUC, just on various distributions some commands or paths differ):

                                                                        $ zgrep CONFIG_USER_NS /proc/config.gz
                                                                        $ grep CONFIG_USER_NS /boot/config-$(uname -r)

                                                                        If so, there’s reportedly a chance you might be able to install Nix without root permissions.

                                                                        If you manage to get it, personally, I would heartily recommend trying to get into “Nix Flakes” as soon as possible. They’re theoretically still “experimental”, and even harder to find reliable documentation about than Nix itself (which is somewhat infamously not-easy already), but IMO they seem to kinda magically solve a lot of auxiliary pain points I had with “classic” Nix.

                                                                        Also, as a side note, the nix-shell stuff was apparently much earlier than NixOS. The original thesis and project was just Nix, and NixOS started later as another guy’s crazy experiment, AFAIU.

                                                                        EDIT: If that fails, you could still try playing with @ac’s experimental https://github.com/andrewchambers/p2pkgs.

                                                                        EDIT 2: Finally, although with much less fancy features, there are still some interesting encapsulation aspects in 0install.net: “0install also has some interesting features not often found in traditional package managers. For example, while it will share libraries whenever possible, it can always install multiple versions of a package in parallel when there are conflicting requirements. Installation is always side-effect-free (each package is unpacked to its own directory and will not touch shared directories such as /usr/bin), making it ideal for use with sandboxing technologies and virtualisation.”

                                                                        1. 1

                                                                          This article did sent me on search for “how do you flakes and nix-shell at the same time” and apparently there’s “nix develop” now. This link has some details for both system wide and local setups: https://www.tweag.io/blog/2020-07-31-nixos-flakes/
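
                                                                          For the curious, a minimal flake-based dev shell entered with nix develop might look roughly like this (a sketch; the package list is mine, and the devShells.<system>.default layout assumes a reasonably recent Nix, while older versions used devShell.<system> instead):

                                                                          ```nix
                                                                          # flake.nix — sketch of a per-project dev shell
                                                                          {
                                                                            inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

                                                                            outputs = { self, nixpkgs }:
                                                                              let pkgs = nixpkgs.legacyPackages.x86_64-linux;
                                                                              in {
                                                                                devShells.x86_64-linux.default = pkgs.mkShell {
                                                                                  buildInputs = [ pkgs.git pkgs.nodejs ];
                                                                                };
                                                                              };
                                                                          }
                                                                          ```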

                                                                          1. 1

                                                                            Yeah, there’s also https://nixos.wiki/wiki/Flakes, and generally that’s the issue, that you need quite some google-fu to find stuff about them :)

                                                                      1. 3

                                                                        Where to find package names

                                                                        I usually use the web search here: https://search.nixos.org/packages

                                                                        Getting shell.nix set up for projects is really nice, but I jump around a lot between different codebases in different languages where it’s often not worth it (or wanted by the owners) to set it up. The nix-shell command that I use all the time for one-offs:

                                                                        $ nix-shell -p <package-name>

                                                                        …and now I have package-name, and it’s gone when I exit the shell. Well, it’s still in my local nix-store, so it’s almost instant to nix-shell -p it again, but otherwise it isn’t really installed.

                                                                        Recent one in my shell history: nix-shell -p nodejs. Specific node version: nix-shell -p nodejs-10_x. Wanted to try some python script depending on matplotlib on a mac😱 nix-shell -p python3Packages.matplotlib. Even nix-shell -p inkscape works.

                                                                        1. 3

                                                                          I use nix-shell similarly, but never commit shell.nix to the repository. So I get the benefits of having the tools, but I don’t draw ire of other devs :)

                                                                          New nix has the nix search command. It used to be that nix search <term> would do the search, but now (nix 2.4) it’s nix search nixpkgs <term>.

                                                                        1. 1

                                                                          By mistake, I opened the document, only to realize it’s barely readable on small mobile screens. I say “by mistake” because the page is hosted on Google Docs, which I’m trying to avoid as much as possible. The topic is interesting, which is why I instinctively clicked on the link, but I couldn’t follow it due to the limitations I mentioned.

                                                                          1. 4

                                                                            Sorry, didn’t mean to “trick” you. Self-hosted PDF for you: https://uxu.se/Column%20Modules%20for%20Mechanical%20Keyboards%20211223130031.pdf

                                                                            1. 4

                                                                              Thank you for sharing the PDF! Sorry for coming off as dismissive. As I said, I’m interested in the topic, but it was really hard/impossible to read on my phone, due to the wide margins and the amount of indenting (that is a problem of google docs, I’m not sure anyone can fix that).

                                                                          1. 2

                                                                            Pretty nice! Would it be possible to support joins when working on multiple files?

                                                                            1. 1

                                                                              Yep, I just need to hook up multiple arguments to DataStation’s concept of tables. Probably in the next few weeks unless someone else hacks it in sooner!

                                                                            1. 4

                                                                              I might be severely in the wrong here, as I haven’t touched Scala in years, but this reminds me of implicits in Scala. I didn’t like them because they were implicit, and one couldn’t easily figure out where the implicit is coming from. This proposal might be an improvement, as at least one has to write use arena at the beginning, and reuse the same term in the with statement.

                                                                              1. 3

                                                                                I started using Moonlander a year ago and now I’m slowly realizing that what helps me the most isn’t the hardware, but the abilities I get from QMK. I was about to ask if Kinesis is using QMK, but then I checked their website and it seems that the Professional version is using ZMK. ZMK should be quite similar to QMK; does anyone have experience with it?

                                                                                1. 4

                                                                                  I think the ZMK’s raison d’être is Bluetooth support.

                                                                                  1. 1

                                                                                    You can make the old Kinesis work with QMK as well - but it might be a bit of an adventure. :)


                                                                                    1. 1

                                                                                      ZMK is BSD licensed (I think? Not GPL, anyways), so it can be linked to a proprietary vendor Bluetooth driver blob, which is presumably why they chose it.

                                                                                      But, last I heard, it doesn’t support macros, mouse keys, or tap dance, unlike QMK. So that’s a bit too disappointing for me to consider it. I’m sticking with my ergodox.

                                                                                      1. 2

                                                                                        MIT, but I think you’re right regarding the bluetooth driver.

                                                                                    1. 17
                                                                                      • “Imagine if Doctors yelled at their staff as much as Chefs do”
                                                                                      • “Imagine if Programmers spent time outside as much as Gardeners do”
                                                                                      • “Imagine if Car mechanics washed their hands as much as Doctors do”

                                                                                      – Every vocation has their own modes of behaviour and comparing them is just futile.

                                                                                      1. 3

                                                                                        Did you read the article?

                                                                                        1. 5

                                                                                          Yes, it’s a good article, thanks for sharing! Sorry, I should’ve been clear that I’m showing disdain for the phrase, not the content of the article.

                                                                                          1. 2

                                                                                            All good! Shoulda thrown a :p into my reply

                                                                                            1. 3

                                                                                              There should be a feature on lobsters that the background of every post is the photo of the person at the moment of writing the comment :)

                                                                                        2. 3

                                                                                          Running a tyrannical kitchen is counterproductive. The food service industry as a whole has terrible practices rooted in status segregation and poor economics; the kind of anger that famous chefs are famous for is just part of a cycle of abuse.

                                                                                          Also, some doctors do yell at their staff, and the result is that people with self-respect and options leave to go work elsewhere.

                                                                                        1. 3

                                                                                          This post is a gist with a diff and not a lot of rationale behind the change, so I’m failing to see the utility of it. Could someone describe what these changes are about and if they are applicable to other situations?

                                                                                          1. 3

                                                                                            Could someone describe what these changes are about and if they are applicable to other situations?

                                                                                            The patch converts conditional branches to conditional move instructions. This can improve performance because conditional moves are cheaper than a mispredicted branch. The changes are generally applicable; Lomuto’s Comeback has a more detailed example of the performance benefits of removing mispredicted branches.
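                                                                                            To make that concrete, here’s a toy sketch of the same idea (not the actual patch; the Python below only illustrates the shape of the trick — in Python itself there’s no speedup, since interpreter overhead dwarfs branch costs, but a C compiler will typically turn the branch-free form into a CMOV):

```python
def min_branchy(a, b):
    # In a compiled language this is a conditional branch; a mispredicted
    # branch flushes the pipeline and costs many cycles.
    if a < b:
        return a
    return b

def min_branchless(a, b):
    # Classic branch-free select: -(a < b) is -1 (all bits set) when a < b
    # and 0 otherwise, so the mask picks a or b with pure arithmetic --
    # the same effect a conditional move (CMOV) achieves in hardware.
    return b ^ ((a ^ b) & -(a < b))

# Both agree with the built-in min() on a few cases, including ties
# and negatives.
for a, b in [(3, 7), (7, 3), (-5, 5), (4, 4)]:
    assert min_branchy(a, b) == min_branchless(a, b) == min(a, b)
```

                                                                                            The Lomuto’s Comeback post linked above walks through the same kind of transformation applied to quicksort partitioning.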

                                                                                          1. 40

                                                                                            I think this, and many other similar cases dating back to the Python 2.7 fallout, reveal an interesting divide within the community of Python users.

                                                                                            (This is entirely an observation, not a “but they should support it, if not until the thermal death of the Universe, at least until the Sun is reasonably close to being a Red Giant” rant. Also, seriously, 5 years is pretty good. It would’ve been pretty good even before the “move fast and break things” era.)

                                                                                            There are, on the one hand, the people who run Python as an application, or at the very least as an operational infrastructure language. They need security updates because running an unpatched Django installation is a really bad idea. Porting their codebase forward is not just necessary, it provides real value.

                                                                                            And then there are the people who run Python for their automated testing framework, for deployment scripts, for project set-up/boilerplate scripts and so on. They are understandably pissed at things like these because their buildiso.py script might as well run on Python 1.4. Porting their codebase forward is a (sometimes substantial!) effort that doesn’t really yield any benefits. Even the security updates are barely relevant in many such environments. Most of the non-security fixes are technically really useless: the bulk of the code was written X years ago and embeds all the workarounds that were necessary back then, too. Nobody’s going to go back and replace the “clunky workarounds” with a “clean, Pythonic” version, seeing how the code not only works fine but is literally correct and sees zero debugging or further development.

                                                                                            Lots of enterprise people – and this part applies to many things other than Python – still plan for these things like it’s 2001 and their massive orchestration tool is a collection of five POSIX sh scripts and a few small C programs written fifteen years ago. That is to say, their maintenance budget only has “bugs” and “feature requests” items, and zero time for “keeping up with the hundreds of open source projects that made it feasible to write our SaaS product with less than 200 people in the first place and which have many other SaaS products to keep alive so they’re not gonna stop for us”.

                                                                                            1. 19

                                                                                              A point you’re passing over (or at least expressing with some incredulity) is that, sometimes software is allowed to be “Done”. You’re allowed to write software that can reach an end state where it is not only feature-complete, but adding features to it would make it worse. IMO, it is not only possible for someone to do that, but it is good, because the current paradigm is inherently unstable in the same way that capitalism’s “Exponential Growth” concept is. The idea that software can be maintained indefinitely is a fallacy that relies on unlimited input resources (time, money, interest, etc), and the idea that new software grows to replace old is outright incorrect when you see people deliberately hunting out old versions of software with specific traits. (Actually, on the whole, features have been lost over time, but that’s a whole different discussion about the fact that the history of software is not taught, with much of it being bound up in long-dead companies and people).

                                                                                              For people maintaining such software, why would it make sense to rewrite large swathes of the codebase and run a high risk of introducing bugs in the process, many of which had probably already been fixed once? Sure, there’s the “security” aspect of it, and there will always be minor maintenance needed here and there, but rewriting the code to be compatible with non-EOL platforms not only incurs extra weeks or months (or even years) of effort, it invalidates all of the testing that you have accumulated against the current codebase as well.

                                                                                              What made me point this out is that you seem to regard this form of software as a negative, or as a liability. But at least half of modern software development seems to be what is effectively treading water, all due to bad decisions related to the dependencies that are chosen, or the sheer insufferable amount of abstraction cost we have accumulated and are still accumulating as an industry. Personally, I envy the people who get to maintain software where there is now very little to do aside from maintenance work.

                                                                                              1. 10

                                                                                                You’re allowed to write software that can reach an end state where it is not only feature-complete, but adding features to it would make it worse.

                                                                                                Doing this requires you to find a language, compiler and/or interpreter, build toolchain, dev tooling, operating system, etc., all of which must be in the “Done” state with hard guarantees. And while you’re free to build your own “Done” software, the key here is that nobody else is obligated to provide you with “Done” software, so you may have to build your own “Done” stack to get it, or pay the going rate for the shrinking number of people who are fluent in languages which are effectively “Done” because the only work happening in those languages these days is maintenance of half-century-old software systems.

                                                                                                Meanwhile, we already have tons of effectively “Done” software in the wild, in the sense that the people who made it have made a conscious decision to never do any type of upgrades or further work on it, and it’s generally not a good thing – especially when it turns out that, say, a few billion deployed devices worldwide are all vulnerable to a remote code execution bug.

                                                                                                1. 5

                                                                                                  Not every upgrade has to be backwards-incompatible. Perl 5 is a good example.

                                                                                                  1. 6

                                                                                                    Python releases are about as backwards incompatible as Perl releases. I think people just assume every upgrade is bad because of the Python 2 -> 3 upgrade. Worth remembering that realistically nobody tried doing a Perl 5 -> Raku (née Perl 6) migration.

                                                                                                    1. 2

                                                                                                      That’s probably because Raku was such a long time coming, and marketed from the start (at least as far back as I can remember, not being a Perl coder) as an “apocalypse”. I think nobody expected to be able to migrate, so nobody even bothered trying. Python, on the other hand, had compatibility packages like “six” and AFAIK it was always intended to be doable at least to upgrade from 2 to 3 (and it was, for a lot of code, quite doable). But then when people actually tried, the nitty-gritty details caused so much pain (especially in the early days) that they didn’t want to migrate. And of course essential dependencies lagging behind made it all so much more painful, even if your own pure Python code itself was easy to port it might be undoable to port a library you’re using.

                                                                                                      So I guess it boils down to expectation management :)
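                                                                                                      For anyone who never used them: the compatibility packages worked roughly like the hand-rolled shim below (a simplified sketch, not six’s actual API) — you wrote your code against the shim and the same file ran on both interpreters.

```python
import sys

PY2 = sys.version_info[0] == 2

if PY2:
    # This branch is only evaluated on Python 2, so a Python 3
    # interpreter never sees the bare `unicode` name.
    string_types = (str, unicode)  # noqa: F821

    def iterkeys(d):
        return d.iterkeys()
else:
    string_types = (str,)

    def iterkeys(d):
        return iter(d.keys())

# Code written against the shim runs unchanged on 2 and 3.
assert isinstance("hello", string_types)
assert sorted(iterkeys({"b": 2, "a": 1})) == ["a", "b"]
```

                                                                                                      The pain points were exactly where no shim could help: changed bytes/text semantics deep inside dependencies, for instance.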

                                                                                                  2. 5

                                                                                                    This is actually why standards are useful, they allow code to outlive any actually-existing platform. The code I have from 20 years ago that still builds and runs without changes usually has a single dependency on POSIX. I’m not running it on IRIX or BSD/OS like the author was, but it still works.

                                                                                                    1. 2

                                                                                                      Not necessarily, I think. One might want to do so in cases where there is external, potentially malicious user input. However, in highly regulated environments, where different parties exchange messages and are liable for their correctness, one can keep their tools without upgrading anything for a long time (or at least until the protocol changes significantly). There is simply no business reason to spend time on upgrading any part of the stack.

                                                                                                      1. 2

                                                                                                        Meanwhile, we already have tons of effectively “Done” software in the wild, in the sense that the people who made it have made a conscious decision to never do any type of upgrades or further work on it, and it’s generally not a good thing

                                                                                                        Good way to carefully misrepresent what I was talking about :)

                                                                                                        1. 1

                                                                                                          It’s more that this really is what “Done” means. My own stance, learned the hard way, is that the only way a piece of software can be “Done” is when it’s no longer used by anyone or anything, anywhere. If it’s in use, it isn’t and can’t be “Done”.

                                                                                                          And the fact that most software that approximates a “Done” state is abandonware, and the problems abandonware tends to cause, is the point.

                                                                                                          1. 1

                                                                                                            I disagree that this is what “Done” means, and I disagree with your implied point that this is in any way “inevitable”.

                                                                                                            1. 1

                                                                                                              The idea that software can be maintained indefinitely is a fallacy that relies on unlimited input resources

                                                                                                              That’s what you wrote above. Which implies that your definition of “Done” involves ceasing work on the software at a certain point.

                                                                                                              My point is that generally you get to choose “no further work will be done” or “people will continue to use it”. Not both. You mention people searching for older versions of software – what do you think a lot of communities do with old software? Many of them continue to maintain that software because they need it to stay working for as long as it’s used, which is incompatible with “Done” status.

                                                                                                              1. 1

                                                                                                                And yet if you read a paragraph down from there, you will see

                                                                                                                “For people maintaining such software,”

                                                                                                      2. 6

                                                                                                        A point you’re passing over (or at least expressing with some incredulity) is that, sometimes software is allowed to be “Done”.

                                                                                                        Oh, I’m passing over it because I prefer to open one can of worms at a time :-D.

                                                                                                        But since you’ve opened it, yeah, I’m with you on this one. There’s a huge chunk of our industry which is now subsisting on bikeshedding existing products because, having run out of useful things to do, it nonetheless needs to do some things in order to keep charging money and to justify its continued existence.

                                                                                                        I don’t think it’s a grand strategy from the top offices, I think it’s a universal affliction that pops up in every enterprise department, sort of like a fungus which grows everywhere, from lowly vegetable gardens to the royal rose gardens, and it’s rooted in self-preservation as much as a narrow vision of growth. Lots of UI shuffling or “improvements” in language standards (cough C++ cough), to name just two offenders, happen simply because without them an entire generation of designers and language consultants and evangelists would find themselves out of a job.

                                                                                                        So a whole bunch of entirely useless changes piggyback on top of a few actually useful things. You still need some useful things, otherwise even adults would call out the emperor’s nakedness and point out that it’s a superfluous (or outright bad) release. But the proportion, erm, varies.

                                                                                                        The impact of this fungus is indeed terrible though. If you were to accrue 20 years’ worth of improvement on top of, say, Windows XP, you’d get the best operating system ever made. But people aren’t exactly ecstatic over Windows 11 because it’s not just 20 years’ worth of improvement, and there’s a lot of real innovation (ASLR, application sandboxing) and real “incremental” improvement (good UTF-8 support) mixed with a whole lot of things that are, at best, useless. So what you get is a really bad pile of brown, sticky material, whose only redeeming feature is that there’s a good OpenVMS-ish kernel underneath and it still runs the applications you need. Even that is getting shaky, though – you can’t help but think that, with so many resources being diverted towards the outer layer, the core is probably getting bad, too.

                                                                                                        Personally, I envy the people who get to maintain software where there is now very little to do aside from maintenance work.

                                                                                                        I was one of them for about two years, let me assure you it is entirely unenviable, precisely because of all that stuff above. Even software that only needs to be maintained doesn’t exist and run in a vacuum, you have to keep it chugging with the rest of world. It may not need any major new features, but it still needs to be taught a new set of workarounds every other systemd release, for example. And, precisely because there’s no substantial growth in it, the resources you get for it get thinner and thinner every year, because growth has to be squeezed out of it somehow.

                                                                                                        Edit: FWIW, this is actually what the last part of my original message was about. As @ketralnis mentioned in their comment here, just keeping existing software up and running is not at all as simple as it’s made out to be, even if you don’t dig yourself into a hole of unreasonable dependencies.

                                                                                                      3. 8

                                                                                                        Lots of enterprise people – and this part applies to many things other than Python – still plan for these things like it’s 2001 and their massive orchestration tool is a collection of five POSIX sh scripts and a few small C programs written fifteen years ago.

                                                                                                        The funny thing is that the Python 2.x series was not the bastion of stability and compatibility people like to claim now, as they look back with nostalgia (or possibly just without experience of using Python back then). Idiomatic Python 2.0 and idiomatic Python 2.7 are vastly different languages, and many things that are now well-known and widely-liked/widely-relied-upon features of Python didn’t exist back in 2.0, 2.1, 2.2, etc. And the release notes for the 2.x releases are full of backwards-incompatible changes you had to account for if you were upgrading to newer versions.

                                                                                                        1. 8

                                                                                                          People probably remember Python 2 as the “bastion of stability and compatibility” because Python 2.7 was around and supported for 10 years as the main version of the Python 2 language. Which is pretty “bastion of stability and compatibility”-like. I know that wasn’t the intention when 3.0 was released, but it’s what ended up happening, and people liked it.

                                                                                                          1. 4

                                                                                                            So, obviously, the thing to do is to trick the Python core team into releasing Python 4, so that we get another decade of stability for Python 3.

                                                                                                        2. 5

                                                                                                          Is it really a problem in practice, though? If the small tool is actually large enough that porting would take too much time, then there are precompiled releases going back forever, docker images going back to 3.2 at least, and there’s pyenv. That seems like an OK situation to me. Anyone requiring the old version still has it available and just has to think: should I spend effort to upgrade, or spend effort to install the older version?
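                                                                                                          One small thing that helps with the “just install the older version” path (a sketch with a made-up version floor, not from any particular project): have the script pin its interpreter up front, so someone on the wrong Python gets pointed at pyenv or a docker image instead of a cryptic traceback halfway through.

```python
import sys

# Made-up version floor for an old, unported script: fail fast with an
# actionable message instead of a confusing error later on.
REQUIRED = (3, 2)

if sys.version_info < REQUIRED:
    sys.exit(
        "this script needs Python >= %d.%d; "
        "try pyenv or a python:%d.%d docker image" % (REQUIRED * 2)
    )

print("interpreter ok:", sys.version.split()[0])
```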

                                                                                                          1. 25

                                                                                                            Yes, it really is a problem in practice to try to keep your older unchanging code running. It’s becoming increasingly difficult to opt out of the software update treadmill, even (especially!) on things that ostensibly don’t need updating at all.

                                                                                                            Python 3.6 depends on libssl, which depends on glibc, the old version of which isn’t packaged for Ubuntu 16.04. But the security update for glibc’s latest sscanf vulnerability that lets remote attackers shoot cream cheese frosting out of your CD-ROM isn’t available on 16.04, and 17 dropped support for your 32-bit processor. And your CD-ROM.

                                                                                                            Sadly you can’t just opt out of the treadmill anymore. The “kids these days” expect constant connectivity, permanent maintenance, and instant minor version upgrades. They leave Github issues on your projects titled “This COBOL parser hasn’t had an update in several minutes, is it abandoned?” They wrap their 4-line scripts with Docker images from dockerhub, and it phones home to their favourite analytics service that crashes if it’s not available. Even if you don’t depend on all of those hosted services (and that’s harder than you think with npm relying on github, apt relying on launchpad, etc), any internet connectivity will drag you in via vital security updates.

                                                                                                            1. 5

                                                                                                              I’m not sure I buy the argument that “kids these days” have anything to do with Ubuntu’s decisions on how long to support their OS, and for what platforms.

                                                                                                              I’d personally jump to one of Debian (for years of few changes), Alpine (if it met my needs) or openSUSE Tumbleweed (if rolling was acceptable). I was surprised by the last, but Tumbleweed is actually a pretty solid experience if you’re OK with rolling updates. If not, Debian will cover you for another few years at least.

                                                                                                              If you need an install with a CD drive, maybe https://netboot.xyz/ could be helpful. There are a variety of ways to boot the tool, even an existing grub.

                                                                                                              1. 11

                                                                                                                Maybe it didn’t come through but I mean almost all of that to be hyperbole. The only factual bit is that it really is harder to keep unchanging code running than you’d think, speaking as somebody that spends a lot of time trying to actually do that. It’s easy to “why don’t you just” it, but harder to do in real life.

                                                                                                                Plus the cream cheese frosting. That’s obviously 100% true.

                                                                                                                1. 4

                                                                                                                  Plus the cream cheese frosting. That’s obviously 100% true.

                                                                                                                  In case anyone is wondering, this is really legit! Back in 2015 or so I used to keep an Ubuntu honeypot machine in the office for this precise reason – it was infected with the cream cheese squirting malware and a bunch of crypto miners, which kept the CPU at 100% and, thus, kept the cream cheese hot. It was oddly satisfying to know that the company was basically paying for (part of) my lunch in such a contorted way, as I only had to supply the cream cheese.

                                                                                                              2. 1

                                                                                                                I was asked a while ago to do some minor improvements to a webshop system that had been working mostly fine for the customer. When I looked into it, it turned out to be a whole pile of custom code which was built on an ancient version of CakePHP, which only supported PHP versions up to 5.3. Of course PHP 5 had been deprecated for a while and was slated to be dropped by the (shared) hosting provider they were using.

                                                                                                                So I cautioned that their site would go down pretty soon, and indeed it did. I tried upgrading CakePHP, but eventually got stuck, not only because the code of the webshop was an absolute dumpster fire (without any tests…), but also because CakePHP made so many incompatible changes in a major release (their model layer for db storage was rewritten from scratch, as I understand it) that updating it was basically a rewrite.

                                                                                                                So after several days of heavy coding, I decided that it was basically an impossible task and had to tell the customer that it would be smarter to get the site rebuilt from scratch.

                                                                                                              3. 3

                                                                                                                It depends on how the whole thing is laid out. I’m a little out of my element here but I knew some folks who were wrestling with a humongous Python codebase in the second category and they weren’t exactly happy about how simple it was.

                                                                                                                For example, lots of these codebases see continuous, but low-key development. You have a test suite for like forty products, spanning five or six firmware versions. You add support for maybe another one every year, and maybe once every couple of years you add a major new feature to the testing framework itself. So it’s not just a matter of deploying a legacy application that’s completely untouched, you also have to support a complete development environment, even if it’s just to add fifty lines of boilerplate and maybe one or two original test cases a year. Thing is, shipping a non-trivial Docker setup that interacts with the outside world a lot to QA automation developers who are not Linux experts is just… not always a very productive affair. It’s not that they can’t use Docker and don’t want to learn it, it’s just that non-trivial setups break a lot, in non-obvious ways, and their end users aren’t always equipped to un-break them.

                                                                                                                There’s also the matter of dependencies. These things have hundreds of them and there’s a surprising amount of impedance matching to do, not only between the “main package” and its dependencies, but also between dependencies and libraries on the host, for example. It really doesn’t help that Python distribution/build/deployment tools are the way they are.

                                                                                                                I guess what I’m saying is it doesn’t have to be a problem in practice, but it is a hole that’s uncannily easy to dig yourself into.

                                                                                                              4. 4

                                                                                                                It’s also hard to change code from any version of Python, just because the language is so permissive (which is part of the appeal, of course). Good luck understanding or refactoring code (especially code authored by someone else) without type hinting, tests, or basic error-level checks by a linter (which doesn’t even ship out of the box).
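
                                                                                                                A tiny illustrative sketch of the kind of thing meant here (hypothetical code, not from any real project): without hints, a function that silently returns `None` on one path passes at runtime until a caller trips over it, whereas an annotated version would let a type checker like mypy flag the risky call site.

                                                                                                                ```python
                                                                                                                # Hypothetical example: the implicit None return is easy to miss.
                                                                                                                def find_user(users, name):
                                                                                                                    for user in users:
                                                                                                                        if user["name"] == name:
                                                                                                                            return user
                                                                                                                    # Falls through and implicitly returns None.

                                                                                                                users = [{"name": "ada", "admin": True}]

                                                                                                                hit = find_user(users, "ada")
                                                                                                                miss = find_user(users, "bob")  # None, and nothing warns us

                                                                                                                print(hit["admin"])
                                                                                                                # miss["admin"] would raise TypeError at runtime.

                                                                                                                # With hints, mypy reports the unsafe subscript before the code runs:
                                                                                                                # def find_user(users: list[dict], name: str) -> dict | None: ...
                                                                                                                ```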

                                                                                                              1. 4

                                                                                                                These tests look interesting, but they are essentially just a bunch of random stuff. I looked at the lame encoding benchmark, and it shows that BSD is slower. A more interesting result would be to answer why it is slower.

                                                                                                                1. 2

                                                                                                                  This looks great! I’m all for database-centric architectures. I’m a big fan of Materialize (right in the process of introducing it in a project I’m working on) and followed Eve closely when it was still alive. (I created the little bouncing ball example that became a benchmark of sorts towards the end.) Will definitely try to participate in this!

                                                                                                                  1. 2

                                                                                                                    What are Materialize and Eve? I’m puzzled by the “little bouncing ball” and its relationship with databases — that sounds pretty interesting!

                                                                                                                    1. 4

                                                                                                                      Materialize is a streaming database based on differential dataflow, from Frank McSherry and team. McSherry is well-known for his “Scalability! But at what COST?” paper [PDF], where COST stands for “Configuration that Outperforms a Single Thread”.

                                                                                                                      1. 3

                                                                                                                        This is one of my favorite papers from Frank McSherry. The other being “A Cool and Practical Alternative to Traditional Hash Tables”, which is a great intro to Cuckoo hashing, and a great analysis: https://www.ru.is/faculty/ulfar/CuckooHash.pdf
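
                                                                                                                        For anyone unfamiliar, a rough sketch of the core idea (my own minimal toy, not from the paper): each key has one candidate slot in each of two tables, so lookups probe at most two slots; inserts evict (“kick”) any occupant to its alternate table.

                                                                                                                        ```python
                                                                                                                        class CuckooHash:
                                                                                                                            """Toy cuckoo hash: two tables, two hash functions."""
                                                                                                                            def __init__(self, size=16):
                                                                                                                                self.size = size
                                                                                                                                self.tables = [[None] * size, [None] * size]
                                                                                                                                self.seeds = (0x9E3779B9, 0x85EBCA6B)

                                                                                                                            def _slot(self, key, i):
                                                                                                                                return (hash(key) ^ self.seeds[i]) % self.size

                                                                                                                            def get(self, key):
                                                                                                                                # At most two probes, ever -- the big win of cuckoo hashing.
                                                                                                                                for i in (0, 1):
                                                                                                                                    entry = self.tables[i][self._slot(key, i)]
                                                                                                                                    if entry is not None and entry[0] == key:
                                                                                                                                        return entry[1]
                                                                                                                                return None

                                                                                                                            def put(self, key, value, max_kicks=32):
                                                                                                                                for i in (0, 1):  # update in place if key already present
                                                                                                                                    idx = self._slot(key, i)
                                                                                                                                    entry = self.tables[i][idx]
                                                                                                                                    if entry is not None and entry[0] == key:
                                                                                                                                        self.tables[i][idx] = (key, value)
                                                                                                                                        return
                                                                                                                                item, i = (key, value), 0
                                                                                                                                for _ in range(max_kicks):
                                                                                                                                    idx = self._slot(item[0], i)
                                                                                                                                    # Place item; evicted occupant (if any) moves on.
                                                                                                                                    item, self.tables[i][idx] = self.tables[i][idx], item
                                                                                                                                    if item is None:
                                                                                                                                        return
                                                                                                                                    i = 1 - i  # displaced item goes to its other table
                                                                                                                                raise RuntimeError("kick cycle; a real table would rehash/grow")
                                                                                                                        ```

                                                                                                                        The paper’s analysis covers what this sketch glosses over: how full the tables can get before kick cycles become likely, and when to rehash.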

                                                                                                                    1. 16

                                                                                                                      It has been five days since that suggestion was posted and we’re going into what is usually the busiest month of the year for people with regard to social obligations. I think the key ingredient missing here is time.

                                                                                                                      1. 10

                                                                                                                        The reason I raised this issue is not the latest suggestion for the nix tag, but all the previous ones, which ended without any reply from the moderators. I get that people are busy, especially this time of year, and sometimes responses come later. But I (and probably tag proposal authors, too) would appreciate any kind of response, even “We’re too busy right now, but if you send us a DM in x weeks, we’ll do our best to resolve the matter”.