1. 3

    One suggestion is to rebind the default prefix (C-b) to C-z instead: while C-b moves backwards one character in the default shell config (and hence I use it all the time), C-z backgrounds a job, which I do rarely enough that typing C-z z to do so is perfectly fine.
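
    If you want to try that, it’s a couple of lines in ~/.tmux.conf (a sketch; adjust to taste):

    unbind C-b            # free the default prefix
    set -g prefix C-z     # C-z is the new prefix
    bind z send-prefix    # C-z z passes a literal C-z through to the shell (shadows the default zoom binding)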

    1. 5

      I background+foreground jobs pretty frequently. So I need C-z to be free.

      Personally I set my prefix to C-a.
      I think it’s usually used to go to the start of the input line in most shells by default, but I set -o vi in my shell so that doesn’t apply to me.

      A friend of mine sets their prefix to a backtick. Which I thought was interesting, but I like to use backticks now and then…

      1. 7

        C-a is a super common choice, as it’s the same prefix that screen uses by default. The screen folks, in turn, either had the right idea or it was a pretty lucky guess: C-a takes you to the beginning of the line, which is not needed too frequently in a shell.

        On the other hand it’s the “go to the beginning of the line” key in Emacs, too, so, uhh, I use C-o for tmux. I suppose it might be a good choice for the unenlightened vim users out there :-).

        Another prefix binding that I found to be pretty good is C-t. Normally, it swaps two characters before the cursor, a pretty neat thing to have over really slow connections but also frequently unused. Back when I used Ratpoison, I used it as the Ratpoison prefix key.

        I think C-i (normally that’s a Tab) and C-v (escape the next character) are also pretty good, particularly the former one, since Tab is easily reachable from the home row and available on pretty much any keyboard not from outer space.

        I’ve no idea why I spent so much time thinking about these things, really. Yep, I’m really fun at parties!

        1. 2

          I use C-o for tmux

          Yeah, I’ve used C-o for screen since I guess the mid-90s because I couldn’t deal with C-a being snarfed, possibly because I was using a lot of terminal programs which used emacs keybindings at the time… Now I’m slowly moving over to tmux and keeping the C-o even though I rarely use anything terminal-based these days.

          1. 1

            C-o is pretty important in vim actually. Personally I use M-t, which doesn’t conflict with any vim bindings (vim doesn’t use any alt/meta bindings by default) or any of my i3 bindings (I do use alt for my i3 super key)

            1. 1

              Oh, I had no idea, the Church of Emacs does not let us meddle with the vim simpletons, nor dirty our hands with their sub-par editor :-P. I just googled it and, uh, yeah, it seems important, I stand corrected!

          2. 2

            Personally I set my prefix to C-a. I think it’s usually used to go to the start of the input line in most shells by default …

            And in Emacs; I use it multiple times an hour, so unfortunately that is out for me.

            I think that I have experimented with backtick in screen, after I started using Emacs. I have a vague memory of problems when copy-pasting lines of shell which led me to use C-z instead.

            1. 2

              I’ve used C-a in screen for ages and carried it over to tmux. Hitting C-a a to get to the start of the line is so ingrained now that it trips me up when I’m not using tmux.

          3. 2

            I just use C-a, primarily because I moved to tmux from using screen, which uses that binding for the prefix.

            1. 1

              Yeah I found C-a much better than C-b, much less of a stretch, but eventually I started getting forearm pain in my left arm from all the pinky action. I’ve moved to M-t, most often using my right hand for alt and my left hand for t.

            2. 1

              C-f

              1. 1

                Unfortunately that is forward-char🙂

                1. 1

                  the lesser of all evils

              2. 1

                I use C-q. It’s extremely easy for me to hit the keys together, since I remap capslock to be control, and is quite comfortable.

                This leads to fun when I accidentally killed my other applications, but at this point it’s ingrained so I don’t mess up.

                1. 1

                  Interesting! I might give that a shot. I do use C-q to quote characters, but not that often, only once or twice every couple of days.

              1. 8

                If someone is able to get root on the host, it’s really game over. If an attacker is somehow able to escape from a container, things don’t look very good either.

                If you do want to take the next step in the security department (not that it’s strictly needed, as you point out), you could consider either non-root containers (pdf page 7) or rootless docker.

                The benefit is that you can keep the server protected even if an attacker gains remote code execution on a single container (pretty common nowadays in the age of log4shell). Containers running as non-root users present a significant hurdle to attackers trying to break out of the container, since many kernel code paths are not even reachable by unprivileged users. So future zero-day container breakouts like this are much less likely to affect you.

                For rootless mode, containers still run as a user they think is root (UID=0) but that UID is remapped to an unprivileged UID on the host. So an attacker who breaks out of the container ends up becoming an unprivileged user who can’t do much damage.
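
                To make this concrete, a rough sketch (the UID and the subordinate-ID range below are arbitrary placeholders):

                $ docker run --rm --user 1000:1000 alpine id   # non-root container
                uid=1000 gid=1000 ...
                $ grep "$USER" /etc/subuid                     # rootless mode: host range that container UIDs map into
                alice:100000:65536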

                I’m selfishly posting this because all the blog posts and stuff on this topic are garbage and the concept is pretty new, so like 99% of Dockerfiles out there run as root for no reason, and I’m hoping more people join the non-root party :)

                1. 3

                  I would suggest switching from Docker to Podman if you want to run rootless containers.
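
                  With Podman that’s the default when you run it as a normal user, e.g.:

                  $ podman run --rm alpine id               # looks like root inside the container…
                  $ podman unshare cat /proc/self/uid_map   # …but is remapped via a user namespace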

                  1. 2

                    Could you elaborate on the reasons to choose one over the other?

                1. 4

                  For hardware or artificial-scarcity fetishists I guess this is really exciting. For everyone else, I don’t know why anyone would donate money for something which we can perfectly emulate on just about every platform and architecture you can imagine. These emulators also have the benefit of allowing you to use digital copies of the games instead of increasingly rare and limited cartridges.

                  1. 4

                    I don’t think the “rare and limited” argument holds. There are things like the Everdrive which emulate NES cartridges.

                    This debate is also held for pretty much anything that’s old and collectable: After restoring an old motorcycle, should you ride it or put it in a display case to look at?

                    1. 1

                      It should be possible to make novel hardware cartridges too, although perhaps more difficult to legally sell them for copyright reasons (but you could imagine selling a generic NES cartridge that reads data from an SD card, leaving it to the end user to be the one to violate Nintendo’s copyright by downloading the NES ROMs and putting them on the SD card).

                      1. 1

                        Krikzz (https://krikzz.com) has made an industry out of this very idea.

                    1. -8

                      the answer is: because it’s Rust

                      1. -1

                        Genuinely though - if it were any other modern language then I wonder if this type of regression would have made it this far. One might not assume anything is wrong because Rust takes so long to build and resolve dependencies even when there is no explicit “bug”.

                        1. 1

                          As you’ve seen yourself, it’s a corner case. And the build time is on par with C++? https://prev.rust-lang.org/en-US/faq.html#why-is-rustc-slow

                          Meanwhile, Rust is working on improving its build times with cranelift: https://github.com/rust-lang/rust/pull/77975

                          And there are so many more factors in that problem (>300 crates for this post, how many generics are used, how many proc-macros…).

                          if it were any other modern language then I wonder if this type of regression would have made it this far

                          It’s a fluke; compile performance is continuously tested on https://perf.rust-lang.org/

                          There are even whole-ecosystem runs for releases (https://crater-reports.s3.amazonaws.com/pr-87050/full.html), and the warp regression still didn’t get caught.

                      1. 3

                        Not a new take on the subject. I’ve been seeing stuff with this framing of self-hosted email for years, while a couple of friends and I have shared a self-hosted postfix server for years that’s handled 100,000s (maybe millions?) of mails without any serious issues, and honestly without much ongoing maintenance…

                        1. 5

                          Looks like nix-shell is a handy tool to provide project-level reproducibility, and I think it’s much more useful than NixOS, which seems to offer workstation-level reproducibility. Most people replace their workstations very infrequently, and when they do, chances are that there exists some migration assistant (e.g. Time Machine or perhaps dd(1)). I don’t think I need to “deploy” my workstation-level configuration anywhere; it’s only meant for me to begin with.
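
                          Even the ad-hoc form gets you a throwaway environment with exactly the tools you ask for (the packages here are just examples):

                          $ nix-shell -p python3 jq --run 'python3 --version'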

                          Tangentially, as a research assistant, I need to share a gigantic computing cluster with everyone affiliated with the university, and I don’t think I can convince the sysadmin to install Nix on it (especially when the installer is a scary curl -L https://nixos.org/nix/install | sh). I know a root-level /nix directory is required to make use of the binary cache since the absolute path to dynamic libraries is embedded in the cached binaries, but there must be some workaround. Like, why not just scan the relocation table of each cached binary and replace the prefix of the paths?

                          1. 12

                            I’m currently using NixOS as my main daily driver. The advantage for me isn’t workspace migrations, but workspace versioning. There is a lot of cruft that builds up on workspaces over time: packages that you download to try out, work-arounds that stick, etc. These are all versioned & documented in the git log. It also lets me do stupid things without having to really worry about the consequences on my machine, as I can roll back the changes easily.

                            1. 1

                              I tried to configure the whole workspace with Nix on macOS, but it turns out that I cannot even install Firefox. This gives me the impression that nixpkgs is currently not mature/popular enough (at least on macOS), and that at some point I will be forced to install Homebrew/MacPorts/Anaconda or run curl shiny.tool/install.sh | sh to get some niche package, and suddenly I have packages outside of version control.

                              Also, nix-env -qaP some_package is already ridiculously slow, and with more packages in the repository, it will probably become even slower. More importantly, even a huge package repository cannot include everything, so from time to time users must write Nix expressions themselves, which I don’t think would be trivial (if it was, then Nix would have already automated that).

                              I’m not complaining, but that’s the reason I’m not bold enough to use Nix as my daily driver. I guess I should donate some money to Nix to facilitate its growth.

                              1. 5

                                I don’t disagree with the facts that you wrote, but I thought I’d comment since I cash some of them out differently… For a little context, I first took the dive into NixOS in early 2018, when my Windows desktop’s motherboard flamed out. It was rocky (a mix of Nix, plus it being my first desktop Linux), but I started using nix and nix-darwin when I replaced my macbook air in early 2019.

                                I tried to configure the whole workspace with Nix on macOS, but it turns out that I cannot even install Firefox. This gives me the impression that nixpkgs is currently not mature/popular enough (at least on macOS), and that at some point I will be forced to install Homebrew

                                1. A linux-first package manager’s ability to manage all software including desktop apps on macOS is a very stringent ruler. (Don’t get me wrong–it’ll be good if it works some day–but it’s a high bar, and Nix can be very useful without clearing it.)

                                2. Yes–keep homebrew. TBH, I think forcing desktop apps, many of which already want to auto-update, into the Nix paradigm is a bit weird. I, to be frank, find Nix on macOS, with nix-darwin, to be a very nice compromise over the purity of these apps on NixOS.

                                  I actually find it more ergonomic to let these apps update as they have new security patches, and update my Nixpkgs channel less frequently. Since early 2019, I think I’ve only twice used Homebrew to install a package–I’ve used it almost exclusively for casks, and the packages were really just to play with something that was in Homebrew to decide if it was worth porting to Nix (neither was). Once again, I’d say the freedom to do this is a really nice compromise over purity in NixOS.

                                  suddenly I have packages outside of version control.

                                  You can still version-control a .Brewfile if it is for apps. It’s obviously not the same level of reproducibility, but if I’m trying to rebuild the software I had 3 years ago I’m generally not doing it for Chrome, Firefox, Steam, etc. I added a section to my backup script to barf out a report on whether I have installed anything with brew that isn’t in my Brewfile (a rough sketch follows this list). If I really cared, I think I could script it to uninstall those packages every day to force the issue.

                                  If it’s for a smaller package and you care about version-controlled reproducibility this much, you’ll generally have enough motivation to port it to Nix. (In my experience it has been true, but I recognize that the practicality of this will depend on the scope of the package in question…)
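
                                  For reference, that report boils down to something like this (an untested sketch; without --force, brew bundle cleanup only lists the strays instead of uninstalling them):

                                  $ brew bundle check --file="$HOME/.Brewfile"     # is everything in the Brewfile installed?
                                  $ brew bundle cleanup --file="$HOME/.Brewfile"   # lists installed packages missing from the Brewfile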

                                More importantly, even a huge package repository cannot include everything, so from time to time users must write Nix expressions themselves, which I don’t think would be trivial

                                This is the proverbial two-edged sword. So, yes, yes, this can happen. I am personally very conservative when it comes to recommending Nix to anyone who isn’t open to learning the language. I think it can be okay, narrowly, as a tool with no knowledge of the language. Learning it can be frustrating. But:

                                • Nix can be a really big lever. I’m not sure if this is of much use to people who don’t program, but I feel like my time was well spent (even if it was a bigger investment than it needed to be).
                                • A lot of the difficulty of learning to write Nix packages has honestly just been my near-complete lack of experience with the processes for going from source to installed software in Unix-alikes. If you already know a lot about this, it’ll be mostly about the language.
                                • Packages aren’t all hard to write. They certainly can be nightmarish, but packages for “well-behaved” software can be fairly simple. Consider something like https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/misc/smenu/default.nix which is almost entirely metadata by volume (a minimal sketch follows this list).
                                • It is fairly easy to manage/integrate private packages (whether that’s from scratch, or just overrides to use PRs I took the time to submit that the upstream is ignoring). I end up writing a Nix package for most of the unreleased/private software on my system (even when I could use it without doing so).
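
                                To illustrate that last point, here’s roughly what such a “well-behaved” package expression looks like (the name, URL and hash are placeholders, not a real package):

                                { lib, stdenv, fetchurl }:

                                stdenv.mkDerivation rec {
                                  pname = "sometool";    # hypothetical package
                                  version = "1.0";
                                  src = fetchurl {
                                    url = "https://example.org/${pname}-${version}.tar.gz";
                                    sha256 = lib.fakeSha256;   # Nix reports the real hash on the first build attempt
                                  };
                                  meta = with lib; {
                                    description = "A tiny example package";
                                    license = licenses.mit;
                                  };
                                }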

                                (if it was, then Nix would have already automated that).

                                Have any other package managers automated the generation of package expressions? I’m not terribly knowledgeable on prior art, here. If so, you’ve probably got a point. IME most of the work of writing package expressions is human knowledge going into understanding the software’s own packaging assumptions and squaring them with Nix. I’d be a little surprised if this is highly automatable. I can imagine translation of existing packages from other languages being more tractable, but IDK.

                                1. 2

                                  A lot of things that use standard frameworks can be mostly automated. Debian/Ubuntu are doing pretty well with dh. Nix already has templates: https://github.com/jonringer/nix-template

                                  It’s not perfect, but as long as you’re using common tooling, packaging is not terribly hard.

                                2. 4

                                  As to -qaP, unfortunately you learn not to do it; instead I’d recommend using Nix 2.0’s nix search, as it has some caching (and was in fact introduced primarily to solve the problem with -qaP), and then e.g. nix-env -iA nixpkgs.some_package. Or, alternatively, maybe nix profile install, though I haven’t experimented with it yet myself.
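
                                  Concretely (the package is just an example, and exact syntax varies a bit between Nix versions):

                                  $ nix search ripgrep            # cached, so much faster than nix-env -qaP
                                  $ nix-env -iA nixpkgs.ripgrep   # install by attribute path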

                                  As to getting some niche package: yes, at some point you’ll either have to do this, or write your own Nix expression to wrap it. FWIW, not every byte in the world is wrapped by Nix, and This Is Just A FOSS Project Run By Good-Willing Volunteers In Their Spare Time, and You Can (Try To) Contribute; going into another rabbit hole trying to wrap Your Favourite Package™ in a Nix expression is probably something of a rite of passage.

                                  I like to draw a parallel between Nix and the earlier days of Linux, before Ubuntu, when you had to put a lot of time into it to have some things. A.k.a. your typical day at the bleeding edge of technology (though Nix is actually already not so much bleeding as it was just a few years ago). And, actually, people say Nixpkgs is kinda at the top on https://repology.org.

                                  But you know, at least when a package is broken on Nixpkgs, it doesn’t send your whole OS crashing & burning into some glitchy state from which you’ll not recover for the next couple years… because as soon as you manage to get back to some working terminal & disk access (assuming things went really bad on NixOS), or in the worst case restart to GRUB, you’re just one nixos-rebuild generation away from your last known good state. And with flakes, you nearly certainly even have the source of the last known good generation in a git repo.

                                  1. 1

                                    I think the biggest problem is that for Nix to be useful, we must achieve nearly complete package coverage, i.e. almost all packages must be installable via Nix. Covering 90% of the most popular packages is still not good enough, because even a single non-reproducible package in the whole dependency graph will ruin everything. It’s an all-or-nothing deal, and assuming package usage follows a power-law distribution, we will have a very hard time covering the last few bits. This is very different from Linux, where implementing 90% of the functionality makes a pretty useful system.

                                    Since you mentioned Linux, I’d like to note that packages from the system repository of most distributions are outdated, and users are encouraged to install from source or download the latest release from elsewhere (e.g. on Ubuntu 20.04, you must use the project-specific “PostgreSQL Apt Repository” to install PostgreSQL 13, which was released over a year ago). I guess some people took the effort to package something they want to use, but lack the incentive to keep maintaining it. While it’s perfectly fine to sidestep apt or yum and run make && make install instead, you can never sidestep Nix because otherwise you would lose reproducibility. How can the Nix community keep nearly all packages roughly up to date? I have no clue.
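
                                    For the record, that PostgreSQL dance on Ubuntu 20.04 looks roughly like this (per the project’s documented setup at the time; details may have changed since):

                                    $ echo "deb http://apt.postgresql.org/pub/repos/apt focal-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
                                    $ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
                                    $ sudo apt-get update && sudo apt-get install postgresql-13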

                                    1. 5

                                      What I’m trying to answer to this is: think small and “egoistically”. Instead of thinking how Nix is doomed from a “whole world” perspective, try to focus just on your own localised use case and how much reproducibility you need yourself. If you must have 100% reproducibility, it means you either have enough motivation and resources to wrap the (finite number of) specific dependencies that you need and are not yet wrapped, or it’s apparently more of a would like to than a must have, i.e. you have other, higher priorities overriding that.

                                      If the latter, you can still use Nix for those packages it provides, and add a few custom scripts over that doing a bit of curl+make or whatsit that you’ve been doing all the time till now (though once you learn to write Nix expressions for packages, you may realize they’re not really much different from that). Unless you go full NixOS (which I don’t recommend for starters), your base distro stays the same as it was (you said you don’t have root access anyway, right?) and you can still do all of what you did before.

                                      If some parts of your system are reproducible and some are not (yet), is it worse than if none are? Or maybe it is actually an improvement? And with some luck and/or persistence, eventually others may start helping you with wrapping the “last mile” of their personal pet packages (ideally, when they start noticing the benefits, i.e. their Nix-wrapped colleagues’ projects always building successfully and reproducibly and not breaking, and thus them being able to “just focus on the science/whatsit-they’re-paid-for”).

                                      1. 2

                                        That makes sense. IMHO Nix can and should convince package authors to wrap their own stuff in Nix. It can because the Nix language is cross-platform (this is not the case for apt/yum/brew/pkg); it should because only authors can make sure the Nix derivations are always up-to-date (with a CI/CD pipeline or something) while minimizing the risk of a supply chain attack.

                                        1. 4

                                          That is not how the open-source movement works. You don’t get to tell people what they should do; you rather take with humbleness and gratitude what they created, try to help them by contributing back (yet humbly enough to be ready to accept and respect if they might not take your contribution for some reason - though knowing you’re also free to fork), and yes, this means to possibly also contribute back ideas, but with the same caveat of them possibly not being taken - in this case even more often, given that ideas are a dime a dozen. And notably, through contributing back some high quality code, you might earn some recognition that might give you a tiny bit more attention when sharing ideas. Ah, and/or you can also try to follow up on your ideas with ownership and actions; this tends to have the highest chance of success (though still not 100% guaranteed).

                                          That said, I see this thread now as veering off on a tangent from the original topic, and as such I think I will take a break and refrain from contributing to making it a digression train (however I love digressions and however tempting this is), whether I agree or not with any further replies :) thanks, cheers and wish you great Holidays! :)

                                      2. 2

                                        I think the biggest problem is that for Nix to be useful, we must achieve nearly complete package coverage, i.e. almost all packages must be installable via Nix.

                                        Why do you think so? I’m wondering why this applies to Nix but not to Homebrew or apt or yum or the likes? One can still build a package manually by setting up the dependencies in a nix shell – that’s no different from building something that package managers of other systems still don’t have.

                                        1. 3

                                          From my understanding, Nix aims to provide a reproducible environment/build, so it must exhaustively know about every piece of dependency. Homebrew, apt, and yum don’t have such an ambition; they just install packages, and can thus happily co-exist with other package managers and user-installed binaries.

                                          1. 6

                                            Nix-build, yes; nix-shell, no. In a nix-shell env, you still see all of your pre-existing system, plus what nix-shell provides as an “overlay” (not in docker filesystem sense, just extra entries in PATH etc.). It reproducibly provides you the dependencies you asked it to provide, but it doesn’t guarantee reproducibility of what you do over that. So you could start with nix-shell (or nix develop IIRC in case of flakes).

                                    2. 3

                                      This gives me the impression that nixpkgs is currently not mature/popular enough (at least on macOS)

                                      Have no idea about Mac, but my understanding is that on Linux the amount of packaged stuff for NixOS is just ridiculously high: https://repology.org/repositories/graphs. Anecdotally, everything I need on daily basis is there.

                                      Still, there are cases when you do want to try a random binary from the internet (happened to me last time when I wanted to try JetBrains Fleet first public release), and yeah, NixOS does make those cases painful.

                                      1. 1

                                        Nix on macOS should not even be a thing.

                                    3. 7

                                      NixOS […] seems to offer workstation-level reproducibility.

                                      Don’t forget about server reproducibility! This is especially nice when you need to spin up multiple servers for a single project that need similar configuration.

                                      1. 2

                                        In which case I either have docker/kubernetes/podman or I’m using ansible to be fast and productive. Sure, you may get some stuff done much more precisely in NixOS, but that’s definitely not worth the hassle. That said: best of luck to NixOS, hopefully it’ll be stable enough one day.

                                        1. 8

                                          Wait, what hassle? And what about it isn’t “stable?” Citation needed? It’s stable enough for every NixOS user I know on server and desktop and it’s stable enough for me. Hassle is a very vague word that could mean a lot of things, so if not for the fact that exactly zero of those possible meanings make what you said a true statement, I wouldn’t know how to respond to this. What is it about NixOS you think is a hassle?

                                          1. 8

                                            Heh, my previous company went from NixOS to “let’s deploy using ansible on Ubuntu boxes, everyone knows those”. Productivity and velocity just went down the drain. Oh, the horrors, oh the PTSD… But everyone has different experiences, sometimes some tools work, sometimes they don’t.

                                            1. 3

                                            much more precisely in NixOS

                                              I don’t know how you could get any more precise than NixOS. It specifies everything by hash, all the way down to a linker. I’ve never seen anybody do anything like that with any other system.

                                          2. 5

                                            Looks like nix-shell is a handy tool to provide project-level reproducibility,

                                            Definitely. Every project I use now has a shell.nix file that pins the tools I use. I switched to that workflow after I was bitten several times by brew replacing python, so virtual environments were not working, or completely forgetting what tools I need in a project (after returning to it a year later). shell.nix is acting both as a bill of materials and recipe for fetching the right tools.
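
                                          A minimal shell.nix along those lines (the nixpkgs revision is a placeholder; pin whichever commit you actually use, and the tools are just examples):

                                          let
                                            pkgs = import (fetchTarball
                                              "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz") {};
                                          in
                                          pkgs.mkShell {
                                            buildInputs = [ pkgs.python3 pkgs.jq ];   # the project’s toolchain, versioned with the repo
                                          }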

                                            1. 4

                                              For me that work-around is ‘containers’. As in nix generates the containers (reproducible!), the clusters run the containers, and in the containers it’s /nix.
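
                                            With nixpkgs’ dockerTools that’s surprisingly little code, e.g. (a sketch; the packaged program is arbitrary):

                                            pkgs.dockerTools.buildLayeredImage {
                                              name = "hello";
                                              config.Cmd = [ "${pkgs.hello}/bin/hello" ];
                                            }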

                                              1. 2

                                                Check if you can get the following to print “YES”:

                                                $ unshare --user --pid echo YES
                                                YES
                                                

                                                or any of the following to print CONFIG_USER_NS=y (they all check the same thing IIUC, just on various distributions some commands or paths differ):

                                                $ zgrep CONFIG_USER_NS /proc/config.gz
                                                CONFIG_USER_NS=y
                                                $ grep CONFIG_USER_NS /boot/config-$(uname -r)
                                                CONFIG_USER_NS=y
                                                

                                                If so, there’s reportedly a chance you might be able to install Nix without root permissions.

                                              If you manage to get it, personally, I would heartily recommend trying to get into “Nix Flakes” as soon as possible. They’re theoretically still “experimental”, and even harder to find reliable documentation about than Nix itself (which is somewhat infamously not-easy already), but IMO they seem to kinda magically solve a lot of auxiliary pain points I had with “classic” Nix.

                                              Also, as a side note, the nix-shell stuff apparently came much earlier than NixOS. The original thesis and project was just Nix, and NixOS started later as another guy’s crazy experiment, AFAIU.

                                                EDIT: If that fails, you could still try playing with @ac’s experimental https://github.com/andrewchambers/p2pkgs.

                                                EDIT 2: Finally, although with much less fancy features, there are still some interesting encapsulation aspects in 0install.net: “0install also has some interesting features not often found in traditional package managers. For example, while it will share libraries whenever possible, it can always install multiple versions of a package in parallel when there are conflicting requirements. Installation is always side-effect-free (each package is unpacked to its own directory and will not touch shared directories such as /usr/bin), making it ideal for use with sandboxing technologies and virtualisation.”

                                                1. 1

                                                This article did send me searching for “how do you use flakes and nix-shell at the same time”, and apparently there’s nix develop now. This link has some details for both system-wide and local setups: https://www.tweag.io/blog/2020-07-31-nixos-flakes/
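
                                                A minimal flake.nix that gives nix develop something to chew on looks like this (system hardcoded for brevity; the package is just an example):

                                                {
                                                  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
                                                  outputs = { self, nixpkgs }:
                                                    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
                                                    in {
                                                      devShell.x86_64-linux = pkgs.mkShell {
                                                        buildInputs = [ pkgs.jq ];
                                                      };
                                                    };
                                                }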

                                                  1. 1

                                                    Yeah, there’s also https://nixos.wiki/wiki/Flakes, and generally that’s the issue, that you need quite some google-fu to find stuff about them :)

                                              1. 8

                                                I’m happy to see a story on this, but wish they would mention that there was a time when everything was “self-hosted”.

                                                1. 9

                                                  Was there? In the early days of the Internet, the only things connected to it were time-sharing machines, so you’d have an account on one of those, which would host things. Prior to the Internet, you’d have BBSs. In both cases, you’d connect to someone else’s computer to do things. In the early days of the public Internet, ISPs provided web and email hosting and most people use those (Hotmail was disruptive in a large part because it was a way of getting an email address that wasn’t tied to your ISP). I’m not sure there was ever a time when everything was self-hosted.

                                                  1. 3

                                                    I think a lot of self-hosters run services that are used by family and friends. Calling it “self-hosting” might be a misnomer, but it’s close enough.

                                                    And that’s not too different from each site (e.g. university or company) providing time-sharing machines with mail and .plan files for their 100-1000 users or so.

                                                    1. 3

                                                      From roughly 1995 to 2005, I hosted my various sites and services on computers that were physically owned and configured by people I knew personally, administered by us jointly. Owning my own server hardware was too cost-prohibitive for me back then, but I think it still counts as self-hosting in every meaningful way.

                                                      I realize that I was far, far ahead of the curve, especially as compared to other penniless teenagers. My experience was a very rare one. Nonetheless, it was available for some people.

                                                      1. 2

                                                        Oh, I agree it was around for a long time - when I went to university the computer society had a machine (the successor to the one on which the Linux TCP stack was developed) that hosted email and web pages for everyone. I became an admin on that and used it to host most of my stuff until I could afford a colocated Mac Mini running OpenBSD, which I later replaced with a VPS and moved between a variety of different providers.

                                                        I was only disagreeing with the ‘there was a time when everything was “self-hosted”.’ in @emery’s post.

                                                        1. 2

                                                          Makes sense. Yeah, there used to be a lot more much smaller websites, and a lot more community around them, but that’s not the same as everything being self-hosted, and it’s worth keeping the distinction in mind.

                                                      2. 1

                                                        Prior to the Internet, you’d have BBSs. In both cases, you’d connect to someone else’s computer to do things.

                                                      “someone else’s computer”, not A) Google’s or B) Facebook’s computers.

                                                        1. 6

                                                          Instead it was AOL and your ISP’s.

                                                          1. 1

                                                            What are you talking about? In the days of dial-up BBSs people were running the software on their home PCs…

                                                            1. 1

                                                            Even if you consider the established/notable BBSs, this is a very different picture than we have today, or even than your alleged “AOL and your ISP’s [sic]” theory.

                                                              https://en.wikipedia.org/wiki/List_of_bulletin_board_systems

                                                              1. 2

                                                                I think it’s very telling everyone abandoned BBSes immediately when something better came along.

                                                                1. 1

                                                                Okay, but that’s a different argument.

                                                        2. 5

                                                          I only started “self-hosting” (debatable, since the host is a VPS on Digital Ocean) 5 years ago, when the friend who was hosting my stuff had to stop due to workplace changes. Before then I’ve always been running on other servers or hosts (I’ve had a web presence since 1994).

                                                          There is no freaking way I am gonna go through the hassle of actually hosting physical servers in my home with all the work that entails. Not for a web page and some measly CGI scripts.

                                                        1. 4

                                                          Sometimes I really think that Python should have stayed a teaching language.

                                                          1. 5

                                                            In my opinion, reproducible-builds would be a more generally applicable & useful tag. Enabling reproducible builds is the main attribute of interest that Nix/Guix/Tvix have in common. “Nix” just happens to just be the frontrunner, and need not be the namesake. repro-builds could be used instead if something shorter is desired.

                                                            However, after reading the comments, I agree that nix is the most mature suggestion. Perhaps the tag can be updated down the line if it turns out Nix isn’t the only “Nix-like” sparking discussion here on Lobste.rs.

                                                            1. 5

                                                                The term “reproducible builds” means different things to different people, almost to the point that it’s completely devoid of usefulness.

                                                              1. 4

                                                                For as long as I’ve been involved in NixOS I thought that we were avoiding the term “reproducible-builds” and using “deterministic-builds” to avoid making the vague promises that come with “reproducibility”.

                                                                1. 1

                                                                  Ah, thanks for the clarification!

                                                              2. 2

                                                                Sidebar: It’s true these share other traits, such as a declarative configuration language. However, these are just common solutions to the same problem. Another system using a different method would also belong in this category.

                                                                1. 2

                                                                  While I understand your approach, I want to suggest something more fundamental which they have in common, but which e.g. reproducible Debian does not: Nix and friends treat package references as capabilities, so that we get a sort of “package-capability” system. Just like how object-capability systems prevent certain kinds of unauthorized behavior at a structural level, Nix prevents certain kinds of hygiene problems in upstream packages from becoming security issues for downstream deployments.

                                                                1. 3

                                                                  This is so great. The bus factor on the Nix tool is ~1, and this has been causing lots of issues.

                                                                  Is this meant to be flakes-aware?

                                                                  Also, what’s it being implemented in? If there’s any code available, I couldn’t see it - navigating the SourceGraph-based forge on mobile was an exercise in frustration.

                                                                  1. 6

                                                                    Is this meant to be flakes-aware?

                                                                    We’re not planning to support experimental features for now beyond what is required for nixpkgs, but note that experiments like flakes can also be implemented in the Nix language - you don’t need tooling support for them.

                                                                    Also, what’s it being implemented in?

                                                                    The evaluator is in Rust, store/builder are undecided.

                                                                    navigating the SourceGraph-based forge on mobile was an exercise in frustration

                                                                    There’s also a cgit instance at https://code.tvl.fyi - we have a search service that takes a cookie for redirecting to the preferred code browser.

                                                                    1. 1

                                                                      I think it’s a shame that Flakes are still considered experimental.

                                                                      1. 3

                                                                        I disagree, but opinions are split on this :-)

                                                                        1. 4

                                                                          what are the problems you see with flakes?

                                                                    2. 1

                                                                      I think I recall reading on Matrix earlier that whatever current source is available is from the ~failed fork attempt, and that the reimplementation would likely be in rust?

                                                                      (For that matter, I also saw the suggestion that they may be over-estimating how readily ~compatible their effort may be with guix–though I don’t recall for sure if that was on Matrix or orange site.)

                                                                      1. 2

                                                                        how readily ~compatible their effort may be with guix

                                                                        You’re probably referring to this thread. The thing is that we don’t explicitly intend to be compatible with the derivation files, but the fundamental principle of Guix (language evaluation leads to some sort of format for build instructions) has - to our knowledge - not diverged so much that the two are no longer conceptually compatible.

                                                                        Note that we haven’t really sat down with anyone from Guix to discuss this yet, for now our efforts are focused on Nix (and personally I believe that a functional, lazy language is the right abstraction for this kind of problem).

                                                                    1. 3

                                                                      +1 for nix-family or just nix

                                                                      1. 6

                                                                        Even though I believe OpenPGP is dead (and dangerous), I’m willing to give credit where credit is due and recognize that Sequoia PGP doesn’t completely suck. sq(1) is significantly easier to use than gpg(1).

                                                                        1. 3

                                                                          Why do you believe it’s dead (and “dangerous”)?

                                                                            1. 3

                                                                              It’s probably a good idea to link also to this: https://articles.59.ca/doku.php?id=pgpfan:tpp

                                                                            2. 6

                                                                              The cryptographic community is moving away from PGP, for a host of reasons.

                                                                              1. The UX of GPG is a nightmare.
                                                                                2. As a result, the GPG documentation is incredibly lengthy. The gpg(1) manpage alone is 56+ pages, while age(1)’s is 1+. The OpenPGP specification spans several RFCs, while the age spec is a single Google doc.
                                                                              3. Until just recently, GnuPG only used weak KDFs, and it still defaults to KDF_ITERSALTED_S2K. age(1) has always used scrypt.
                                                                              4. GnuPG’s MDC authentication is broken. age(1) ciphertext is authenticated with ChaCha20-Poly1305.
                                                                              5. OpenPGP keys are not forward secret. There is an RFC draft (another one!), but it’s not without problems. However, age(1) keys are also not forward secret.
                                                                              6. OpenPGP defines and supports old primitives, including RSA, DSA, MD5, SHA-1, IDEA, 3DES, CAST5, El Gamal, RIPEMD160, and many more. age(1) is highly opinionated, supporting only modern primitives of X25519, HKDF-SHA256, HMAC-SHA256, scrypt, RSAES-OAEP_SHA256, and the system CSPRNG.
                                                                              7. OpenPGP leaks metadata. age(1) only leaks its version format and public keys of recipients.
                                                                              8. The SKS keyservers are vulnerable to a plethora of vulnerabilities, thus killing the Web of Trust, killing the PGP Strong Set, and ultimately GnuPG ignoring all signatures coming from a keyserver. age(1) doesn’t and will not support signing or key distribution.
                                                                              9. The goal of PGP was to protect email, and it turns out, email can’t be effectively protected. Email is fundamentally unsecurable. age(1) is strictly for encrypting files to other people (or yourself) using their (or your) public key, and that’s it.

                                                                              GPG is fine for file encryption, although not without its warts and footguns. It really doesn’t offer you anything you can’t already do with age(1) for file encryption and ssh-keygen(1), signify(1), or minisign(1) for certificates, signatures, and verification.
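
                                                                                For comparison, the entire age workflow is three commands (the key filename is arbitrary, and age1... stands for a real recipient public key):

                                                                                $ age-keygen -o key.txt                         # prints the matching public key
                                                                                $ age -r age1... -o notes.txt.age notes.txt     # encrypt to a recipient
                                                                                $ age -d -i key.txt notes.txt.age > notes.txt   # decrypt with the identity file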

                                                                              1. 6

                                                                                Sounds to me like mostly problems with GPG, not so much with OpenPGP.

                                                                                Hence Sequoia.

                                                                                1. 5

                                                                                    I don’t think they’re entirely separate. Up until Sequoia, the OpenPGP standard meant using GnuPG. It’s not uncommon for lay GnuPG users to confuse the standard with the implementation. OpenPGP = GnuPG for most of the PGP community.

                                                                                  Sure, there were other tools that weren’t frontends to GPG that implemented the OpenPGP standard, but they failed to gain any real traction from what I can see (I’m not sure PGP gained any real traction in the broader security community anyway). I think OpenBSD tried their hand at an OpenPGP implementation? I don’t recall what it is/was. I know they have signify for signing and verifying packages (which isn’t OpenPGP), but I don’t know if they implemented an encryption/decryption tool. NeoPG gave up due to the complexity of the OpenPGP standard and “competing” with GPG.

                                                                                  And as mentioned in my original reply, I really like Sequoia. It’s just unfortunate it took the EFAIL vulnerability to kind of kick OpenPGP in the ass and see if the greater PGP community can’t do much, much better. The Sequoia PGP library, the OpenPGP CA web of trust, and 3rd party online identity proofs like Keybase and Keyoxide are definitely improving the landscape. Just the fact that Sequoia is taking the good parts out of the standard, while refusing to implement the footguns and “90s crypto” is enough to take note of.

                                                                                2. 4

                                                                                  OpenPGP defines and supports old primitives, including RSA, DSA, MD5, SHA-1, IDEA, 3DES, CAST5, El Gamal, RIPEMD160, and many more. age(1) is highly opinionated, supporting only modern primitives of X25519, HKDF-SHA256, HMAC-SHA256, scrypt, RSAES-OAEP_SHA256, and the system CSPRNG.

                                                                                  This in particular is something I hear a lot, but I don’t buy the argument that this is inherently a problem. It makes sense that you would want to be able to decrypt files from 10+ years ago without having to try and compile an ancient version of encryption software. As long as the applications and tooling you use do not use these older ciphers and primitives unless you explicitly tell them to, then I see no problem.

                                                                                  Sequoia even goes further than having sane defaults, by implementing a policy mechanism to decide which ciphers and features are allowed when invoked. This means the tool or application you are using will be able to ensure that Sequoia will not use any cipher except for the ones specified by said application/tool.

                                                                                  1. 2

                                                                                    This in particular is something I hear a lot, but I don’t buy the argument that this is inherently a problem. It makes sense that you would want to be able to decrypt files from 10+ years ago without having to try and compile an ancient version of encryption software. As long as the applications and tooling you use do not use these older ciphers and primitives unless you explicitly tell them to, then I see no problem.

                                                                                    The problem is that the standard doesn’t clearly draw a line in the sand on what is safe and what isn’t. E.g., modern cryptography is moving away from RSA towards ECC, due to the footguns RSA ships that ECC does not. Standards should be expired as vulnerabilities are discovered and modern replacements are agreed upon, and software should sunset support for the expired standards at new version releases.

                                                                                    This is why age(1) is so attractive: it’s highly opinionated about its cryptographic primitives, rather than supporting cipher agility. If a catastrophic vulnerability is found in ChaCha20-Poly1305, it’s trivial for age to release a new software version that swaps out ChaCha20-Poly1305 for a more secure cipher, with the clear announcement that you should migrate your old ciphertexts and that older versions will no longer be supported.

                                                                                    1. 1

                                                                                      Standards should be expired as vulnerabilities are discovered and modern replacements are agreed upon, and software should sunset support for the expired standards at new version releases.

                                                                                      I’ll back this up with an example. In this issue on GitHub, Filippo proposes deprecating and/or removing the following from the Golang x/ libraries (which are “officially supported” but not part of the standard library):

                                                                                      • Blowfish
                                                                                      • bn256
                                                                                      • CAST
                                                                                      • MD4
                                                                                      • OTR
                                                                                      • RIPEMD160
                                                                                      • TEA / xTEA
                                                                                      • Twofish
                                                                                      • XTS
                                                                                      • OpenPGP
                                                                                  2. 2

                                                                                    It appears you treat gpg and OpenPGP as identical. This is not the case. And some of your arguments appear to just stem from the fact that OpenPGP has a multiple-decade-long history. Of course you will find outdated methods here, and of course implementations still support them, because users should still be able to decrypt their old data.

                                                                                    Furthermore, I am not sure if sufficiently satisfying Forward Secrecy is even possible with a heavily asynchronous communication medium (E-Mail). The conditions are different when compared with, e.g., Instant Messaging.

                                                                                    1. 2

                                                                                      Unfortunately downgrade attacks are a real and annoying thing, as well as security issues in very old and not-often-used code. Throwing out that whole part and maybe providing a “gpg_with_old_stuff” separately if you really need it may not be a bad idea.

                                                                                    2. 2

                                                                                      I definitely agree with you. But to be fair, age(1) does not have any authentication built into it, which, while I love age in general, I think is an oversight, and why I don’t think it should be considered a drop-in replacement for GPG for something like email encryption.

                                                                                      Also, keys are described as “ephemeral”. I don’t use GPG, but I quite like the idea of just putting a pubkey on my website and leaving it at that. Again, another reason why I don’t think you should use it for long-term email encryption.

                                                                                      Edit: I suppose I didn’t consider that some people say that you should rotate asymmetric keys frequently (e.g. SSH keypairs), in which case I suppose age’s small and ephemeral keys would probably be a plus.

                                                                                      1. 3

                                                                                        The meta-point of this post (mentioned by me in a sibling comment) is that PGP tries to do a bunch of stuff with one application, and is mediocre at most of them.

                                                                                        Back in the MC Hammer era from which PGP originates, “encryption” was its own special thing; there was one tool to send a file, or to back up a directory, and another tool to encrypt and sign a file. Modern cryptography doesn’t work like this; it’s purpose-built. Secure messaging wants crypto that is different from secure backups or package signing.

                                                                                        1. 3

                                                                                          Yeah I totally agree with that, I’m firmly in the “GPG needs to die” camp.
                                                                                          But am I wrong in saying that the main purpose of GPG these days is [supposed to be] email encryption? My point is I don’t think age alone satisfies that.

                                                                                      2. 2

                                                                                        The UX of GPG is a nightmare.

                                                                                        Agreed, but what would you expect from software modeled after PGP 2.x, released in the 90s?

                                                                                        As a result, the GPG documentation is incredibly lengthy. The gpg(1) manpage alone is 56+ pages, while age(1)’s is 1+. The OpenPGP specification spans several RFCs, while age’s is a single Google doc.

                                                                                        You may be happy to hear that the OpenPGP specification is getting “refreshed” and consolidated: https://gitlab.com/openpgp-wg/rfc4880bis#openpgp-cryptographic-refresh-of-rfc-4880

                                                                                        For some people a “Google doc spec” is enough; others like the legal protections that the IETF process guarantees. I think there are reasons why HTTP/2 and TLS are maintained at the IETF, and why a Google doc would not be enough there.

                                                                                        Until just recently, GnuPG only used weak KDFs, and it still defaults to KDF_ITERSALTED_S2K. age(1) has always used scrypt.

                                                                                        Yep, definitely. GnuPG should improve on that part. Still, storing private keys in software is a bad idea in general.
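
                                                                                        (For what it’s worth, you can force stronger S2K parameters yourself for passphrase-based encryption; a minimal sketch, with option names per gpg(1), so verify them against your GnuPG version:)

                                                                                          # illustrative: iterated+salted S2K with SHA-512 and the
                                                                                          # maximum iteration count when symmetrically encrypting a file
                                                                                          gpg --symmetric \
                                                                                              --s2k-mode 3 \
                                                                                              --s2k-digest-algo SHA512 \
                                                                                              --s2k-count 65011712 \
                                                                                              secrets.txt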

                                                                                        GnuPG’s MDC authentication is broken. age(1) ciphertext is authenticated with ChaCha20-Poly1305.

                                                                                        MDC was a product of its time, and work is underway on authenticated encryption in OpenPGP.

                                                                                        OpenPGP keys are not forward secret. There is an RFC draft (another one!), but it’s not without problems. However, age(1) keys are also not forward secret.

                                                                                        In some use-cases forward secrecy is a natural fit (like instant messaging); in others, like long-term encrypted backups, not so much. Trivia: double-ratchet protocols can be built on top of OpenPGP too: https://sequoia-pgp.gitlab.io/openpgp-dr/openpgp_dr/index.html

                                                                                        OpenPGP defines and supports old primitives, including RSA, DSA, MD5, SHA-1, IDEA, 3DES, CAST5, El Gamal, RIPEMD160, and many more. age(1) is highly opinionated, supporting only modern primitives of X25519, HKDF-SHA256, HMAC-SHA256, scrypt, RSAES-OAEP_SHA256, and the system CSPRNG.

                                                                                        Then how does encryption to SSH keys that are RSA-based work? E.g. I just tried age -R ~/.ssh/id_rsa.pub -a message and it worked just fine.

                                                                                        OpenPGP leaks metadata. age(1) only leaks its version format and public keys of recipients.

                                                                                        Interesting, because OpenPGP packet containers contain the version and key IDs too. And the link you added says just that, so isn’t this similar to age?
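
                                                                                        (Easy to check yourself, by the way; gpg will dump a message’s packet structure, recipient key IDs included:)

                                                                                          # list the packet structure of an encrypted message; the
                                                                                          # pubkey-enc packets carry the recipient key IDs in the clear
                                                                                          gpg --list-packets message.asc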

                                                                                        The SKS keyservers are vulnerable to a plethora of vulnerabilities, thus killing the Web of Trust, killing the PGP Strong Set, and ultimately GnuPG ignoring all signatures coming from a keyserver. age(1) doesn’t and will not support signing or key distribution.

                                                                                        Web of Trust still works in small circles of technical people; for example, see the kernel.org keysigning map. Note that even if SKS is broken it doesn’t mean that the WoT is broken, since the signatures are “decentralized” (it’s possible to reconstruct the graph from SKS dumps). Some distros use WoT “circles” like that too, see the Arch Master Keys or Gentoo developer keys.

                                                                                        Age not supporting, or even advertising, any way for keys to be retrieved or checked is avoiding an issue, not a feature. I understand they don’t want to bloat the “main” project, but this needs to be addressed somehow.

                                                                                        The goal of PGP was to protect email, and it turns out, email can’t be effectively protected. Email is fundamentally unsecurable. age(1) is strictly for encrypting files to other people (or yourself) using their (or your) public key, and that’s it.

                                                                                        Well, I did use age over e-mail the other day with a friend. It still leaked metadata through e-mail headers. Does that mean age is broken? Or if I say “no, PGP is only for file encryption” will the e-mail issue disappear? I don’t think so…

                                                                                        My 2 cents: the PGP ecosystem has stagnated over the years. Other protocols, such as HTTP and TLS, were properly upgraded for the 21st century and even got some exciting features (like Certificate Transparency). I believe it’s still possible to modernize PGP, and tools such as age show how it should be done. I just hope that one-party-controlled solutions like age and Signal are not the last word.

                                                                                        1. 2

                                                                                          what would you expect from a software modeled after PGP 2.x released in the 90s?

                                                                                          Slow but serious improvements, either via new commands plus deprecation (like git switch) or by redoing the interface (ifconfig -> iproute2). It’s possible, and it was ignored.
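
                                                                                          (Roughly the kind of migration I mean, sketched with the standard commands:)

                                                                                            # deprecation via a new command:
                                                                                            git checkout -b topic      # old spelling
                                                                                            git switch -c topic        # newer equivalent
                                                                                            # or redoing the interface outright:
                                                                                            ifconfig eth0 up           # legacy net-tools
                                                                                            ip link set eth0 up        # iproute2 equivalent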

                                                                                          Then how does encryption to SSH keys that are RSA-based work?

                                                                                          RSAES-OAEP_SHA256 was on the list.
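
                                                                                          For the record, the full round trip looks something like this (assuming an existing ~/.ssh/id_rsa keypair; flags as documented by age):

                                                                                            # encrypt to the RSA SSH public key (ASCII-armored output)
                                                                                            age -R ~/.ssh/id_rsa.pub -a -o message.age message.txt
                                                                                            # decrypt with the matching private key as the identity
                                                                                            age -d -i ~/.ssh/id_rsa -o message.out message.age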

                                                                                  1. 5

                                                                                    Anyone here switched to alacritty and can tell the difference? I’d be interested to hear more “success stories”. I personally found it to be too barebones and hard to configure. Probably also due to the fact that I like my current terminal’s tab feature.

                                                                                    Anyone who switched from a tabbed terminal emulator? Or are all the converts already heavily invested in tiling window managers and screen/tmux workflows?

                                                                                    1. 14

                                                                                      Just use what you like if it’s not broken. Don’t worry about switching to a new terminal emulator just because it outperforms your mainstay on paper. My 2 cents.

                                                                                      1. 2

                                                                                        I’m using it every day and here are the features that I love:

                                                                                        • cross-platform and with configuration in a single yaml file (that can be templated and easily shared with other computers in git; see the sketch at the end of this comment);
                                                                                        • it’s fast without using a lot of resources;
                                                                                        • it doesn’t have a lot of bloat everywhere (for example the tabs you mention).

                                                                                        I personally use tmux, so I don’t feel the need for tabs (nor multiple terminal windows). I don’t use any tiling WM, but tmux helps a lot to bridge this gap in the terminal. Before that I was using iTerm2 at work, for example, but it wasn’t portable between the 3 different OSes I use (Windows, macOS and Linux). The configuration was painful because all the different terminal emulators were using a different set of features, with different defaults…

                                                                                        Now I have a unified experience without any drawback from my point of view.
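
                                                                                        As a taste of that single config file, a minimal sketch (key names are from Alacritty’s YAML-config era; newer releases have since moved to TOML, so check your version):

                                                                                          # sketch: the whole terminal config lives in one version-controlled file
                                                                                          mkdir -p ~/.config/alacritty
                                                                                          cat > ~/.config/alacritty/alacritty.yml <<'EOF'
                                                                                          font:
                                                                                            size: 11.0
                                                                                          window:
                                                                                            padding: { x: 4, y: 4 }
                                                                                          EOF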

                                                                                        1. 1

                                                                                          I’ve been using it for a few years on Linux + macOS (work laptop). It’s good to have a consistent terminal even when switching OSes. I can definitely notice that tailing logs and other things are faster. Although there’s a learning curve and some sacrifices involved, like “no tabs”. I have also always used tmux (with tmuxinator) heavily for other features. Thus I gradually weaned myself off tabs pretty much everywhere: terminal, text editor (emacs), and currently trying out nyxt (browser).

                                                                                          In case you still want tabs, you can also look at WezTerm: https://wezfurlong.org/wezterm/ which is also GPU-accelerated and cross-platform, but has tabs.

                                                                                          1. 8

                                                                                            Kitty (https://sw.kovidgoyal.net/kitty/) is another nice cross platform GPU terminal with tabs

                                                                                          2. 1

                                                                                            I switched from iTerm2 on macOS maybe 4 months ago, and I had never used tmux before. After customizing the Alacritty theme using the provided Dracula colors, setting up tmux with a sweet setup, and managing all my things through nix-darwin (I was already doing this), I’m super happy with it and love it. Here are a couple of pics from a while ago: https://twitter.com/RobertWPearce/status/1430672069042319360

                                                                                            1. 1

                                                                                              Same here, no tabs is the showstopper for me. I also use Konsole’s profile option for switching between light/dark themes. Yeah, of course you can just use tmux, but I’m using a desktop with a real GUI, so I want real tab integration, not something that exists only inside the virtual terminal. Meanwhile I can just put tabs into a new window, move them side by side, etc.

                                                                                            1. 1

                                                                                              Most of the problems that people in this thread recall having when they tried Tox years ago do not apply anymore. With qTox, calls, file transfers, and group chats all work.

                                                                                              It’s nice to have a chat system that cares about metadata protection, and has no option to turn off encryption.

                                                                                              1. 2

                                                                                                group chats

                                                                                                Glad to see they are finally implemented.

                                                                                                1. 1

                                                                                                  I tried Tox a few years ago and there were two big problems:

                                                                                                  • I couldn’t easily move a conversation between devices. There were plans to add this to the protocol, but they were all in the design phase.
                                                                                                  • The peer-to-peer nature of the protocol made it very battery-intensive on mobile devices.

                                                                                                  There were some smaller problems as well. I don’t think the protocol had had a rigorous security review by any cryptographers, and the UI for setting up an account and connecting to contacts was a bit much for most non-technical folks.

                                                                                                  I looked at Tox, GNU Ring, XMPP (again), and Signal at the time, and Signal was the only one that met all of my requirements (in no particular order):

                                                                                                  • Desktop and mobile clients with feature parity between them.
                                                                                                  • Conversations can move between devices without the remote party needing to do anything.
                                                                                                  • Protocol subject to external adversarial security review.
                                                                                                  • Complete data and metadata encryption.
                                                                                                  • A setup process I was confident that my mother could do without in-person assistance
                                                                                                  • Mobile clients that won’t drain the battery or otherwise annoy people I want to talk to who use their phone in preference to a desktop for messaging

                                                                                                  Tox came second, but since I started using Signal I’ve been incredibly happy with it (in particular, how easy it is for folks who have used WhatsApp to install it and use it instead), and an alternative would need a compelling reason to make me switch. Tox doesn’t provide one.

                                                                                                  1. 1

                                                                                                    Yes, Signal is more similar to WhatsApp. It doesn’t have the same degree of metadata protection as Tox, and it’s centralized.

                                                                                                    If we want better systems, then they have to start somewhere.

                                                                                                    1. 1

                                                                                                      It doesn’t have the same degree of metadata protection as Tox,

                                                                                                      In what way? The only metadata that is not protected in Signal is the last time you connected to the network. In Tox, if I remember the protocol correctly, this is visible to anyone who can get your public key.

                                                                                                      and it’s centralized.

                                                                                                      It has a centralised component, but it is designed such that this component is outside the TCB (trusted computing base) for confidentiality and integrity; it is trusted only for availability. As such, it combines the reliability of a centralised system with the security of a peer-to-peer one.

                                                                                                      If we want better systems, then they have to start somewhere.

                                                                                                      They did, and that’s what I’m using.

                                                                                                      1. 1

                                                                                                        In what way? The only metadata that is not protected in Signal is the last time you connected to the network. In Tox, if I remember the protocol correctly, this is visible to anyone who can get your public key.

                                                                                                        I can’t speak to Signal’s metadata protection. I’ve heard various claims (some contradictory) about what information the server can and cannot observe, and I am not in a position to audit the source code myself.

                                                                                                        With Tox you have short-term keys used in the DHT which are not connected to your long-term keypair, so knowing these keys does not reveal anything except that “some node” is participating in the DHT. There is also onion routing, which, in combination with the aforementioned temporary keys, allows nodes to make and respond to friend requests without other nodes knowing where they are (or what IP they are connecting from).

                                                                                                        It has a centralised component but it is designed such that this component is out of the TCB for confidentiality or integrity, only for availability. As such, it combines the reliability of a centralised system with the security of a peer-to-peer one.

                                                                                                        What do you mean by “reliability of a centralised system”? Signal has gone down multiple times in the past year, while I’ve continued using Tox… I find this design more reliable. Maybe you mean something else.

                                                                                                        Also, “[…] the security of a peer-to-peer one”: I don’t think p2p systems are inherently secure. I think Tox has made a lot of progress in making their p2p system secure. I also don’t think Signal is in any way a p2p system, so again, not sure what you mean by this.

                                                                                                        As an aside: I don’t hate Signal, or try to dissuade anyone from using it. I tell most anyone who asks me what messenger to use to simply use Signal. I think it’s not up for argument that it’s made it much further than any encrypted-messenger has before, and its encryption protocol is great. I just don’t think it’s the be-all-end-all of privacy-preserving messaging, and my only true complaint is that I think it’s made people stop striving for more because it’s so good.

                                                                                                1. 2

                                                                                                  I remember this being a /g/ project way back when, but it seems like Matrix ate their lunch at a certain point. Good to see it’s still in development!

                                                                                                  1. 3

                                                                                                    Matrix is not a replacement for Tox.

                                                                                                    • Tox has no home-servers. Identities are generated locally.

                                                                                                    • Encryption is built into the protocol. It is not bolted on later or optional.

                                                                                                    • It is peer-to-peer instead of federated.

                                                                                                    • Calls actually work much more often than they do with Matrix (anecdotal).

                                                                                                    • Tox has several clients not written in Javascript.

                                                                                                    1. 3

                                                                                                      I’m pleasantly surprised a /g/ project made it past the “I’ll make the logo” phase into something actually usable.

                                                                                                    1. 4

                                                                                                      There are many similarities between the two, though I don’t have any experience with Nix, so I can’t comment further here.

                                                                                                      I don’t understand why one would use Guix without trying Nix first.

                                                                                                      1. 9

                                                                                                        What I gathered is that the author has a preference for Scheme and thus also for Guix, which I think is a sufficient argument.

                                                                                                        However, I’d be very interested in reading an in-depth comparison between the two projects, as they target the same niche and are quite related.

                                                                                                        1. 3

                                                                                                          This is basically my position. I’ve used Nix a couple of times, but I find Guix preferable.

                                                                                                        2. 7

                                                                                                          Early on Nix had the opposite problem. You would ask it to install Firefox, and completely unprompted it would install the Adobe Flash plugin to go with it. They told me if I wanted Firefox without that awful shit to install a separate “firefox-no-plugins” package or something ridiculous like that.

                                                                                                          It hasn’t done that for a while, but it’s taken like a decade to recover from the lost trust. I couldn’t handle the idea of running an OS managed by people capable of making such a spectacularly bad decision.

                                                                                                          1. 2

                                                                                                            Doesn’t that depend on your affinities? If you like lisp languages then Guix seems like a logical choice. I guess it also depends on what software you depend on in your day-to-day activities.

                                                                                                            1. 5

                                                                                                              There is more to consider here:

                                                                                                              Guix is a GNU project, with the unique lenses and preferences that come along with that. It is also a much smaller community than Nix, which means fewer packages, fewer eyes on the software, and, I would argue, less diversity of thought.

                                                                                                              I personally prefer Scheme to Nix’s DSL-ish language, but as a project I think Nix is in a much better position to deliver on this level of ambition.

                                                                                                              It also frustrates me that Guix tries to act like it’s not just a fork of Nix, when in reality it is, and it would be better to embrace this and try to collaborate and follow Nix more closely.

                                                                                                              Unfortunately the GNU dogma probably plays a role in preventing that.

                                                                                                              1. 4

                                                                                                                It is also a much smaller community than Nix, which means fewer packages

                                                                                                                If you convert Nix packages to Guix packages, you can get the best of both worlds in Guix. But that’s admittedly not a very straightforward process, and guix-import is being/has been removed due to bugginess.

                                                                                                            2. 2

                                                                                                              I’m aware of both, but I tried Guix first because people I know use it, I like the idea of using a complete programming language (even if I dislike parens), and the importers made getting started easy. Also guix is just an apt install away for me.

                                                                                                            1. 1

                                                                                                              Consider making a link to the source code more prominent.

                                                                                                              1. 3

                                                                                                                I’m probably going to release the source, as mentioned, but there’s some stuff I’d like to clean up before I do.

                                                                                                              1. 2

                                                                                                                Very interesting. I want one even if I don’t really have an application for it. If it had ECC RAM, one could build a very nice NAS with this.

                                                                                                                1. 1

                                                                                                                  Why do you need ECC RAM for a NAS?

                                                                                                                  1. 2

                                                                                                                    I would use software RAID, and if there are RAM problems there you can run into inconsistencies. Also, if the data arriving at the machine is corrupted in RAM before being “handed” to your fancy checksumming file system, then you are out of luck.

                                                                                                                    Although, in researching my reply I found an excerpt of the BSD Now podcast talking about using ZFS without ECC, and the inventor of ZFS says it’s not too bad; ZFS is probably the best FS to use if you have to use one without ECC: https://www.youtube.com/watch?v=XMXUUWgXzLY&t=492s
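
                                                                                                                    (And if you do run ZFS without ECC, regular scrubs at least surface on-disk corruption early; a sketch, assuming a pool named “tank”:)

                                                                                                                      # walk every block in the pool and verify it against its checksum
                                                                                                                      zpool scrub tank
                                                                                                                      # then report repaired blocks and any unrecoverable errors
                                                                                                                      zpool status -v tank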

                                                                                                                    1. 1

                                                                                                                      y not?

                                                                                                                      1. 1

                                                                                                                        Bits really do flip at random.

                                                                                                                        Those well aware of this tend to have difficulty sleeping at night w/o ECC.