Threads for sjamaan

    1. 7

      This might be the final nail in the coffin. Many people have already given up on PGP due to its usability problems. Splitting the remaining “community” in two over technical implementation differences will not make things any better.

      1. 2

        This might be the final nail in the coffin.

        There is probably a coffin in the making, but it is not OpenPGP’s coffin. Let me elaborate:

        The achieved degree of usability is mostly decided by the implementation; the specification has a lesser impact on usability. My user experience with GnuPG was and is far below average, even though I would consider myself knowledgeable when it comes to OpenPGP. My impression is that many people struggle with GnuPG’s interface and I haven’t noticed any improvement in GnuPG’s usability in the last decade.

        On the other hand, there are OpenPGP implementations that strive for a good user interface. Sequoia PGP and PGPainless, for example. Those implementations were partly a result of the huge usability gap that GnuPG left unfilled for many years. And most of those implementations are backing draft-ietf-openpgp-crypto-refresh (i.e., the alternative to LibrePGP).

        If GnuPG wants to go along with LibrePGP, then this is fine by me, as it will just further isolate GnuPG from the rest of the ecosystem. LibrePGP appears to me like a nail in GnuPG’s coffin.

    2. 7

      This was an interesting debugging story with a very disappointing ending. They never really figured out why it was specifically that gif that triggered the bug, and how it’s even possible that a (supposedly) completely unrelated piece of software manages to break the browser (remember, they also tried with a profile without any extensions enabled).

      1. 1

        It was on HN, and people there also expressed disappointment.

        I am wondering what kind of hooks Grammarly has into the system that differ from a regular extension or plugin. Someone else mentioned that they also had a website that started crashing, and Grammarly was the culprit there as well.

    3. 5

      Very nice, it’s so much nicer to use SXML than HTML, even in the browser

    4. 4

      Yeah, that’s why “eat your own dogfood” works so well. Because then, the developer is suddenly everything at once and hence also optimizes correctly for all roles.

      1. 15

        It’s also why ‘eat your own dogfood’ works so badly, because developers are an incredibly unrepresentative set of users (ignoring all other diversity factors, they are all people who are comfortable writing software). One of the key things here is that developers tend to have a very strong overlap with the subset of the population that finds hierarchies a natural organisational structure and so builds trees where most people would prefer tags and filters.

        1. 3

          I mean, you have a point. But it’s still better than if they don’t get in touch with their own work/problems at all. :-)

          1. 6

            Doing UAT with an unrepresentative set is better than not doing UAT at all. The main benefit of dogfooding is that the people that get annoyed by bugs are able to fix them.

            It was very annoying at Microsoft that I got stuck on buggy pre-release versions of software but didn’t have a communication channel to the team responsible for writing them to work through bugs. The only product where my running the pre-release version actually improved things was the terminal, because they use a GitHub issue tracker and Dustin would triage my bugs and help get them fixed.

        2. 3

          I like your reminder that developers don’t represent general users. Most dogfooding takes an even worse subset, namely only the developers who wrote the software. They are even less representative.

          Who experiences the software like users do?

          users > general devs >> devs who built this
          

          Too much background knowledge makes the devs who wrote the software too different from the users: their dogfooding experience will help, but is not representative.


          Same for system administration, by the way. DevOps responsibilities make developers eat their own ‘operator dogfood’, but the devs’ knowledge can get in the way of designing for operability.

          Who really experiences how hard the system can be to maintain?

          new admins > familiarized admins >> devs who built this
          

          Again, too much background knowledge makes the devs too different from the sysadmins: their dogfooding experience will help, but is not representative.


          Eat no dogfood, and you certainly won’t improve your dogfood. So eat your dogfood.

          But be aware you’ll stop tasting your own dogfood and start liking it. So invite real users and sysadmins, and carefully observe how they like your dogfood.

          1. 1

            I always thought the dogfooding was done to shake out bugs more than for the user experience. Also, in those days users were probably somewhat more technical than they are nowadays.

            1. 1

              Yeah, I’d agree that the benefits of dogfooding have always mainly been as a quality check, not a design one.

    5. 3

      I think it’s interesting that ops never shows up on the left-hand side of these little priority models, like

      ops > users > dev

      I’ve been thinking for a while now that I really want to blend “ops” and “users” into the same theoretical framework, just like the author blended “author” and “maintainer” together under “dev”.

      1. 5

        I’d say ops > users > dev would be when software isn’t updated to the newest version because it’s too hard to deploy, or when security measures cause important features to be disabled.

        I’ve seen this happen in a specific case with Sentry, where ops would stay on the old codebase because the new version had only a Docker image as the officially supported installation (see also debianized-sentry for people bending over backwards to try and keep the old way working for newer Sentries - it looks like they failed to maintain it).

        Perhaps a more common thing to see is completely locked-down workstations where the users aren’t allowed to install anything. That’s definitely a case of ops > users. But in the end that’s probably because biz > users, and maybe even other users > users - if someone installs some random piece of virus-riddled software, that affects the business and other users.

        1. 2

          Locked-down workstations are

          biz > ops > dev

          I think the “users” in this model are the end-users that have to use the company’s product, they are typically not employees.

          My vision for ops > users > dev would be like:

          • Cutting features and doing extra dev work to make the software more efficient and keep the minimum system requirements low. (Sort of like Bitcoin’s eternal 1 MB block size limit, but less politically motivated, I guess.)
          • Multiple easy options for deployment; whatever gets in the way of easy deployment gets cut.
          • Applying usability principles to make the software as easy to deploy and maintain as possible.
          • “The (economic) producer is always right”, rather than “the consumer is always right”.
      2. 1

        Isn’t that backward? Should be users > ops > dev at least

        (edit: and looks like this is in the article)

        1. 2

          That’s my point, that we sort of take it as a given that ops should always be shouldered with the burden of supporting whatever the business dreams up to sell to the all-important users.

          That assumption makes sense in a professional context where the ultimate goal is to make money, but I think it leaks out of the professional realm into foss and gray areas more aligned with hobbyists, civic engagement, etc.

          I am just saying as far as “outside of work” goes, maybe we should challenge that assumption. Or put another way, lump ops in with the rest of the users because they “use” the software by operating it.

    6. 15

      Excellent, clear, to the point. Well done.

      One minor confounding factor that might be worth exploring: users in aggregate versus users in specific, ditto for other quantities.

      I’ve seen codebases and businesses contort themselves into pretzels trying to cater to one user or use-case and ignore the larger or more relevant masses.

      1. 9

        And conversely, software becoming a space-shuttle cockpit by trying to have all features for all users, instead of focusing on a specific group.

        1. 9

          This was Steve Jobs’ superpower. He had fairly good taste and a set of requirements that overlapped those of a fair chunk of users, and he insisted that products were developed for him, not for any aggregate model of users. Sometimes this went badly (the NeXT computers required someone as rich as him to afford them, and the iPod had volume settings for someone as deaf as him and could cause ear damage), but when your requirements were close to his they were amazing products.

          1. 3

            NeXT computers required someone as rich as him to afford them

            That’s pretty unfair!

            The original NeXT was $6500, and the 68040 NeXTStation was $4995.

            In comparison, the Mac II was $5498 and the IIfx was $8969. The first computer I ever owned personally, a Mac IIcx purchased because I’d just bought a cheap Chinese 2400 bps modem at MacWorld SF and a local BBS was providing uucp-based mail and “news”, was $5369, which would have been about 2-3 months salary.

            Intel 80386 and 80486 PCs from name brand manufacturers were similar prices.

            1. 14

              The original NeXT was $6500, and the 68040 NeXTStation was $4995.

              In today’s money, $6,500 works out at $16,905.00. That’s pretty close to the annual salary (before tax) of a minimum-wage worker, and not exactly cheap for anyone else. And that was the base model; you needed a RAM upgrade and a hard disk (it came only with a magneto-optical drive) for it to be a useful machine.

              Intel 80386 and 80486 PCs from name brand manufacturers were similar prices.

              And most of these manufacturers also sold 286 and 8086 machines because the 386s were too expensive for most customers. For most of the ’90s, you could buy a fairly decent (not top of the range, but quite usable) PC for a bit over £1000. That was a big expense for most households. The cheapest NeXT machine was several times more expensive.

              NeXT sold a total of 50,000 computers over 11 years of operation. They received glowing reviews throughout that time, and so there were a lot of people who wanted them, but very few who could afford them.

              To put that number in perspective, Compaq sold 53,000 computers in their first year of operation, in 1982. They sold the first 386 systems and their sales numbers for those were only a bit higher than the NeXT Computer, but they sold almost exclusively to businesses that had existing x86 software that they needed to go faster. They made a ton of money from this, but the 286 massively outsold them in volume for several years. The 386s were high-margin machines (twice as fast as competitors, running the same software).

              The Apple II, which aimed at the mass market, was $1,298 in 1977. Inflation adjusted, that’s $2,533.88 in 1988 dollars, slightly more than a third of the price of the NeXT. The Commodore PET (released the same year) was $795. The PET sold 219,000 units - more than four times as many computers as NeXT sold in total. The PET was not even close to Commodore’s most successful machine: the C64 sold over 12 million units in a similar span of time to NeXT’s entire existence.

              1. 1

                And most of these manufacturers also sold 286 and 8086 machines because the 386s were too expensive for most customers.

                Which were not at all comparable to a NeXT, Mac II, or 386/486 PC except in the trivial Turing equivalence sense.

                Yes, 32 bit computers were very expensive in the late 80s. Most people couldn’t afford them. That doesn’t make NeXT overpriced in comparison to comparable machines, or require you to be “rich” to afford one. It simply required planning and prioritising in the same way as buying a new car.

                I sure wasn’t rich in 1989, just a couple of years out of university, but I found the money for that Mac IIcx, partly by buying a 0/0 machine (no RAM, no hard disk, I installed cheaper 3rd party parts) and greyscale 640x480 monitor and video card rather than a larger or colour setup.

                If you were using the computer to earn a living, not just as a toy, then it was well worth spending the money for a ’386 or ’030 machine. Some people with undemanding needs would get by with a ’286 or 68000, but there is no way that a C64 or Apple ][ cut it for work in the late 80s. As a toy for games, sure.

              2. 1

                I bought my first computer in 1988; it was a 286 laptop with 1 MB of RAM and cost £999, and a year later I upgraded to 2 MB for an extra £400. The only reason I didn’t buy a standard PC is that a laptop suited my itinerant lifestyle, and Apple and NeXT were out of my price range as a student…

      2. 2

        I’ve seen codebases and businesses contort themselves into pretzels trying to cater to one user or use-case and ignore the larger or more relevant masses.

        I’ve seen this happen mostly when the company wasn’t doing too well and they needed that one user’s business too badly. A company that’s doing well can (typically, not always) take a wider view and realize that they shouldn’t destroy a good product just to reel in that one customer that wants a weird-ass feature that doesn’t integrate well.

        It can also happen when the company has non-technical leadership which doesn’t listen to the tech folks regarding what a change will do to the stability or quality of a codebase.

        Of course, when both the above come together it’s a recipe for disaster.

    7. 5

      Loved that final line about CAPTCHAs.

      More seriously, it’s likely the situation will get so bad that search engine developers will have to start getting creative and figure out a way forward. These businesses can’t afford to let their search engines become so useless that people stop using them. Or some new approach for finding useful content will come up from an unexpected corner.

    8. 2

      I don’t understand how “atomically updating a file” is better than “atomically updating a file”? If the hypothetical JSON file were atomically updated on each delivery, problems like “the power might fail after only one of the modified blocks has been written out” wouldn’t happen.

      Of course, the even-more-fun method is to store all the information in filenames

      $ ls
      email1@domain  email2@domain  email3@domain
      

      and when you have delivered, just delete the file!
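
      A minimal sketch of the idea (hypothetical names, nothing qmail-specific): one empty file per pending recipient, created and removed atomically by the filesystem.

      import os

      QUEUE_DIR = "queue"  # hypothetical directory, one empty file per pending recipient

      def enqueue(recipient: str) -> None:
          # Creating the file is the "insert"; mode 'x' fails if it already exists.
          open(os.path.join(QUEUE_DIR, recipient), "x").close()

      def mark_delivered(recipient: str) -> None:
          # Unlinking is the "delete"; a crash before this point just means a retry,
          # so a delivery can be duplicated but never silently lost.
          os.unlink(os.path.join(QUEUE_DIR, recipient))

      def pending() -> list[str]:
          return os.listdir(QUEUE_DIR)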

      1. 2

        Of course, the even-more-fun method is to store all the information in filenames

        With a large number of files, that’s going to drive some filesystems into the ground (especially around the time qmail was being designed). You’d need to hash the files into a number of levels of subdirectories, and then I guess you may run into non-atomic directory traversal problems when creating or deleting the files and directories? A sketch of that hashing idea follows below.
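
        For illustration, a hedged sketch of the fan-out idea (the two-level layout and hash choice are assumptions, not anything a particular MTA prescribes):

        import hashlib
        import os

        def hashed_path(base: str, name: str, levels: int = 2) -> str:
            # Spread entries across e.g. base/3f/a2/name so no single directory
            # holds millions of entries (slow on filesystems without dirhash).
            digest = hashlib.sha1(name.encode()).hexdigest()
            parts = [digest[2 * i:2 * i + 2] for i in range(levels)]
            return os.path.join(base, *parts, name)

        path = hashed_path("queue", "email1@domain")
        os.makedirs(os.path.dirname(path), exist_ok=True)  # directory creation is where the non-atomic traversal worries start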

        1. 7

          Heh, this reminds me of two hacks from 20 or 25 years ago:

          • At Demon Internet it was common to lay out directory trees for things like web hosting or email like …/u/s/e/username or (cunning trick) the slightly more balanced reverse ordered …/e/m/a/username - there were many examples like this shared as samizdat ops wisdom in places like usenet or SAGE/LISA.

          • FreeBSD’s dirhash feature for speeding up directory traversals. The Wikipedia article omits the backstory that Ian Dowse and David Malone were sysadmins in the mathematics department at Trinity College Dublin; one of the professors was an MH user whose mail was out of control. So obviously, in this situation, the easiest way to fix the performance problems was an innovative change to the kernel which eliminated the O(n) traversals.

          Ops jobs in the 1990s were fun.

          1. 1

            Ops jobs in the 1990s were fun.

            Thankfully I was only doing that in the back half of the 90s (netlink) and early 00s (easynet) when smarter people had solved most of the problems :)

      2. 1

        Yeah, you could do some sort of write-to-tempfile-and-then-swap so that the “actual” file containing the data would only be considered if it was written completely. Of course it would be a lot slower, but it’d work the same way, and you get to use JSON or some other format that’s easier to inspect.
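
        Roughly, that dance might look like this (a minimal sketch; note it deliberately omits any fsync() calls):

        import json
        import os

        def write_state_atomically(path: str, state: dict) -> None:
            # Write the whole JSON document to a temp file in the same directory,
            # then rename it over the old file: readers see either the old version
            # or the new one, never a half-written mixture.
            tmp = path + ".tmp"
            with open(tmp, "w") as f:
                json.dump(state, f)
            os.replace(tmp, path)  # rename() is atomic on POSIX filesystems
            # No fsync() here, so after a power failure the new contents may be lost.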

        1. 4

          If you do the tempfile dance then you have to worry about the non-atomicity of data vs metadata writes: without an fsync() the tempfile’s contents are likely not to be written before the directory is updated. So if the file is a todo list (like in qmail), a crash can leave you with lost mail, whereas qmail’s design might duplicate mail but not lose it.

          Another option is to write an append-only list of completed deliveries, and when reading the list be careful about truncated entries or trailing empty blocks that might arise from a crash.
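
          A minimal sketch of that second option, assuming a line-per-delivery record with a checksum (an illustrative format, not anything qmail actually uses):

          import binascii
          import os

          LOG = "delivered.log"  # hypothetical journal of completed deliveries

          def record_delivery(recipient: str) -> None:
              line = f"{recipient} {binascii.crc32(recipient.encode()):08x}\n"
              with open(LOG, "a") as f:
                  f.write(line)
                  f.flush()
                  os.fsync(f.fileno())  # make the record durable before acting on it

          def completed() -> set[str]:
              done = set()
              try:
                  with open(LOG) as f:
                      for line in f:
                          parts = line.rstrip("\n").split(" ")
                          if len(parts) != 2:
                              continue  # truncated entry from a crash: ignore it
                          recipient, crc = parts
                          if crc == f"{binascii.crc32(recipient.encode()):08x}":
                              done.add(recipient)
              except FileNotFoundError:
                  pass  # no deliveries recorded yet
              return done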

    9. 8

      I mostly rely on psql for querying and such. It really pays off to learn its ins and outs! For performance analyses there are three nice tools I’ve used:

    10. 11

      This is maybe the first thing I’ve read about ML models this year that makes immediate sense to me. Fascinating and encouraging to think we might make sense of what’s going on in there.

      1. 7

        Agreed, this is an absolutely fantastic writeup. It’s pretty wild that this “superposition” is possible. Like a sort of compression that gets lossier as it tries to represent more concepts. That also might become a more manageable way of detecting overtraining. Who knows, maybe we’ll be able to dynamically adjust neural network sizes as we’re training them to compensate for the “noise” being generated.

    11. 14

      Is Typst able to match TeX’s quality for hyphenation and line length balancing yet? Every document I’ve seen so far looks worse than even MS Word in terms of line splitting.

        1. 10

          Look at the images in the link. This one, for example, makes hilariously bad line-breaking decisions.

          For example, it decides to break “animo” into “an- imo”. Keeping the word together but shifting it to the line below would barely have an effect on the first line, but would significantly improve readability.

          And it’s doing that in every single typst example I’ve seen so far.

          1. 2

            I think that’s a decent decision, since moving the “an” to the next line would cramp it and cause the “permagna” to be split. There is enough space in the line after to move a few characters, but I think breaking “an- imo” is better than “permag- na”.

            Of course, I’m no expert, and those are just my two cents.

            1. 5

              Regardless of the decision to break it up, it should be “a-ni-mo”, not “an-imo”.

              1. 3

                Typst uses the same hyphenation patterns TeX does. In the example, it is most likely hyphenating Latin with rules for English. Which isn’t great, but setting the language to Latin for this example also isn’t helpful in a tutorial.

              2. 1

                I’m not disagreeing, just wondering what rule should be invoked when hyphenating words (I assume in English, even if the example text is pseudo-Latin). Is it that the second part of the hyphenated word should start with a consonant?

                1. 12

                  I assume in English

                  For extra fun, English and the fork spoken on the other side of the pond have completely different hyphenation rules. In English, hyphenation decisions are based on root and stem words; in the US version they are based on syllables.

                  1. 8

                    “Two countries separated by a common language.”

          2. 1

            I’m curious about what LaTeX is doing to get better line-breaking decisions, because that isn’t something I noticed before you pointed it out. Is it a fundamental algorithmic choice related to why LaTeX is multi-pass?

            1. 20

              TeX hyphenation works as a combination of two things. The line breaking uses a dynamic programming approach that looks at all possible break points (word boundaries, hyphenation points), assigns a badness value to breaking lines at any combination of these, and minimises it (the dynamic programming approach throws away the vast majority of the possible search space here). Break points each contribute to badness (breaking between words is fine, breaking at a hyphenation point is worse; I think breaking at the end of a sentence is better, but it’s 20 years since I last tried to reimplement TeX’s layout model). Hyphenation points are one of the inputs here.

              The way that it identifies the hyphenation points is particularly neat (and ML researchers recently rediscovered this family of algorithms). They build short Markov chains from a large corpus of correctly-hyphenated text that give you the probability of a hyphenation point being in a particular place. They then encode exceptions. I think, for US English, the exception list was around 70 words. You can also manually add exceptions for new words. The really nice thing here is that it’s language agnostic. As long as you have a corpus of valid words, you can generate a very dense data structure that lets you hyphenate any known word correctly and hyphenate unknown words with high probability.
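
              To make the first part concrete, here is a toy sketch of the dynamic-programming idea (word-boundary breaks only, a quadratic badness for leftover space, none of TeX’s real penalty system):

              def break_paragraph(words, width):
                  # Toy Knuth-Plass-style breaker: minimise the summed "badness" over the
                  # whole paragraph instead of filling each line greedily.
                  n = len(words)
                  INF = float("inf")
                  best = [INF] * (n + 1)   # best[i]: minimal badness for laying out words[i:]
                  best[n] = 0.0
                  choice = [n] * (n + 1)   # choice[i]: where the line starting at word i ends

                  for i in range(n - 1, -1, -1):
                      length = -1
                      for j in range(i + 1, n + 1):
                          length += len(words[j - 1]) + 1  # words i..j-1 plus separating spaces
                          if length > width and j > i + 1:
                              break
                          slack = width - length
                          badness = 0.0 if j == n else float(slack * slack)  # last line is free
                          if badness + best[j] < best[i]:
                              best[i] = badness + best[j]
                              choice[i] = j

                  lines, i = [], 0
                  while i < n:
                      lines.append(" ".join(words[i:choice[i]]))
                      i = choice[i]
                  return lines

              print("\n".join(break_paragraph("the quick brown fox jumps over the lazy dog".split(), 12)))

              The real model also weights hyphenation points, end-of-sentence breaks, stretchable inter-word glue and so on, but the overall shape of the search is the same.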

              1. 5

                All those cryptic warnings about badness 10000 finally mean something.

                1. 4

                  “underfull hbox badness 10000” haunts my nightmares

                2. 4

                  Yup, there’s a configurable limit for this. If, after running the dynamic programming algorithm, the minimum badness that it’s found for a paragraph (or any box) is above the configured threshold, it reports a warning. You can also add \sloppy to allow it to accept a higher badness to avoid writing over the margin. If you look at how this is defined, it’s mostly just tweaking the threshold badness values.

              2. 2

                I think TeX also tries to avoid rivers, right?

                1. 1

                  Yup, there are a bunch of things that contribute to badness. The algorithm is pretty general.

                  It’s also very simple. Many years ago, I had a student implement it for code formatting. You could add penalties for breaking in the middle of a parenthetical clause, for breaking before or after a binary operator, and so on. It produced much better output than clang-format.

              3. 1

                Huh, it’s surprising to me that you still need an exception list. Can you fix your corpus instead so it has a bunch of examples for the exceptions?

                1. 5

                  Some words, if added to the corpus, would still get hyphenated wrongly, but their influence on the corpus would actually decrease hyphenation accuracy for all other words as well.

                  This mostly applies to loan words as they tend to follow different hyphenation rules than the rest of the corpus.

                2. 2

                  The corpus contains the exceptions (that’s how you know that they’re there). The compressed representation is a fixed size, independent of the size of the corpus and so will always have some exceptions (unless the source language is incredibly regular in its hyphenation rules). A lot of outliers also work because they manage to hit the highest-probability breaking points and are wrong only below the threshold value.

            2. 4

              That’s exactly the reason why it has to be multi-pass, why it’s so slow and part of why TeX was created in the first place.

              TeX ranks each possible line break and hyphenation position and tries to get the best score across an entire paragraph, or even across an entire document if page breaks are involved, in contrast to MS Word, which tries to get the best score for any two adjacent lines, or Typst, which just breaks and hyphenates whenever the line length is exceeded.

              1. 18

                It’s worth noting that ‘slow’ means ‘it takes tens of milliseconds to typeset a whole page on a modern computer’. Most of the slowness of LaTeX comes from interpreting complex packages, which are written in a language that is barely an abstraction over a Turing machine. SILE implements the same typesetting logic in Lua and is much faster. It also implements the dynamic programming approach for paragraph placement. This was described in the TeX papers but not implemented because a large book would need as much as a megabyte of RAM to hold all of the state and that was infeasible.

                1. 2

                  This reminds me, I never understood why Typst got so much attention while SILE seems to be ignored. Wouldn’t SILE be an equally good replacement for the OP?

                  1. 3

                    Simon has not done a great job at building a community, unfortunately. I’m not sure why - he’s done a lot to change things for other people’s requirements but that hasn’t led to much of a SILE community. In part, he didn’t write much documentation on the internals until very recently, which made it hard to embed in other things (I’d love to implement an NSTypesetter subclass delegating to SILE. The relevant hooks were there, but not documented). This has improved a bit.

                    Without a community, it suffers from the ecosystem problem. It looks like it’s recently grown an equivalent of TeX’s math mode and BibTeX support, but there’s no equivalent of pgfplots, TikZ, and so on.

                  2. 2

                    I don’t know that much about SILE, but Typst seems to be tackling a different issue that TeX has - awful convoluted syntax.

                    SILE somewhat gets around this, to be fair - it allows for XML input, which is fairly versatile! But SILE seems more oriented toward typesetting already finished works, while Typst seems to be aiming for the whole stack, even if it has less versatile typesetting.

                    Different focuses, I guess, though I know Typst wants to improve its typesetting quality.

                  3. 2

                    I’m not familiar with either SILE or Typst, but maybe the input format is better in Typst for the OP?

              2. 3

                It is not true that Typst just hyphenates whenever the line length is exceeded. When justification is enabled, it uses the same algorithms as TeX for both hyphenation and line breaking. It’s true that hyphenation isn’t yet super great, but not because of the fundamental algorithm. It’s more minor things like selecting the best hyphenation cost, etc., and then there are some other minor things like river prevention that aren’t implemented at the moment. I agree that the hyphenation in the linked example isn’t that great. I think part of the problem is that the text language is set to English, but the text is in Latin.

    12. 38

      Sorry if I sound like a broken record, but this seems like yet another place for Nix to shine:

      • Configuration for most things is either declarative (when using NixOS) or in the expected /etc file.
      • It uses the host filesystem and networking, with no extra layers involved.
      • Root is not the default user for services.
      • Since all Nix software is built to be installed on hosts with lots of other software, it would be very weird to ever find a package which acts like it’s the only thing on the machine.
      1. 20

        The number of Nix advocates on this site is insane. You got me looking into it through sheer peer pressure. I still don’t like that it has its own programming language; it still feels like it could have been a Python library written in a functional style instead. But it’s pretty cool to be able to work with truly hermetic environments without having to go through containers.

        1. 22

          I’m not a nix advocate. In fact, I’ve never used it.

          However – every capable configuration automation system either has its own programming language, adapts someone else’s programming language, or pretends not to use a programming language for configuration but in fact implements a declarative language via YAML or JSON or something.

          The ones that don’t aren’t so much config automation systems as parallel ssh agents, mostly.

          1. 6

            Yep. Before Nix I used Puppet (and before that, Bash) to configure all my machines. It was such a bloody chore. Replacing Puppet with Nix was a massive improvement:

            • No need to keep track of a bunch of third party modules to do common stuff, like installing JetBrains IDEA or setting up a firewall.
            • Nix configures “everything”, including hardware, which I never even considered with Puppet.
            • A lot of complex things in Puppet, like enabling LXD or fail2ban, were simply a […].enable = true; in NixOS.
            • IIRC the Puppet language (or at least how you were meant to write it) changed with every major release, of which there were several during the time I used it.
        2. 15

          I still don’t like that it has its own programming language

          Time for some Guix advocacy, then?

          1. 8

            As I’ll fight not to use SSPL / BUSL software if I have the choice, I’ll make sure to avoid GNU projects if I can. Many systems do need a smidge of non-free software to be fully usable, and I prefer NixOS’ pragmatic stance (disabled by default, allowed via a documented config parameter) to Guix’s “we don’t talk about nonguix” illusion of purity. There’s interesting stuff in Guix, but the affiliation with the FSF is a no-go for me, so I’ll keep using Nix.

            1. 11

              Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.

              1. 13

                Indeed, the project whose readme starts with:

                Please do NOT promote this repository on any official Guix communication channels, such as their mailing lists or IRC channel, even in response to support requests! This is to show respect for the Guix project’s strict policy against recommending nonfree software, and to avoid any unnecessary hostility.

                That’s exactly the illusion of purity I mentioned in my comment. The “and to avoid any unnecessary hostility” part is pretty telling on how some FSF zealots act against people who are not pure enough. I’m staying as far away as possible from these folks, and that means staying away from Guix.

                The FSF’s first stated user freedom is “The freedom to run the program as you wish, for any purpose”. To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required. Looks like the FSF does not agree with me exercising that freedom.

                1. 11

                  The “avoid any unnecessary hostility” is because the repo has constantly been asked about on official Guix channels, and it isn’t official or officially supported, and so isn’t involved with the Guix project. The maintainers got sick of getting non-Guix questions. There is no “illusion” of purity with the Guix project - Guix is simply uninvolved with any unfree software.

                  To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required.

                  This is both a fundamental misunderstanding of what the four freedoms are (they apply to some piece of software), and a somewhat bizarre, yet unique (and wrong) perspective on the goals of the FSF.

                  Looks like the FSF does not agree with me exercising that freedom.

                  Neither the FSF or Guix are preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.

                  1. 5

                    Neither the FSF or Guix are preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.

                    Thanks for clarifying what I already knew, but you were conveniently omitting in your initial comment:

                    Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.

                    Using unfree software in NixOS is simpler than in Guix, because you get official documentation, and are able to discuss it in the project’s official communication channels. The NixOS configuration option is even displayed by the nix command when you try to install such a package. You don’t have to fish for an officially-unofficial-but-everyone-uses-it alternative channel.

            2. 4

              I sort of came to the same conclusion while evaluating which of these to go with.

              I think I (and a lot of other principled but realistic devs) really admire Guix and FSF from afar.

              I also think Guix’s developer UI is far superior to the Nix CLI, and I like the fact that Guile is used for everything, including even configuring the boot loader (!).

              Sort of how I admire vegans and other people of strict principle.

              OT but related: I have a 2.4 year old and I actually can’t wait for the day when he asks me “So, we eat… dead animals that were once alive?” Honestly, if he balks from that point forward, I may join him.

              1. 3

                OT continued: I have the opposite problem: how to tell my kids “hey, we try not to use the shhhht proprietary stuff here”.

                I have no trouble explaining to them why I don’t eat meat (nothing to do with “it was alive”; it’s more to help boost the non-meat diet for environmental etc. reasons. Kinda like why I separate trash.). But how to tell them “yeah, you can’t have Minecraft because back in the nineties the people who taught me computer stuff (not teachers, btw) also taught me never to trust M$”? So, they play Minecraft and eat meat. I … well, I would love to have time to not play Minecraft :)

        3. 9

          I was there once. For at least 5-10 years, I thought Nix was far too complicated to be acceptable to me. And then I ran into a lot of problems with code management in a short timeframe that were… completely solved/impossible-to-even-have problems in Nix. Including things that people normally resort to Docker for.

          The programming language is basically an analogue of JSON with syntax sugar and pure functions (which return values that then become part of the “JSON”).

          This is probably the best tour of the language I’ve seen available. It’s an interactive teaching tool for Nix. It actually runs a Nix interpreter in your browser that’s been compiled via Emscripten: https://nixcloud.io/tour/

          I kind of agree with you that any functional language might have been a more usable replacement (see: Guix, which uses Guile, a Lisp-like), but Python wouldn’t have worked as it’s not purely functional. (And it might be missing other language features that the Nix ecosystem/API expects, such as lazy evaluation.) I would love to configure it with Elixir, but Nix is actually 20 years old at this point (!) and predates a lot of the more recent functional languages.

          As a guy “on the other side of the fence” now, I can definitely say that the benefits outweigh the disadvantages, especially once you figure out how to mount the learning curve.

        4. 7

          The language takes some getting used to, that’s true. OTOH it’s lazy, which is amazing when you’re trying to do things like inspect metadata across the entire 80,000+ packages in nixpkgs. And it’s incredibly easy to compose, again, once you get used to it. Basically, it’s one of the hardest languages I have learned to write, but I find it’s super easy to read. That was a nice surprise.

        5. 3

          Python is far too capable to be a good configuration language.

        6. 3

          Well, most of the popular posts mainly complain about the problems that Nix strives to solve. Nix is not a perfect solution, but any other alternative is IMO worse. The reason for Nix’s success, however, is not Nix alone, but the huge repo that is nixpkgs, where thousands of contributors pool their knowledge.

      2. 8

        Came here to say exactly that. And I’d add that Nix also makes it really hard (if not outright impossible) for shitty packages to trample all over the file system and make a total mess of things.

      3. 6

        I absolutely agree that Nix is ideal in theory, but in practice Nix has been so very burdensome that I can’t in good faith recommend it to anyone until it makes dramatic usability improvements, especially around packaging software. I’m not anti-Nix; I really want to replace Docker and my other build tooling with it, but the problems Docker presents are a lot more manageable for me than those that Nix presents.

      4. 4

        came here to say same.

        although I have the curse of Nix now. It’s a much better curse though, because it’s deterministic and based purely on my understanding or lack thereof >..<

      5. 2

        How is it better to run a service as a normal user outside a container than as root inside one? Root inside a container = insecure if there is a bug in Docker. Normal user outside a container typically means totally unconfined.

        1. 7

          No, root inside a container means it’s insecure if there’s a bug in Docker or in the contents of the container. It’s not like breaking out of a VM; processes can interact with, for example, volumes at root level. And a normal user outside a container is really quite restricted, especially if it’s only interacting with the rest of the system as a service-specific user.

          1. 10

            Is that really true with Docker on Linux? I thought it used UID namespaces and mapped the in-container root user to an unprivileged user. Containerd and Podman on FreeBSD use jails, which were explicitly designed to contain root users (the fact that root can escape from chroot was the starting point in designing jails). The kernel knows the difference between root and root in a jail. Volume mounts allow root in the jail to write files with any UID, but root can’t, for example, write files on a volume that’s mounted read-only (it’s a nullfs mount from outside the jail, so root in the container can’t modify the mount).

            1. 10

              I thought it used UID namespaces and mapped the in-container root user to an unprivileged user.

              None of the popular container runtimes do this by default on Linux. “Rootless” mode is fairly new, and I think largely considered experimental right now: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/

              https://github.com/containers/podman/blob/main/rootless.md

            2. 8

              Is that really true with Docker on Linux?

              Broadly, no. There’s a mixture of outdated info and oversimplification going on in this thread. I tried figuring out where to try and course-correct but probably we need to be talking around a concept better defined than “insecure”

            3. 4

              Sure, it can’t write to a read-only volume. But since read/write is the default, and since we’re anyway talking about lazy Docker packaging, would you expect the people packaging to not expect the volumes to be writeable?

              1. 1

                But that’s like saying a lock is insecure because it can be unlocked.

                1. 1

                  I don’t see how. With Docker it’s really difficult to do things properly. A lock presumably has an extremely simple API. It’s more like saying OAuth2 is insecure because its API is gnarly AF.

        2. 3

          This is orthogonal to using Nix I think.

          Docker solves two problems: wrangling the mess of dependencies that is modern software and providing security isolation.

          Nix only does the former, but using it doesn’t mean you don’t use something else to solve the latter. For example, you can run your code in VMs or you can even use Nix to build container images. I think it’s quite a lot better at that than Dockerfile in fact.

        3. 2

          How is a normal user completely unconfined? Linux is a multi-user system. Sure, there are footguns like command lines being visible to all users, sometimes open default filesystem permissions or ability to use /tmp insecurely. But users have existed as an isolation mechanism since early UNIX. Service managers such as systemd also make it fairly easy to prevent these footguns and apply security hardening with a common template.

          In practice, neither regular users nor containers (Linux namespaces) are a strong isolation mechanism. With user namespaces there have been numerous bugs where some part of the kernel forgets to do a user mapping and thinks that root in a container is root on the host. IMHO both regular users and Linux namespaces are far too complex to rely on for strong security. But both provide theoretical security boundaries and are typically good enough for semi-trusted isolation (for example, different applications owned by the same first party, not applications run by untrusted third parties).

    13. 1

      Reminds me of my friend who works as tech support in a hospital. From her I have gathered two main facts:

      • Medical professionals are usually idiots, and
      • It should be a capital offense to force quarterly password changes upon cleaning staff who have zero real need or desire to touch the medical IT system but still have to use it to log timesheets

      Ah, the dichotomy of human existence.

      1. 14

        I think you’re being unreasonable toward medical professionals. I think the points could be more succinctly stated as:

        • People are usually idiots (including IT professionals like us commenting on this)

        • It should be a capital offense to force quarterly password changes

        1. 4

          Yeah, I can get behind that.

      2. 9

        Medical professionals are usually idiots

        In my experience, it is never productive (but possibly cathartic) to label people as idiots.

        Computer people often seem to have a hard time understanding this, but just because someone isn’t a computer expert doesn’t make them an idiot. They have better things to do than trying to work a stupid piece of machinery, and as you can read elsewhere in these comments, they have to do so under a tremendous amount of stress and often while sleep deprived.

        They might be a “computer idiot”, but by that standard I’m a total idiot in most non-computer fields. And I suspect a lot of us are.

      3. 1

        One would think a system could be designed wherein someone whose role requires one simple set of permissions (view and modify one’s own timesheet) would be exempt from the normal security standards of rotating passwords because the permissions of the role simply can’t result in any kind of harmful system breach.

        1. 12

          There is absolutely no reason to use passwords in the hospital setting at all. Everyone should just use a card that is unlocked with a PIN code for the next 12 hours or until the person clocks out. Then the person should be able to move freely between the terminals.

          1. 8

            I’m smiling a little bit at the history you reminded me of. https://en.wikipedia.org/wiki/Sun_Ray These units had smart card-based authentication and transparently portable sessions. You walk away from a terminal and take your card with you, plug it into a new terminal, and carry on working on whatever you were doing. Virtually zero friction other than network latency sometimes making the UX a bit laggy. In practice it worked really nicely the vast majority of the time.

            1. 5

              I used these briefly in Bristol in 2001 through a friend from the Bristol and Bath Linux Users Group, it was really impressive, using tmux sometimes reminds me of playing with those Sun boxes.

            2. 2

              Most of the systems are web-based nowadays, it would be trivial to make the sessions portable or even multi-headed.

              1. 3

                Apologies that it took me a while to reply. How does web-based make sessions portable? It seems to me, without having put a ton of thought into it, that being browser-based makes portability significantly more challenging. You can’t use a shared client environment (i.e. you need a separate Windows/Linux/OSX/whatever account for each user) and you’ll still have to manage security within the confines of a browser environment. I agree that it makes it really easy to access content from everywhere, but you’re also dealing with multi-layer authentication (Windows login, browser login, cookie expiration times, etc).

                With a SunRay you could pull your smartcard, walk to the other end of the building, put your smartcard in, and have everything magically pop back up exactly how you’d left it.

                1. 3

                  Ah, right. I was assuming that at this point the software stack runs wholly in the browser and the workstations would basically be kiosks.

                  Some equipment has a dedicated computer (radiology, basically), but since there usually are only so many operators and the software might not even support multiple system users, a shared OS session would probably work fine in practice.

                  But yeah, we might just as well just launch a container on a shared server for every user and let them run a lightweight desktop over spice.

                  1. 3

                    At first blush I read your comment and was pretty concerned about the overall security model, but instead of brushing it off I did a bit of digging.

                    It looks like ChromeOS could potentially support both what I was proposing and what you’re proposing:

                    https://support.google.com/chrome/a/answer/7014520?hl=en&ref_topic=7015274&sjid=2940490462416568896-NC

                    https://support.google.com/chrome/a/answer/10038005

                    It looks like you can configure ChromeOS to use a smartcard for both login authentication and SSO inside the browser, as well as being able to use HTTPS Client Certificates from the card. It’ll also forward those credentials along to Citrix (gross, but common in healthcare) and VMware for remote desktop sessions. Very cool!

                    1. 2

                      It doesn’t really matter.

                      There are so many users and their work is so important to us that they should and mostly could afford a solution tailored to their needs even if it meant a custom operating system. Which it most probably doesn’t - as you’ve found out.

                      Instead we are trying to shoehorn them into environments designed for office clerks which just doesn’t cover all the use cases and will always cause friction.

                      And it’s not just the login method. A person dear to me who works in the best hospital in my small country caused trouble for the IT department a year or so back. Apparently there is a mandatory field in the patient record. Well, if you don’t know the value at that point in time, you just enter 555555/5555, the doctors figured. So a doctor filed a new patient, my dear person figured there was a duplicate record and the patient wasn’t new after all, so they merged the records, and whoops, a couple thousand patients with 555555/5555 got merged together. Yay!

                      There is a concept from manufacturing described e.g. in 5S, but readily described by any traditionally brought up woman as well. You should declutter and not just that. You should actively build decluttering infrastructure. That means e.g. making it possible to omit fields that are required from the process point of view, allow the operator flag records for review and describe the discrepancies, add screens for dealing with such records and so on.

                      Instead of trying to make it impossible to enter data on behalf of a doctor, which only leads to credential sharing and obscuring who did what, make it possible and simplest under the name of the actual person, and hold them accountable by letting the doctor review such operations, e.g. the next morning, and so on.

                      Also, when there is a terminal to help e.g. carry out a surgery or a scan, or to enter data about the patient at the bedside or in the room, you probably don’t want your desktop. You want to log into the context-specific terminal and just access or enter data under your name. So remote desktops might be useful in some cases, but not all of them.

        2. 3

          You’d think so, yes.

    14. 1

      In Lisp-like languages and Ruby, there’s less differentiation between expression and statement; if and case have a result value. So there’s no need for ternary operators. This makes for a smoother experience - when you want to make an expression conditional, you just wrap it in if and it can stay where it is.

    15. 2
      • VIM/NeoVim
      • Go
      • Hermit (which I wrote, but I am still very thankful for - 90% of the convenience of Nix with 10% of the pain)
      • ripgrep
      • sqlc
      • Kitty (the terminal emulator, not the SSH client)
      • OrbStack
      • HazeOver
      • Bartender
      • Tuple
      • 1Password 7 (not 8)
      • BetterTouchTool
      • Amphetamine (the Mac app, not the drug)
      1. 1

        Hermit (which I wrote, but I am still very thankful for - 90% of the convenience of Nix with 10% of the pain)

        This makes me curious (I love Nix for what it can do but also hate it because of ~everything else), but I can’t find it (also checked your site and GitHub profile). Do you have a link?

        1. 4

          I didn’t link to it from my comment as it seemed a bit self-promotional, but it’s here.

    16. 2

      MIT Scheme. I’ve been using it since 1984, and it keeps getting better.

      1. 2

        I’m curious: what kind of things do you use it for?

        1. 1

          My personal web site, including my blog; my address book; my calendar; my podcast player; my bookmarks manager; etc. All of my apps are based on an in-memory graph database I wrote on top of MIT Scheme. It’s a great environment for exploring and learning ideas incrementally.

          1. 1

            Very cool! It’s amazing that you’re so productive that you can make small-scale applications for yourself to consume. I’ve always struggled to get anything larger than a few lines of code working for myself, especially nowadays.

      2. 2

        Yes! And thanks to gjs and cph for it!

    17. 4

      I guess emacs org mode and everything that comes with it. I have ADHD and have struggled to take notes or use any organization system. Deciding which folder to put my markdown document in literally is too much of a speedbump for me.

      But 5 years ago I started keeping most of my notes in a single append only org file and it actually works. It’s my Todo list. It’s my Jupyter notebook and executable documentation. It’s my slideshow format (via reveal.js export). It’s my diagramming tool (via Babel and graphviz/dot). It is the only system I’ve been able to stick with for more than a couple of weeks. I don’t even use most of the advanced tagging and agenda features as search backwards/forwards works well enough.

      The only real drawback is it’s mostly publish only. It’s not practical to collaborate on an org mode doc with coworkers. So it remains a personal productivity tool.

      1. 2

        It’s my slideshow format (via reveal.js export).

        There’s also org-present which allows you to go fullscreen and present directly from Emacs!

    18. 12

      Blender was the first open-source project that showed me it was possible to have a valid offering against commercial software.

      In the 90s 3D & VFX software had a heavy divide between free/cheap/tech-demo tools and expensive professional grade packages. As a kid who drooled over RenderMan ads in MacUser, the quoted prices were a glum wake-up call.

      I entered college with my own PC and Blender 1.x had just been open-sourced. I spent several years learning many 3D concepts that I’d only read about in books.

      My senior project used Maya 3 and once I had a full-time job I spent a month’s salary on a Lightwave license–but both times I was quietly asking myself “is this really what professional software is like? both of these are missing things that I already used in Blender years ago”.

      Even though the VFX career never panned out and I still can’t get into the new interface, I’ll still be thankful for Blender. Not just for changing the course of my life, but after 25+ years still letting people get started in 3D for just the cost of hardware.

      1. 3

        is this really what professional software is like? both of these are missing things that I already used in Blender years ago

        That’s funny, it’s the same complaint one often hears from people used to proprietary software (be it MS Office, Photoshop, whatever). I have a very strong feeling this is mostly “what you’re used to”, plus, in the case of power users (as I suspect you are), actual depth of knowledge of the software’s darker corners - which just means that if you’d been using the other tools for long enough, you would have the same reaction going in the opposite direction.

        All of this just to say: I think getting people started with FLOSS tools is a great way to wean society off their addiction to proprietary software. Companies like MS, Apple, Adobe and Cisco realise this of course, which is why they supply stuff at great discounts to schools and provide ready-to-use teaching systems like series of textbooks.

    19. 24

      I use lots of software, but most I merely tolerate. But there is some software I actively appreciate:

      • Magit
      • Paredit
      • i3
      • PostgreSQL
      • CHICKEN Scheme
      • khal
      • mutt
      • irssi
    20. 2

      I don’t own a Playdate, but the game looks like a lot of fun, going off the short animation!

      1. 2

        OMG it looks like SO much fun! I own a Playdate but I haven’t used it much (I bought it to hack on) but this might change that!

        1. 3

          The PlayDate is basically the perfect console for me. I only ever play on portable systems, and other than Nintendo, all my favorite games are quirky indie whatevers. I think it was Zach Gage who said on a podcast that the PlayDate is like a fantasy console like Pico8 or whatever, except that it has actual, dedicated hardware. It’s been a lot of fun for me.