1. 25

I have watched the videos behind this text and I’m a bit frustrated. Most of the problems they have are either hardware problems or problems because they expect things to work like they do on Windows (or believe they work that way on Windows).

For the hardware, they somehow acknowledge that this is more a problem of the vendors than of Linux. Still, most of the time it sounds more like Linux is bad because this super fancy hardware doesn’t work. Yes, I know the problems behind this are complex, and as a normal user this is frustrating.

And of course they expect Windows-like behavior; they have used Windows for years. What bugs me is that they claim the Windows way is the better way without understanding what the problem is. There are two examples of this:

First, Linus broke his Pop!_OS installation while trying to install Steam. This was because the Steam package had a dependency problem which could only be resolved by removing essential packages. The GUI told him there was an error, with some suggestions about what might cause the problem and the output from apt hidden behind a details button. He read out loud: “Warning: you are trying to remove the following essential packages”. So he googled and found the command line to install Steam. The command printed a lot of text and, at the end, the following two lines:

    You are about to do something potentially harmful

    To continue type in the phrase ‘Yes, do as I say!’

So he typed in “Yes, do as I say!” and his installation was broken. He later claimed: “the things that I did are not entirely ridiculous or unreasonable”. He ignored all the warnings and “dictated” to the computer “Yes, do as I say!”; how is this not a clear user error[0]?

So let’s look at what would have happened with a similar issue under Windows. First, we can’t get exactly the same issue, because under Windows there is no package manager accessible to third-party software. So let’s assume there is a Windows update which removes the wrong file: on install, the update would remove the wrong file and break your system. Another example: the Steam installer manages to have a bug which removes some necessary files from your Windows installation. Is there anything in Windows that protects you from this bug[1]?

It’s late; the other stuff, about the file extension issue, I might write tomorrow.

[0] Of course this also has something to do with the tendency of some developers to create popups/warnings/info messages for everything, which leads users to ignore these messages.

[1] I don’t know, but a few years ago Windows installers were just executables which required running as administrator.

    1. 31

And of course they expect Windows-like behavior; they have used Windows for years

I think the “Windows-like behaviour” in this case is that on Windows Steam works perfectly, you don’t have to think about installing it, there’s no chance it’s going to break your OS, nor will you have to choose between installing an application you want and having a working OS.

      We could imagine a hypothetical Steam bug that somehow wrecks Windows installations, but in reality those don’t exist.

      1. 4

        I think those kinds of comparisons don’t work very well, because of the range of options. For the Steam installation issue, on windows you basically have two options: you install it and it works or it doesn’t. In Linux you have the same two options + playing around with various tweaks and installation methods.

        If we were going with a typical user windows-like approach, he’d declare it a failure after Steam failed to install from the default source. Going further with other solutions is both a good thing because it’s possible and a bad thing, because newbies get into a situation like a broken desktop. So once you start going past the basics it’s really on the user to understand what they’re doing. Otherwise it’s comparable to “I messed with some windows DLLs / registry trying to get Steam to work, despite warnings and now it doesn’t boot” - but that’s just not something average users do.

        1. 14

          on windows you basically have two options: you install it and it works or it doesn’t

          On Windows you install Steam and it works. Installing Steam and it not working isn’t really an experience people have with Steam on Windows.

          In Linux you have the same two options + playing around with various tweaks and installation methods.

          I guess? But the Linux (Pop!_OS?) equivalent of “I messed with some windows DLLs / registry trying to get Steam to work, despite warnings and now it doesn’t boot” is [0] kind of the only experience that was available? It seems like there was no way to install it and have it work, or even install it and have it just not work. The only way to install it broke the OS?

          [0] Disclaimer: I didn’t watch the videos, so I’m going off my understanding of the comment I originally replied to

          1. 12

            Installing Steam and it not working isn’t really an experience people have with Steam on Windows.

            Not just that but you actually do have a lot of tweaks to play around with. They’re not common knowledge because it’s incredibly rare to need it in order to get something like Steam working. You don’t really need them unless you’re developing software for Windows.

I had this “it’s a black box” impression for a long time but 10+ years ago I worked in a Windows-only shop that did a lot of malware analysis and the like. It’s quite foreign, since it comes from a different heritage, but the array of “power tools” you have on Windows is comparable to that of Linux. The fact that typical users don’t need them as frequently is a good thing, not an evil conspiracy of closed-source vendors to make sure you don’t have control over your hardware.

            1. 3

              Installing Steam and it not working isn’t really an experience people have with Steam on Windows.

              That’s a bit hard to quantify, but sure they do. Just search for “Steam won’t start” or “steam installer fails” on Reddit or their forums. It’s also common enough for many SEO-spam sites to have listicles for that phrase that are actually steam-specific.

And my point was that this wasn’t the only experience available. The alternative was not to type “yes I’m sure I know what I’m doing” (or whatever the phrase was) when he did not. He went out of his way to break the system after the GUI installer refused to do it. I think you really should watch that fragment for the discussion context.

          2. 3

            there’s no chance it’s going to break your OS

            Presumably because it’s the primary platform they test for.

            1. 1

              there’s no chance it’s going to break your OS

Of course, with a simple installer (copy all files to a directory and add an entry to the Windows registry) it’s quite hard to have a bug which breaks your OS. But a simple installer doesn’t have the features of a package management system, e.g. a central update mechanism. I don’t want to say package managers are better than the installer approach used on Windows[0]. The problem I have with this case is that it’s not that he clicked some random button and then everything was broken. He read the error, ignored all warnings, typed the prompt phrase character by character, and then wondered why it went wrong.

I don’t say the UI[1] is perfect. The problem I have is this “I ignore all warnings and complain if it goes wrong” mentality[2]. apt is not a program which bugs you with unnecessary questions or warnings. Installing a package only asks for confirmation if it does more than just install the requested package. The annoying confirmation phrase appears only if you try to remove essential packages, and it is designed to create enough friction to make the user question the command.

[0] I think systems with a package manager are better, but this is not the point of the comment

              [1] The error message in the GUI and the handling in the command line

[2] Yes, some (or most) users don’t want to understand error messages, but shouldn’t they then stop at the error and look for (professional) help? And no, copy-pasting a command from a random blog post is not help if you don’t understand the error or the blog post.
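
For reference, the confirmation flow described above looks roughly like this on a Debian/Ubuntu-family system when a transaction would remove essential packages (the package list is a placeholder, and the exact wording differs a bit between apt versions):

    $ sudo apt install steam
    ...
    WARNING: The following essential packages will be removed.
    This should NOT be done unless you know exactly what you are doing!
      (list of packages, including the desktop stack)
    ...
    You are about to do something potentially harmful.
    To continue type in the phrase 'Yes, do as I say!'
     ?]

A normal install, by contrast, only asks for a plain y/n confirmation, and only when it has to pull in or remove other packages.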

            2. 21

              The entire point of Linus’ challenge is that desktop Linux is full of barriers and traps for new users who don’t (yet) know what they’re doing.

              Explaining “well, it’s like that because you told it to florb the waggis instead of confeling rolizins, so it’s all your fault” may very well be technically correct, but it doesn’t change the fact that the OS hasn’t worked well for the user. “I want to install Steam in 5 minutes without learning about package sudoku solvers, or bricking my computer” is an entirely reasonable use-case.

The web dev community had a reckoning with this, and thinking has changed from “users are too stupid to understand my precious site” to “all my new users know only other sites, so I must meet their expectations”. If Linux wants to get new users it needs to be prepared for users who know only Windows, macOS, or even just Android/iOS.

              1. 3

                Explaining “well, it’s like that because you told it to florb the waggis instead of confeling rolizins, so it’s all your fault” may very well be technically correct, but it doesn’t change the fact that the OS hasn’t worked well for the user. “I want to install Steam in 5 minutes without learning about package sudoku solvers, or bricking my computer” is an entirely reasonable use-case.

That’s well and good, but there is a perfectly good fast path for this: install Pop!_OS or Ubuntu on a day when there’s not a bug in the packaging system, which is the vast majority of all days. Yep, it sucks that there was a bug, but that’s simply not going to affect anyone going forward - so why are LTT giving advice based on it?

                1. 11

                  For every distro D there exists a problem P that is solved in a distro E.

                  That endless cycle of “then abandon your whole OS and install a completely new one” thing is another annoying problem “Linux desktop” has. It’s not any single distro’s fault, but it’s a pain that users need to deal with.

                  In my case: I want to use Elementary, but I hosed it trying to update Nvidia drivers. So I was told to switch to Pop!_OS — they do it right. But this one gets stuck seemingly forever when trying to partition my disk, presumably because of the combination of NVMe and SATA that I have. Manjaro worked with my disks, but I’ve run into bugs in its window manager, which wouldn’t be an issue in Elementary. I still use macOS.

                  1. 5

                    For every distro D there exists a problem P that is solved in a distro E.

                    Right, I agree that in general this is a problem; we need better ways to integrate the best ideas from multiple projects.

                    But for the problem stated, which was “I want to install Steam in 5 minutes without learning about package sudoku solvers, or bricking my computer”, Pop!_OS or Ubuntu are the way to go. Your problem is not that; it’s “I want Pantheon and a fast Nvidia card,” and Nvidia have intentionally made that harder than it needs to be.

                    To be totally clear, I’m under no illusions that every user can simply pick up a free desktop and be on their way, but I think it’s pretty unhelpful to cultivate a discourse which simultaneously says “Users should have a fast path for these common use cases” and “Users should be able to get whichever window manager, packaging system, and customizations they want.” Those are both valuable goals, but the former inherently precludes the latter, especially in a world where some hardware companies, like Nvidia, are actively hostile to free desktop projects.

                2. 2

                  I switched from windows to Mint a couple of years back for gaming, in a similar experiment to this one (only not so public). I had no issues at all, steam was in the official applications, it installed with one click. Every game that steam claimed worked on linux did work. There were issues with my non-standard multi-monitor set up (there were issues with this in windows too, but they were worse in linux*) but nothing that prevented playing the games. It was only once I enabled the steam beta program which sets steam to attempt to open all games in wine that I had to get down in the weeds with configuring stuff and some things didn’t work. Steam has pretty clear warnings about this when you turn it on though.

I feel like for a tech tips site those guys are pretty non-technical. I never really watched their stuff anyway but now it seems like they should be calling me for help (and I am pretty noob when it comes to linux). This is the biggest criticism for me of this whole experiment. If these guys are an authority on computer tech informing users, they should simply be better at what they do. It is almost like they are running an investment advice channel and going ‘oh no I lost all my money investing in a random startup, guys don’t do the stock market it’s broken’. They should be informing people interested in linux what to do and what not to do, and if they are not qualified to do that they should state that and recommend alternative sources of advice.

*I have a suspicion most of these issues were on the application level not the OS level. Games were probably getting the monitors list from the wrong place. Ironically once I set my monitors up in the way that the developers on both windows and linux were expecting me to, the problems on linux disappeared, but a few small issues persisted on windows.

                  1. 1

                    The entire point of Linus’ challenge is that desktop Linux is full of barriers and traps for new users who don’t (yet) know what they’re doing.

I understand this. The problem I have with some of the complaints is that they proclaim that one way or the other is clearly better during the challenge. This is a bit more obvious in the file extension example[0]. I completely understand the experience is frustrating. But “it is frustrating for me because the system doesn’t behave like I expect from my knowledge of another system” doesn’t mean this system is bad.

Yes, systems try to adopt behavior from other systems to make it easier for users to switch. But this has its downside, because you can’t change a bad design after users get used to it. In this example, users have gotten used to ignoring errors and warnings and just “clicking OK”.

                    Explaining “well, it’s like that because you told it to florb the waggis instead of confeling rolizins, so it’s all your fault” may very well be technically correct, but it doesn’t change the fact that the OS hasn’t worked well for the user

I don’t want to imply they are dumb or just don’t want to learn the system. It is frustrating if a system doesn’t work the way you expect. I would like to see a discussion after the challenges with an expert explaining why the UI behaves differently.

[0] Which I won’t write about today; it’s late again

                    1. 5

                      When doing usability evaluations, it’s normal to discount specific solutions offered by frustrated users, but never the problems they face.

                      There were a few problems here:

                      • Lack of hardware support. Sadly, that’s a broad problem, and difficult to fix.

                      • User needed to download and run a script from GitHub. I think distros could improve here. For a random developer who wrote a script, it’s difficult to distribute the code in a better way. There’s a very high bar for getting something to be an official package, and hardly any other viable alternative. There are several different packaging formats (some tedious to build, some are controversial), a few unofficial package repositories, a few “app stores” for some distros. All this fragmentation is a lot of work and maintenance headache. It makes just dumping a script on GitHub very easy and attractive in comparison. It may not be a failing of any single person or distro, but it is a failing of “Linux desktop” in aggregate.

• Browser and file manager did a crappy job by treating HTML with a .sh extension as if it was a totally normal thing a user may want to do. The fight about file extensions was lost in the ‘90s. I’ve been there, tweaking detection by magic bytes in my Directory Opus on AmigaOS, and fiddling with creator codes when copying stuff from classic MacOS. The reality is that file extensions exist, and are meaningful. No normal person stores web pages as “.sh” files.

                    2. 1

                      As a waggis / rolizins engineer, maybe I’m out of touch, but I don’t think “Doing this will cause everything to break. If you want everything to break, then type ‘Yes, please cause my computer to break!’” is quite as obscure a message as anything about florbing and confeling. This required not only a (very rare) bug in the dependency tree but also either a user that deliberately wanted to break his Linux install for YouTube content, or one that is the very embodiment of Dunning-Kruger.

                      Not only did the dependency tree break, but the package manager was smart enough to recognize that the dependency tree had broken, and stopped him from doing it and told him so. He then went out of his way and used another package management tool to override this and allow him to break his installation anyway. This tool then was also smart enough to recognize the dependency tree was broken, and again warned him what was about to happen. He read this message and copied the text from this warning into a confirmation prompt.

                      He could just as easily have typed sudo rm -rf /usr. He could just as easily have deleted system32 on Windows.

                      The only possible solution that could have prevented him from doing this would be to not tell him his own sudo password and to give him a babysitter to do everything requiring privilege escalation for him so he doesn’t hurt himself, but that solution has logistical issues when you try to scale it up to every desktop Linux user.

                      1. 3

                        You need to have more empathy for the user.

                        The prompt wasn’t “destroy my system”, it was “do as I say”, and user said to install Steam.

                        No other operating system is stupid enough to delete itself when you tell it to add a new good application from a reputable publisher. Crappy Windows installers could damage the OS, but Steam doesn’t.

                        It’s normal for OSes to sound dramatic and ask for extra confirmation when installing software from unusual sources, so the alarming prompt could easily be interpreted as Linux also warning about dangers of “sideloading” a package, which can be dismissed as “I’m not installing malware, just Steam, so it’s fine”.

From the user’s perspective, the screen contained “Install Steam, wall of technogibberish the user didn’t ask nor care for, type ‘Yes, do as I say!’”. The system frequently requires typing weird commands, so requiring one more weird command wasn’t out of the ordinary.

                        The only possible solution… [condescending user blaming]

                        The real solution would be for Linux to work properly in the first place, and actually install Steam instead of making excuses. Linux is just an awful target for 3rd party applications, and even the other Linus knows this.

                        1. 2

No other operating system is willing to give the user the ability to break the desktop environment intentionally (though I recall a lot of Windows bugs in the past that did this unintentionally). One of the fundamental problems Linux faces is that most users don’t actually want as much power as running as root gives you. They’ll say they do, but they really don’t, and their operating system choice generally reflects that.

                          It’s normal for OSes to sound dramatic and ask for extra confirmation when installing software from unusual sources

                          This is pretty obviously because it’s axiomatically impossible for the OS to actually tell if something the user does with sufficient privileges will break something (inter alia, you’d have to be able to solve the Halting Problem to do this). In this case, the package manager was obviously correct, which should be applauded. There are two obvious responses to this (maybe there are non-obvious ones I’m missing as well): restrict the user’s ability to do things to actions with a low likelihood of breaking the OS or trust the user to make a decision and accept the consequences after a warning.

                          Broadly, Windows, the MacOSs, and the mobile operating systems have been moving towards restricting the user’s ability to do risky things (which also includes a lot of things proficient system operators want to do). That seems to be in response to consumer demand but I don’t think that we should enshrine the desires of the bottom 60% of users (in terms of system operation competence) as the standard to which all systems should be designed. This is not related to an “it should just work” attitude towards 3rd party software as there’s generally been a significant decrease in things like OS API stability over the past two decades (e.g. this rant of Spolsky’s). Users just think that anything they want to use should “just work” while anything they don’t care about should be marginally supported to minimize cost: the problem is that many people want different things to work.

                          On the other hand, some users don’t want the operating system reasoning for them (at least some of the time). I don’t want an operating system “smart” enough to prevent me from doing something stupid on my project boxes or in a VM I’m playing with especially if it’s just something that looks stupid to the OS but I’m doing for some (moderately good) reason.

                          1. 4

You’re boxing this into a dichotomy of restricting the user or not, but this isn’t the issue here.

                            The issue is not about power, but about usability. You don’t need to block all failure paths. You need to communicate clearly and steer users towards success paths, so they never even come close to the dangerous commands by accident.

                            1. 2

I wouldn’t say this is really about power, so much as control, though I tend to be a bit of a pedant about defining “power”.

                              I think the communication here was reasonably good, though it could be improved. I think the real mistake Linus made was in choice of distribution. That is a real problem in the Linux community (and I think the one we should be focused upon here). I think the opportunity to improve communication here is marginal at best.

                          2.  

                            You need to have more empathy for the user.

                            I do. I’m just saying that there’s nothing anyone could have done to prevent this except disallow even the root user from uninstalling xorg, and even then he could have just manually removed crucial files if he felt like it. OS maintainers are going to make mistakes occasionally. “Just don’t make mistakes ever” isn’t a viable strategy for avoiding things like this. What is a viable strategy is to build tools that detect and correct for errors like the one in Pop!_OS’s dependency tree. And that’s exactly what happened. He just disregarded the numerous protections from this bug that his OS afforded him.

                            “From the user perspective,” the screen contained a list of packages that were about to be installed, a list of packages that were about to be uninstalled, and a message saying that the packages that were about to be uninstalled were essential to the operation of his computer and he should stop now rather than electing to uninstall those packages, along with a prompt that very deliberately requires you to have read that warning in order to proceed.

                            The real solution would be for Linux to work properly in the first place, and actually install Steam instead of making excuses.

                            Linux worked properly, apt even worked properly. Pop!_OS’s dependency tree was briefly broken. The package manager then recognized there was something wrong and explicitly told him he was about to uninstall his desktop and that he shouldn’t do it. It wasn’t “destroy my system.” That was me being (generously) 5% hyperbolic. In reality it was a warning that he was about to uninstall several essential packages including his desktop and a recommendation that he shouldn’t do this unless that was what he wanted to do. He was then required to enter a very specific message which was part of that warning, verbatim.

Here’s the thing: no operating system has avoided pushing out a bad or bugged update periodically. What’s great about Linus’s example is that Pop!_OS pushed out a bad update but the error was limited to one package, and the package manager was smart enough to stop Linus from breaking his system, and told him that it had stopped him from breaking his system. Linus then decided to use another tool that would allow him to break his system. This tool too was smart enough to notice that the package system had broken, and prevented him from breaking his system. He then deliberately bypassed these safeties and uninstalled gdm and xorg.

What’s crucial to note here is that exactly nobody is making excuses for Pop!_OS — they messed up their dependency tree, yes — but also, this is a perfect example of all of these systems working exactly as intended. The package manager was smart enough to stop him from breaking his system even though the dependency tree was mangled, and he then overrode that and chose to break his system anyway. That’s more than can be said for many other operating systems. The tools he was using detected the error on Pop!_OS’s side and saved him.

                            It’s also worth noting that he literally didn’t brick his system, he could have fixed his machine if he’d just installed from the command line the same packages he had just uninstalled. Like, he didn’t actually break his system, he just uninstalled a few packages that were flagged as essential to stop newbies from uninstalling them because it might confuse them if they were uninstalled.
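
To make that concrete, recovery would have looked something like this from a text console (Ctrl+Alt+F3 or similar); the package names here are illustrative, and the real list is whatever apt reported as removed:

    $ sudo apt update
    $ sudo apt install pop-desktop gdm3   # e.g. the desktop metapackage and display manager that had been removed
    $ sudo systemctl reboot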

                            1.  

Your assertion that nothing could be done is provably incorrect. Alpine doesn’t have this problem — by design — and it isn’t any less capable than the Debian family. It’s a matter of the design of the tools’ UI, and this part of apt is a poor design.

                              People don’t accidentally uninstall their OS when installing Steam on other OSes, because everywhere else “install a new user program” and “catastrophically alter the whole system” are separate commands.

                              Users generally don’t read walls of text. In usability circles this is accepted, and UI designers account for that, instead of wishing they had better users. Users aren’t illiterate, they just value their time, and don’t spend it on reading things that seem to have low value or relevance. The low signal-to-noise ratio of apt’s message and surprising behavior is apt’s problem, not user’s reading problem. And “this is just the way the tool works” is not a justification for the design.

                      2. 20

                        At the risk of being that condescending Linux user (which would be pretty awful since I’m not really using Linux anymore) my main takeaway from these videos is “don’t use hipster distros”.

                        Or, okay, hipster distros is where innovation happens. I get it, Gentoo was a hipster distro when I started using it, too. Okay, maybe don’t recommend hipster distros to beginners?

                        I saw Manjaro mentioned here. I tried Manjaro. It’s not a beginners’ distro. It’s great if you’re a burned out Arch user and you like Arch but you already know the instructions for setting up a display manager by heart and if you have to do it manually again you’re going to go insane. There’s a (small!) group of people who want that, I get it. But why anyone would recommend what is effectively Arch and a rat’s nest of bash scripts held together with duct tape to people who wouldn’t know where to begin debugging a broken Arch installation is beyond me. I mean the installer is so buggy that half the time what it leaves you with is basically a broken Arch installation for heaven’s sake! Its main value proposition is in a bunch of pre-installed software, all of which can be trivially installed on Ubuntu.

I haven’t used Pop!_OS but IMHO a distribution that can’t get Steam right, Steam being one of the most popular Linux packages, is just not a good distribution. It’s particularly unsettling when it’s a distro that’s supposed to have some level of commercial backing; Steam is presumably one of the packages that ought to get the most testing. Hell, even Debian has instructions that you can just copy-paste off their wiki without breaking anything. And the only reason why they’re “instructions”, not just apt install steam, is that – given their audience – the installation isn’t multilib by default.
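
For the record, the wiki recipe amounts to enabling the i386 architecture (plus the non-free section in sources.list) and then installing the package; roughly, with the exact package name depending on the release:

    $ sudo dpkg --add-architecture i386
    $ sudo apt update
    $ sudo apt install steam   # called steam-installer on newer Debian releases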

There’s certainly a possibility that the problem here was in the proverbial space between the computer and the chair, sure. But if that’s the case, again, maybe it’s just time we acknowledged that the way to get “better UX” (whatever that is this year) for Linux is not to ship Gnome with the umpteenth theme that looks like all the other themes save for the colors and a few additional extensions. It’s safe to say that every combination of Gnome extensions has already been tried and that’s not where the magic usability dust is at. Until we figure it out, can we just go back to recommending Ubuntu, so that people get the same bad (I suppose?) UX, just on a distribution with more exposure (and, thus, testing) and support channels?

Also, it’s a little unsettling that the Linux community’s approach to usability hasn’t changed since the days of Mandrake, and is still stuck in the mentality of ESR’s ridiculous Aunt Tilly essay. Everyone raves about consistency and looking professional. Meanwhile, the most popular computer OS on the planet ships two control panels and looks like anime, and dragging things to the trash bin in the second most popular OS on the planet (which has also been looking like anime for a few years now) either deletes them or ejects them, which doesn’t seem to deter anyone from using them. Over here in FOSS land, the UI has been sanitized for consistency and distraction-free visuals to the point where it looks like a frickin’ anime hospital, yet installing Steam (whether through the terminal or the interface, it makes no difference – you can click “Yes” just as easily as you can type “Yes”) breaks the system. Well, yeah, this is what you get if you treat usability in terms of “how it looks” and preconceived notions about “how it’s used”, rather than real-life data on how it’s used. It’s not an irredeemable state of affairs, but it will stay unredeemed as long as all the debate is going to be strictly in terms of professional-looking/consistent/beautiful/minimal/distraction-free interfaces and the Unix philosophy.

                        1. 14

The issue with Linux distros here is that they didn’t know the differences between them, why that matters, and that Linux isn’t one thing. Without a knowledgeable person to ask what to use, this is how they ended up with these different flavours. They also didn’t know about desktop environments, or how much influence they have over their Linux experience.

                          It’s unfortunately a hard lens for many technical people to wrap their head around. Heck, we are starting to see people that don’t need to interact with hierarchical file systems anymore. Something natural to everyone here, but becoming a foreign concept to others.

                          1. 6

                            Certainly. My response was mostly in the context of an underlying stream of “Ubuntu hate” that’s pretty prevalent in the circles of the Linux community that also have a lot of advice to give about what the best results for “best Linux distro for gaming” should be. I know I’m going to be obtuse again but if the l33t h4x0rz in the Linux community could just get over themselves and default to Ubuntu whenever someone says “I’ve never touched Linux before, how can I try it?” a lot of these problems, and several distributions that are basically just Ubuntu with a few preinstalled programs and a custom theme, would be gone.

                            There’s obviously a huge group of people who don’t know and are not interested in knowing what a distribution is, what their desktop environment is, and so on. As the Cheshire Cat would put it, then it doesn’t really matter which one they use, either, so they might as well use the one most people use, since (presumably) their bugs will be the shallowest.

                            I know this releases all sorts of krakens (BUT MINT WORKS BETTER OUT OF THE BOX AND HAS A VERY CONSISTENT INTERFACE!!!1!!) but the competition is a system whose out-of-the-box experience includes Candy Crush, forced updates, a highly comprehensive range of pre-installed productivity apps of like ten titles, featuring such amazing tools like Paint 3D and a Calculator that made the Win32 calculator one of the most downloaded programs in history, two control panels and a dark theme that features white titlebars. I’m pretty sure any distribution that doesn’t throw you to a command prompt on first boot can top that.

                            1. 1

                              Oh, I totally agree, I was just clarifying that they did some googling to try and find something to use, and it’s how they ended up with this mess of difficulties.

                            2. 2

                              I think you cut to the heart of the matter here. I also think the question they asked initially (what’s the “best” gaming Linux distro) wasn’t well formed for what they actually wanted: what the easiest to configure was. To forestall the “that’s a Linux problem” crowd, that’s an Internet problem, not a Linux problem. If you Google (or ddg or whatever) the wrong question, you’re going to get the wrong answer.

                              I think we have to resign ourselves to the fact that users generally don’t want to learn how to operate their systems and don’t want meaningful choices. Therefore, many users are not good candidates for a *nix.

                            3. 2

                              Until we figure it out, can we just go back to recommending Ubuntu, so that people get the same bad (I suppose?) UX, just on a distribution with more exposure (and, thus, testing) and support channels?

I wish Ubuntu offered an easier flow for getting a distribution with the right drivers out of the gate. This is what Pop!_OS does (source):

                              Pop!_OS comes in two versions: Intel/AMD and NVIDIA. This allows us to include different settings and the proprietary NVIDIA driver for NVIDIA systems, ensuring the best performance and use of CUDA tools one command away. On Oryx Pro systems, you can even switch between Intel and Nvidia graphics using a toggle in the top right corner of your screen.

IMO this is superior to Ubuntu, where you need to follow complex instructions to get the NVIDIA proprietary drivers: https://help.ubuntu.com/community/BinaryDriverHowto/Nvidia

                              And you need to follow different instructions for AMD graphics

                              Also if you buy a System76 laptop all the drivers for your computer come set up, no driver manager needed. With Ubuntu you can buy from Dell but not with the same variety of hardware as System76.

                              I agree that Ubuntu is a good option but I would like to see it improve in these aspects before I would recommend it to a random non-power user who wants to play video games.

                              1. 2

                                I haven’t used Ubuntu in a while, and that page doesn’t help because the instructions look like they haven’t been updated since Ubuntu 12.04, but the way I remember it all you needed to do was go to “Additional Drivers” or whatever it was called, choose the proprietary driver, hit OK and reboot. Has that changed in the meantime? Last time I used a machine with an NVidia card I was running Arch and it was literally just pacman -S nvidia, please tell me Ubuntu didn’t make it more complicated than that!
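
From what I remember, the command-line route on Ubuntu hasn’t gotten any more complicated either; something like this, assuming the ubuntu-drivers-common tooling that ships on desktop installs:

    $ ubuntu-drivers devices            # list detected hardware and the recommended driver
    $ sudo ubuntu-drivers autoinstall   # install the recommended proprietary driver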

                                Also… is the overlap between “people who write CUDA code” and “people who can’t install the proprietary NVidia drivers” really that big? Or is this aimed at people using third-party CUDA applications, who know statistics but suck at computers in general (in which case I get the problem, I’ve taught a lot of engineers of the non-computer kind about Linux and… yeah).

                                Also if you buy a System76 laptop all the drivers for your computer come set up, no driver manager needed.

                                If you choose the “Ubuntu LTS” option when ordering, doesn’t it come with the right drivers preloaded? I mean… I get that Pop!_OS is their thing, but shipping a pre-installed but unconfigured OS is not exactly the kind of service I’d expect in that price range.

                                1. 2

                                  For a novice user, do you expect them to know before they download the OS whether they have an nVidia or AMD GPU?

I seem to recall that a big part of the motivation for the complex install process for the nVidia drivers was the GPL. The nVidia drivers contain two parts: a shim layer that is a derived work of the kernel (it hooks directly into kernel interfaces) and so must be released under a GPL-compatible license, and the proprietary driver itself, which is first developed on Windows, is definitely not a derived work of the kernel, and can be under any license. The proprietary drivers do not meet the conditions of the GPL and so you cannot distribute the kernel if you bundle it with the drivers. The GPL is not an EULA and so it’s completely fine to download the drivers and link them with your kernel. The GPL explicitly does not restrict use and so this is fine. But the result is something that you cannot redistribute.

                                  FreeBSD and Solaris distributions do not have this problem and so can ship the nVidia drivers if they wish (PC-BSD and Nexenta both did). I wonder how Pop!_OS gets around this. Is it by being small and hoping no one sues them?

                                2. 1

                                  From what I can tell, steam isn’t even open source. And while you assert it to be one of the most popular Linux packages, I hadn’t even heard of it until this video came up in all the non-gaming tech news sites despite having used Linux for 25+ years. Was it even a Pop!OS package or were they installing an Ubuntu package on an Ubuntu derivative and assuming it’d just work?

                                  1. 8

                                    it’s proprietary, yeah, but i just feel like someone has to tell you that there are several orders of magnitude more Steam users than Linux desktop users, and it’s not only a package in Pop!_OS and Ubuntu, it’s a package in Debian and just about every distro for the last decade.

                                    i honestly have gotta applaud you for being productive enough a person to have never heard of Steam. if you look at the install data from popularity-contest, ignoring firmware and libraries (i.e. only looking at user-facing applications), Steam is the third most-installed non-free package on all Debian-based distros, behind unrar and rar. pkgstats.archlinux.de suggests Steam is installed on 36% of Arch Linux installations. Steam is not only an official package on Pop!_OS but one of the most installed packages on desktop Linux overall.

                                    1. 5

                                      And while you assert it to be one of the most popular Linux packages, I hadn’t even heard of it until this video came up in all the non-gaming tech news sites despite having used Linux for 25+ years

                                      Someone else already pointed out how popular it is but just for the record, any one of us is bound to not have heard about most of the things currently in existence, but that does not make them pop out of existence. Whether you’ve heard of it or not affects its popularity by exactly one person.

                                      Also, lots of useful applications that people want aren’t even open source, and a big selling point of Pop!_OS is that it takes less fiddling to get those working (e.g. NVidia’s proprietary drivers). An exercise similar to this one carried out with, say, Dragora Linux, would’ve probably been a lot shorter.

                                      Was it even a Pop!OS package or were they installing an Ubuntu package on an Ubuntu derivative and assuming it’d just work?

                                      Most of Pop!_OS is Ubuntu packages on an Ubuntu derivative. Does it matter what repo it came from as long as apt was okay installing it?

Edit: to make the second point clear, Pop!_OS is basically Ubuntu with a few custom Gnome packages and a few other repackaged applications; most of the base system and most of the user-facing applications are otherwise identical to the Ubuntu packages (they’re probably rebuilt from deb-srcs). No idea if what they tried to install was one of the packages System76 actually repackages, or basically the same as in Ubuntu, but it came from their “official” channel. I.e. they didn’t grab the Ubuntu package off the Internet, dpkg -i it and proceed to wonder why it doesn’t work, they just did apt-get install steam, so yes, it’s a Pop!_OS package.

                                  2. 10

                                    I mean, I have Big Opinions® on the subject, but my tl;dr is that Linux isn’t Windows, we shouldn’t give false expectations, have our own identity, etc. etc. But….

So he typed in “Yes, do as I say!” and his installation was broken. He later claimed: “the things that I did are not entirely ridiculous or unreasonable”. He ignored all the warnings and “dictated” to the computer “Yes, do as I say!”; how is this not a clear user error[0]?

I mean, the system should refuse to do that. Alpine’s package manager and others refuse to allow the system to enter a boned state. One of the Alpine developers was rightly criticizing Debian for this issue in apt, citing it as one of the reasons why they stopped using Debian. The attention Linus brought to the problem, in an embarrassing light, was finally the push needed to fix it.

                                    1. 5

Knowing how Internet guides work, now all guides will say “apt --allow-solver-remove-essential <do dangerous stuff>” instead of “type ‘Yes, do as I say!’ at the prompt”.

                                      1. 3

                                        I like luke’s perspective that some distros should do different things. I think it’s reasonable for arch to be a ‘power user distro’ that is willing to bork itself. But PopOS is ‘an operating system for STEM and creative professionals’, so it probably should have some safeguards.

That being said I don’t think arch should ever be recommended to a brand new user. Linus shouldn’t even be on arch because 1) there should be better resources for picking a good distro for absolute beginners and 2) PopOS never should have had that broken of a Steam package in the first place.

                                        1. 1

                                          That being said I don’t think arch should ever be recommended to a brand new user.

I would qualify this; there are many users for whom arch was their first distro and it went great, but the key thing is these are not your typical computer users; they are people who are technically minded (not necessarily with deep, deep knowledge of anything in particular, but they’re probably at least the person their friends ask for help), are up for and interested in learning about the system, and generally have been given some idea of what they’re getting into. That is to say, arch is definitely for “power users,” but that set includes some users who have not actually used Linux before.

For my part, Arch was the first distro that was actually reliable enough for me to do more than play with; I spent a year or so fussing with other stuff while dual booting Windows, and Arch is the first one that actually worked well enough for me to wipe the Windows partition and stay. This was 15 years ago and I haven’t left, though I keep eyeing NixOS these days.

                                          I think at the time folks still had fresh memories of before Linux desktop environments were even a thing, and there was this mentality that the barrier to entry was mostly around user interfaces. People hadn’t really internalized the fact that Linux had caught up decently well even by then (this was KDE 3.x era), but the problem was stuff needed to work better out of the box, and it needed to not break whenever you upgraded to the next release of the distro.

                                        2. 1

                                          I mean, the system should refuse to do that

The system had refused to do that. Then the user told the system to shut up and do as he said. You could argue that this should not be possible, but what if you are in a situation where you have fucked up your packages? The workaround should be available within the package manager, because without it you would have to work around the package manager by deleting files and editing the database by hand.

                                        3. 7

                                          To answer some of your questions:

First, we can’t get exactly the same issue, because under Windows there is no package manager accessible to third-party software.

Technically not true; there is a Windows package manager and has been for a long time, and that’s the Windows Installer (MSI files). There’s also APIs and supported methods for 3rd-party installers to register an install with the system. What’s historically been missing are official package repositories for installing and updating applications (a la APT, RPM, etc. repos). That’s slowly changing with the Microsoft Store, winget, and others, but this is an area where Linux has long been very far ahead.

So let’s assume there is a Windows update which removes the wrong file: on install, the update would remove the wrong file and break your system.

                                          This is incredibly rare. I won’t claim it hasn’t happened, but more common (while still very rare) is an update which causes a bluescreen on boot or exposes a bug in a critical system process. In either case, we’re talking very rare, but I’d suggest that’s true of Linux too.

Another example: the Steam installer manages to have a bug which removes some necessary files from your Windows installation. Is there anything in Windows that protects you from this bug[1]?

                                          Yes, several things, and this is a genuine major contrast to Linux. Off the top of my head:

1. Windows system files cannot be modified by default even with administrator privileges. You can’t simply run an elevated Command Prompt and run the equivalent of rm -rf C:\Windows. That’s because most operating system files are both owned by and only writeable by a special account (TrustedInstaller). You can still modify or delete these files, but you have to jump through several hoops. At a minimum, you need administrator privileges (a la root), and would have to take ownership of the file(s) of interest and subsequently grant yourself the relevant privileges. There are other ways you could gain the relevant access, but the point is it’s not a thing you could do by accident. That’s similarly true for installers, which also would need to take the same approach.

                                          2. Windows has long had numerous recovery options for when things go pear shaped. Notable ones include Safe Mode and its various permutations (since forever), the ability to uninstall operating system updates (also forever), System Restore (since XP?), System Reset (Windows 10?), and a dedicated recovery partition with a minimal Windows 10 installation to serve as a recovery environment wholly independent of the main operating system. Obviously, none of these are a guarantee for recovery of an appropriately damaged system, but it’s long been the case that Microsoft has implemented numerous recovery/rollback mechanisms.

                                          On Linux, it’s usually limited to one or more previous kernel versions, and that’s about it? Yes, there’s single-user mode, but that just drops you into a root shell, which is wholly unsuitable for non-experts to use.

                                          1. 1

                                            there is a Windows package manager and has been for a long time, and that’s the Windows Installer (MSI files). There’s also APIs and supported methods for 3rd-party installers to register an install with the system.

I believe we use the same words for different things. When I talk about a package manager I mean a system which provides packages and resolves dependencies. If I understand your comment correctly, an MSI file installs software and registers it with the system. But there is no way for an MSI file to declare that it is incompatible with version 3.6 of Explorer, so that on install the installer would solve the dependency graph and present what it needs to install and remove.

                                            On Linux, it’s usually limited to one or more previous kernel versions, and that’s about it?

This depends on your system. On Debian-based OSes the packages are still in the package cache, so you can easily downgrade. There are other options which allow easy recovery from such bugs; they are most of the time not set up by default and still require some skill to solve your problem.
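
A rough sketch of what I mean, with an illustrative package name and version (and it only works as long as the cache hasn’t been cleaned):

    $ ls /var/cache/apt/archives/ | grep foo                       # previously downloaded .debs, if still cached
    $ sudo dpkg -i /var/cache/apt/archives/foo_1.2.3-1_amd64.deb   # roll back to the cached version
    $ sudo apt-mark hold foo                                       # optionally keep it from being upgraded again right away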

                                            1. 2

I believe we use the same words for different things. When I talk about a package manager I mean a system which provides packages and resolves dependencies. If I understand your comment correctly, an MSI file installs software and registers it with the system. But there is no way for an MSI file to declare that it is incompatible with version 3.6 of Explorer, so that on install the installer would solve the dependency graph and present what it needs to install and remove.

                                              It’s true that MSI (and most competing technologies) generally will not compute and resolve a dependency graph for package installation, but it’s also worth noting this is in part because it’s far less applicable to Windows systems. As the operating system is a single unified system, versus hundreds or even thousands of discrete packages sourced from different projects and maintainers, it’s unusual for an application on Windows to have many dependencies. So in this respect the packaging tools functionality is very much in response to the needs of the underlying platform.

A system with the same sophistication for dependency resolution as the likes of Apt or Yum is simply just not as useful on Windows. Of course, that’s a separate argument from a system which provides a centralised catalogue of software a la Linux software repositories. That’s an area Windows is very much still playing catch-up on.

This depends on your system. On Debian-based OSes the packages are still in the package cache, so you can easily downgrade. There are other options which allow easy recovery from such bugs; they are most of the time not set up by default and still require some skill to solve your problem.

                                              I think we have different definitions of easy here. Typically such an approach would minimally involve various command-line invocations to downgrade the package(s), potentially various dependency packages, and relying on cached package installers which could be removed at any time is less than ideal. Given the upstream repositories usually don’t to my knowledge maintain older package versions, once the cache is cleaned, you’re going to be in trouble. The point I’d make is that if something goes wrong with package installation that breaks your system, on most Linux distributions the facilities to provide automated or simple rollback are fairly minimal.

                                              1. 1

                                                As the operating system is a single unified system, versus hundreds or even thousands of discrete packages sourced from different projects and maintainers

I doubt that Windows itself isn’t a modular system. The updater itself must also have some sort of dependency management. FreeBSD, as another unified OS, is currently working on a package management system for its base system.

                                                A system with the same sophistication for dependency resolution as the likes of Apt or Yum is simply just not as useful on Windows

Why not? Currently all software ships its dependencies on its own, and an updater has to be implemented in every piece of software. Maybe not with one big graph for all software, but with a graph for each installed program and with duplicate elimination.

                                                1.  

I doubt that Windows itself isn’t a modular system. The updater itself must also have some sort of dependency management. FreeBSD, as another unified OS, is currently working on a package management system for its base system.

                                                  You’re right, Windows itself is very modular these days, but the system used for managing those modules and their updates is independent of other installers (inc. MSI). There’s some logic to this, given the updates are distributed as a single cumulative bundle, and MS clearly wanted to design something that met Windows needs, not necessarily broader generalised package dependency handling requirements. The granularity is also probably wrong for a more general solution (it’s excessively granular).

On my system, there are around 14,600 discrete components, going off the WinSxS directory.

Why not? Currently every piece of software ships its dependencies on its own, and an updater has to be implemented in each application. Maybe not with one big graph for all software, but with a graph for each installed program and with duplicate elimination.

Several reasons. One is that most Windows software predominantly relies on Windows APIs which are already present, so there’s no need to install a multitude of libraries to provide the required APIs, as is often the case on Linux. They’re already there.

                                                  Where there are 3rd-party dependencies, they’re usually a small minority of the application size, and the fact that software on Windows is much more likely to be closed source means it’s harder to standardise on a given version of a library. So if you were to try and unbundle 3rd-party dependencies and have them installed by package manager from a central repository, you’d also need to handle multiple shared library versions in many cases.

                                                  That’s a soluble problem, but it’s complex, and it’s unclear if the extra complexity is worth it relative to the problem being solved. I suspect the actual space savings would be minimal for the vast majority of systems.

                                                  I’m not saying it’s a bad idea, just that it’s solving a problem I’d argue is far less significant than in *nix land. Again, all of this is independent of centralised package repositories, as we’re starting to see with winget, scoop, choco, etc …

                                            2. 1

                                              On Linux, it’s usually limited to one or more previous kernel versions, and that’s about it?

                                              https://documentation.suse.com/sles/11-SP4/html/SLES-all/cha-snapper.html

                                              By default Snapper and Btrfs on SUSE Linux Enterprise Server are set up to serve as an “undo tool” for system changes made with YaST and zypper. Before and after running a YaST module or zypper, a snapshot is created.

                                              1. 1

Excellent. As with ECC RAM, only those who are already expert enough to devise ways to do the task themselves are given the tools.

This doesn’t happen on mainstream, user-friendliness-oriented distros.

I do wonder about a Nix-based, highly usable distribution. All the tools are there to implement these protections, lacking only a general user interface.

                                                1. 1

I think that’s an unfair summary. Implementing this properly takes time, and few distros have even started to default to filesystems where this is possible. It’s coming to desktops too: https://fedoraproject.org/wiki/Changes/BtrfsWithFullSystemSnapshots

                                                  1. 1

                                                    Of course it’s coming.

                                                    I still think the criticisms are valid and help drive these technologies arriving for the common user.

                                                2. 1

                                                  That’s pretty cool. I hope it becomes more widely accessible on end-user distributions (I expect SLES is a pretty tiny minority of the overall desktop/laptop Linux userbase).

                                              2. 1

                                                bug

                                                It was a good old package conflict, wasn’t it? The normal way this happens is if you try to install a package from a foreign distro.

Different distros have different versions of packages, so unless the foreign package’s dependencies happen to line up with every installed package, the only way to install it is to uninstall everything that depends on a conflicting version of something, which can be quite a lot.

If so, I wouldn’t call it a “bug”, since that’s a term used for the software – the package manager itself, not its input. For user expectations, this means that bugs are fixable, whereas package conflicts (at least of the self-inflicted kind) are not. The software can only heuristically refuse to do stupid things.

                                              1. 16

I’m kind of wondering if the right way to think about this is not so much an issue of the number or size of packages that are dependencies, but the number of maintainers who are dependencies. Ultimately, whether two independent functions are part of the same package or two different ones maintained by the same person is a fairly shallow question. The micropackage approach is bad mainly in that it makes maintainership harder to understand.

                                                One thing I think both Elm and Go do right is that they don’t hide the maintainer’s name in the dependency; Go just does import by repository path, so you can tell by looking at your dependency list that e.g. all six of those packages are maintained by the same person. Elm denotes packages as user/repo; I’m not a fan of the fact that they tie their package manager to GitHub, but it at least doesn’t hide this.

                                                Almost every other language package manager does this wrong; when you do e.g. pip install foo, there is no indication whatsoever about who that package is coming from.

                                                With distro package managers like apt, it’s okay for these names to be unqualified since the whole repository is curated by the distro maintainers. But in the absence of curation maintainership should be explicit.

                                                1. 3

                                                  With distro package managers like apt, it’s okay for these names to be unqualified since the whole repository is curated by the distro maintainers.

                                                  I would say this is a problem even for distro package managers, at least for “universe”-like repositories. It’s pretty common for a package to disappear from one version of Ubuntu / Debian to the next because the maintainer disappeared and no one else picked it up. That being said, I agree with you in general.

                                                  1. 3

                                                    One thing about Go, you can use any Git host, not just GitHub, and it even works with Mercurial, SVN, etc.

                                                    1. 1

                                                      [maybe it’s] not so much an issue of the number or size of packages that are dependencies, but the number of maintainers who are dependencies.

I really like this idea. And it seems like something that would be very easy to add to existing package managers (e.g., changing the final output from “installed X packages in Y seconds” to “installed X packages from Y authors in Z seconds”).
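As a toy illustration of what that summary could look like, assuming Go-style import-by-repository-path names (all of the paths below are made up for the example):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Hypothetical dependency list; with import-by-repository-path names
        // the owner is visible as the first two path segments.
        deps := []string{
            "github.com/alice/httpmock",
            "github.com/alice/retry",
            "github.com/bob/yamlcfg",
            "gitlab.com/carol/metrics",
        }

        owners := map[string]bool{}
        for _, d := range deps {
            parts := strings.SplitN(d, "/", 3)
            if len(parts) >= 2 {
                owners[parts[0]+"/"+parts[1]] = true
            }
        }
        // Prints: installed 4 packages from 3 authors
        fmt.Printf("installed %d packages from %d authors\n", len(deps), len(owners))
    }
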

                                                      But I have a question: do you think the relevant number is the number of organizations that are maintainers or the number of (natural) persons who are maintainers? Your comment seemed to treat these as always being the same, but they are often (very) different. I can see arguments for either, so I’m interested in which you meant.

                                                      1. 1

                                                        I think it makes sense to treat organizations as a single maintainer.

                                                    1. 7

                                                      And best of all, he’s documented it all in detailed blog posts and nearly 50 videos uploaded to YouTube, sharing what he’s learned for others who might follow in his footsteps.

                                                      I actually worry about this. Andreas Kling (with Serenity), Handmade Hero, Bisqwit, etc. Lots of educational content that is in the hands of a single company.

                                                      1. 1

Did you also worry about Blockbuster having all the movies?

                                                        1. 3

                                                          It’s not common, but I have heard multiple accounts of YouTube terminating channels, without warning. One was this morning.

                                                          1. 2

I actually don’t get your point from that … unless you’re implying that a failure of YouTube / Google in the same vein as Blockbuster could lead to the permanent loss of content not stored elsewhere?

                                                            1. 2

                                                              I think the kind of cataclysm that would produce a situation where YouTube suddenly and without warning got completely deleted along with all of its backups would be the kind where a few channels getting deleted is the least of our worries. Maybe I’m just not forward-thinking enough but I have a hard time imagining any of us here outliving Alphabet without something comparable to the apocalypse happening.

And to the top-level comment, it’s not like YouTube owns the content in question. I seriously don’t understand what the problem is supposed to be. YouTube is just a means of transmission for video content from creators to their viewers. It isn’t, uh, I don’t know, holding that content hostage or whatever is being implied. I’m not even sure what the alternative is supposed to be. Some sort of peer-to-peer decentralized thing where everyone holds some subset of the thousands of petabytes of content on YouTube and shares the load of streaming the exabytes of content people watch per year? There aren’t enough people with high-speed internet (and the required storage) in the world for that to be feasible, and if there were, it’d experience incomparable downtimes and constant, massive data loss. Some kind of federated thing would require each instance to have funds that only Fortune 500 companies have, and massive amounts of content would be lost whenever one of these collapsed. I don’t get it.

                                                            2. 1

Blockbuster predates the DMCA.

                                                          1. 7

I appreciate satire as much as the next guy, and Rust champions can definitely be a little overbearing. But this isn’t satire. Absurdist, perhaps, but none of the code in there looks even remotely like a satire of Rust projects or code. More like a perversion for the sake of absurdity.

                                                            1. 13

                                                              This is - by the very definition of the word - satire.

                                                              1. 4

                                                                You are trying to deny @zaphar their interpretation of ‘satire’ by an appeal to authority. That authority does not exist. There is no agreed upon definition of ‘satire’ to appeal to. Different people consider different things satirical. It’s entirely reasonable to draw a distinction between an ‘absurdist’ piece of art and a ‘satirical’ piece of art and consider this an example of one, but not the other.

                                                                At most we can say that @zaphar is wrong in that this most certainly is considered satire by many people. The mistake that this ‘is not’ satire is possibly due to unnecessarily considering ‘absurdity’ and ‘satiricality’ as mutually exclusive, whereas I at least would argue that absurdity can be a means through which satirical effects can be achieved.

                                                                1. 4

                                                                  I’ll note that perhaps I’ve just missed the bits of Rust that this is intended to satirize. When I looked at the repo all I saw was a bunch of code that looked about as unrecognizable as Rust code as it could be. I literally couldn’t figure out what it was meant to be satirizing since it was so foreign. It was definitely absurd Rust I could see that.

                                                                  1. 3

                                                                    A coworker of mine responded (jokingly) to the repo with, “at least its rust. i will happily dedicate my computer to executing zero cost, semantically moving rust code” so I assume it’s at the very least a jab at that kind of attitude.

                                                                    1. 2

                                                                      In a code review I would send this back with a “Rewrite it so I can read it” comment and nothing else :-)

                                                                  2. 4

                                                                    Apologies for getting off-topic here but I actually really enjoyed parsing this out:

                                                                    Your own argument is fallacious. I have made no claim that a specific authority - be it Merriam-Webster or Oxford or Cambridge - can be said to have the most correct definition. At best, it can be said that I have appealed to English-speakers in general, who generate consensus in the form of media, which is then further documented by a source when the need arises. Dictionaries are one source that acts as an artifact of that consensus (albeit a laggy one). Definitions enter and leave them as consensus changes. They are snapshots which attempt to capture the intended meaning behind a word’s usage. It would be a mistake to consider the snapshots to be prescriptive. They are descriptive of the consensus.

Words have definitions. It’s the property by which we are capable of having this conversation in text right now. It’s remarkable that, despite your claim that there is no agreed-upon definition to appeal to for the meaning of a given word, all of this can occur. There is clearly a mechanism at play here that allows us to understand the words we are saying. It is by consensus. You are mistaken that there is no agreed-upon definition of “satire,” as there remains a set of artifacts that suggests otherwise. The fact that these sources are not in perfect alignment does not diminish the presence of this agreement. The consensus is there and represented in artifacts such as Wikipedia, dictionaries, essays, forum posts, and tweets. There are sufficient sources to justify the consensus and - by extension - a common definition.

                                                                    Therefore, I am not making an appeal to authority here, as only by majority rule are words granted definitions. Critically, also, this is not an appeal to popularity, as the definition of a word is a direct manifestation of the majority consensus. It cannot be said that 99 people are using a word “wrong” and the remaining 1 is correct, as it is only by the majority that words are granted their accepted definitions in the first place. The “truth” of a definition cannot be mistaken by a group. The truth of one is granted by it.

                                                                    The general consensus for “satire” is that it can involve exaggeration or ridicule to criticize foolish behavior. Oftentimes it is used in the context of politics, but not always, and this is not required.

                                                                    In summary, I am not denying zaphar’s interpretation using an appeal to authority. I am denying zaphar’s interpretation using majority consensus.

                                                                    1. 2

                                                                      I normally don’t keep these conversations going but this particular one is sort of fun :-D

                                                                      I recognize this as an attempt at satire certainly. It’s just that I’ve personally never encountered the foolish behavior being satirized which weakens the result down to just absurdity rather than effective satire for me. I’ll admit that I’ve almost, if not in fact, entirely worked on or participated in Rust code where none of the supposed satirized behavior existed.

                                                                      In a sense, I think you could say the joke flew entirely over my head.

                                                                2. 3

                                                                  Well it’s not even so much a satire about real Rust code as it’s written by real Rust developers as it is a joke about the attitudes of those in the Rust ecosystem. It’s pretty obvious if you read the README who and what it’s making fun of and why. The code isn’t the focus, although the code is frankly perfectly bad in ways that exacerbate Rust’s worst qualities (its unreadability, its endless and ever-growing pile of syntax and sugar etc, nightly compiler features used by half of the crates you’ll find, sticking metaprogramming where it doesn’t belong, and so on) and what’s stopping big fans of Rust from seeing this are the very attitudes being satirized in the README.

                                                                  Let’s look at an example. Why does the project build so slowly? Well, it’s because Rust’s compiler builds things slowly in general, but more importantly it’s because the project has well over a thousand dependencies. Here’s the thing: the writer didn’t add a thousand dependencies to cargo.toml. If you go check the cargo.toml you’ll see it actually has 84 dependencies. That’s still a lot, but the writer is making fun of the fact that each dependency pulls in on average over a dozen further dependencies. And since Rust takes so long building one project, building all 1061 dependencies takes hours. All of these are complaints people have leveled against Rust in the past, but packaged into a rather absurd but real example. The writer didn’t manually drag in a thousand dependencies to make the project build slowly, they added 84 and let cargo do its thing, dragging in a thousand dependencies all on its own. Then they let that daisy-chain into Rust’s already slow build times to make the project literally unusable.

                                                                  But the joke truly comes in the README where the writer says:

                                                                  🚀 This project is very minimal, it only requires 1061 crates 🚀

                                                                  and

                                                                  Due to the lightweightness of rust(🚀), unlike node_modules being fairly large for few dependencies, rust(🚀) manages compile caches efficiently and stores them to storage to save compile times! Just 33G target folder, the compile time is around 2 hours and 30 minutes on my mac on release mode

                                                                  And here it’s not only making fun of how Rust performs in this case (because it’s very easy to clap back that of course the compiler performs poorly when you intentionally craft your project to make that happen), but taking observations that have been made about Rust projects in general and making them more obvious. Then the README mimics the language and rhetoric that is usually used to argue against those observations but now that there aren’t a hundred dependencies but instead a thousand, and now that it doesn’t take five minutes to build a trivial project but instead several hours, that rhetoric is absurd on its face.

                                                                  That’s just one of the many jokes and critiques being made here.

                                                                  1. 2

                                                                    I guess I just haven’t interacted with the projects that have this explosion of crates so I don’t get the joke. Which is fine. I’m sure there are Rust projects that exhibit the excesses being made fun of here. I just haven’t experienced them.

                                                                    1. 1

                                                                      Yeah, this is funny to me but it’s — and I don’t know how else to word this — very, very online. Some of it only makes sense in the context of the eternal nonsense culture war over Rust that’s continually being fought in every corner of the internet.

                                                                1. 4

                                                                  Personally, I’ve never liked the advice that writing obvious comments is bad practice—probably because I write obvious comments all the time.

This is all fun and games while it’s correct, but when you run across a comment that claims the opposite of what the code does, what now?

                                                                  // Add a horizontal scroll bar
JScrollBar newScrollBar = new JScrollBar(JScrollBar.VERTICAL);
                                                                  

                                                                  What’s the correct behaviour? Should you fix the comment? Should you fix the code? Should you leave it alone if you’re not directly affected by it? With just the code the problem isn’t really there since there is one source of truth. It might not be correct but at least there’s no contradictions because the code does what the code says it does.

The problem with comments is that they’re non-executable: the person changing the code has to change the code, otherwise the change will not happen. But will they remember to change the comment? Maybe, maybe not. This isn’t even a hypothetical case; I’ve seen cases where the comment claimed the exact opposite behaviour of what the code did.

                                                                  1. 5

                                                                    What’s the correct behaviour? Should you fix the comment? Should you fix the code? Should you leave it alone if you’re not directly affected by it? With just the code the problem isn’t really there since there is one source of truth. It might not be correct but at least there’s no contradictions because the code does what the code says it does.

                                                                    I’ve found those cases incredibly useful, because they tell me the code diverged. Something changed; why? Does other code assume the scroll bar is horizontal? Was that code switched when this snippet was switched to vertical? Does this code/comment divergence help me track down the bug I’m searching for?

                                                                    Without the comment I wouldn’t know that something interesting happened and wouldn’t even think to git blame it, but now I know that there’s historical information here.

                                                                    1. 3

                                                                      I’ve found those cases incredibly useful, because they tell me the code diverged. Something changed; why?

                                                                      Usually, because I slipped as I was typing the comment. I use the wrong word all the time when speaking and writing, and need either some testing or a careful reader to notice.

                                                                      There’s nothing insightful to be gained from my thinkos and brain farts: they’re also in my multi-paragraph expository comments, but at least there you usually have enough context to realize “oh, Ori is a sloppy writer, and clearly meant this”.

                                                                      1. 2

The thing is: for the most part, in 99% of cases, this is because the code was changed but the comment wasn’t (very much in line with Occam’s razor), so what you get out of this is wasting your time with needless investigation of something that’s not actually a problem, and then fixing the comment or just removing it outright.

                                                                        1. 2

                                                                          If you are certain the code is right and know the comment is wrong, then take 30 seconds to fix the comment to match the code and move on with your life. Not a big deal.

If you are uncertain whether the code is wrong or the comment is wrong, then that indicates a hole in your understanding. That hole would exist whether or not the comment was there. So that’s a useful indicator that you should investigate further.

                                                                          1. 1

                                                                            The comment makes understanding that line actively more difficult. If you’re reading the code for the purpose of coming to some initial understanding of a codebase then you’re going in with holes in your understanding and this inconsistent comment creates yet another hole.

                                                                            You’re suggesting that if I go to a random Github repo I’m not familiar with, start trying to read the code, and find a comment that is blatantly and objectively inconsistent with the code it’s attached to, that’s not actually a problem because if I just understood the codebase I’d know which to trust.

                                                                            You’re onto something there. That’s exactly the point: if context makes it clear which to trust, then you don’t need both, you just need the (correct) code. If we can make the assumption the misleading comment won’t matter because anyone reading it will know that the comment is wrong, then what is the comment doing there at all?

                                                                    1. 1

                                                                      This is exciting! The biggest thing that has put me off Go so far has been the lack of generics. (I remember the days before generics made it into Java and I’m not too eager to re-live that experience)

                                                                      I understand the rationale for not including generics from the start, but as far as I can tell it leads to a lot of boilerplate and unsafe code. Especially in a language with good support for first-class functions, not having generics always felt like a missed opportunity.

                                                                      1. 6

                                                                        I’ve been writing Go professionally for a few years now, and lack of generics doesn’t result in “unsafe” code, at least not in the Go sense of memory-unsafe. It can result in runtime type assertion failures if interface{} types are used and converted at runtime. It can also result in boilerplate, with repeated (type safe) functions like ContainsInt and ContainsString instead of Contains[T] for slices. However, that happens less often than you’d think, as (unlike early Java), the built-in slice and map types are “generic” already (parameterized by type) … you just can’t add user-defined generic types or functions. This gets most people surprisingly far.
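To make that concrete, here’s a minimal sketch of the kind of duplication being described, next to the type-parameter version; the names mirror the hypothetical ContainsInt / ContainsString / Contains[T] above and aren’t from any real library:

    package main

    import "fmt"

    // Pre-generics style: one type-safe copy of the function per element type.
    func ContainsInt(s []int, v int) bool {
        for _, x := range s {
            if x == v {
                return true
            }
        }
        return false
    }

    func ContainsString(s []string, v string) bool {
        for _, x := range s {
            if x == v {
                return true
            }
        }
        return false
    }

    // With type parameters (Go 1.18+): one function covers every comparable type.
    func Contains[T comparable](s []T, v T) bool {
        for _, x := range s {
            if x == v {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(ContainsInt([]int{1, 2, 3}, 2))     // true
        fmt.Println(ContainsString([]string{"a"}, "b")) // false
        fmt.Println(Contains([]int{1, 2, 3}, 3))        // true
    }
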

                                                                        That said, I’m cautiously optimistic about generics. They seem to be taking a typically Go-like caution with the features, not overhauling the language or stdlib with the introduction of generics. Still, I suspect there will be a raft of new “functional programming” libraries and other such things at first which won’t really fit in the Go spirit. Then hopefully it’ll settle down.

                                                                        1. 3

I hope they don’t take too long introducing generics to the standard library. They risk a number of third-party solutions filling the void. Best case, it will be like Apache Commons. Worst case, like Perl OOP. (I guess Rust async frameworks are somewhere in between.)

                                                                          1. 2

                                                                            it’s not entirely clear where you want generics to go in the standard library. the only real place i can think of is the sort library, and last thing i knew that was already being worked on. the go standard library isn’t filled with collections types or whatever like rust or c++. there really are only one or two files in the entire standard library where generics even could be added, much less would be useful. there’s a reason they hadn’t been added yet, and it’s because you don’t need them 99.9% of the time if you’re programming in go idiomatically, which the standard library generally does.

                                                                            my biggest fear with the introduction of generics is what is articulated in the parent of your comment, which is that everyone starts burying all of their code in a thousand layers of generic gibberish. the last thing anyone should want is for go to turn into c++. the fact of the matter is that at the moment if you find yourself “needing” generics in your go project, you’re probably doing something wrong, and you’ll either need to restructure your project in a better way, or, in the very few cases when you can’t do that (because you actually do need generics) you may have to hack around it. when the introduction of generics goes mainstream, you won’t ever have to hack around their absence when you really need them, but that doesn’t mean you shouldn’t refactor around their unnecessary inclusion. more often than not, reflexive use of generics is a matter of premature optimization, which your code (particularly in go) will be far better off without.

                                                                            i think a lot of people have this mistaken impression that all the philosophical “a bit of copying is better than a bit of dependency” etc etc stuff in the go community arose as some sort of coping mechanism for the lack of several important features. the opposite is true in reality: these “missing features” were deliberately left out for the purpose of incentivizing what the language’s developers believed to be a better style of programming. whether the costs incurred are worth the benefits is up for debate. at the very least it appears to be widely understood that some annoyances arise from that, particularly boilerplate, and that’s where generics and the various error handling proposals that aren’t set in stone yet have come from.

                                                                            here’s what that doesn’t mean:

                                                                            • best practices have meaningfully changed
                                                                            • every function needs type parameters
                                                                            • every file needs type parameters
                                                                            • every project needs type parameters

                                                                            if go as an experiment has taught us anything at all it’s that in the vast majority of cases you absolutely can go without all of these things we’ve convinced ourselves we need. if you find yourself thinking you need to use generics for a particular problem, the fact that you now can should not make you any less apt to pause and consider whether there really isn’t any other way to model the solution to your problem.

                                                                        2. 1

I will say that Java’s solution to everything was inheritance + downcasting in the days before generics, and I think people mistakenly ascribe all of the unpleasantness of inheritance to generics when thinking of the bad old days of Java. Go lacks generics and that’s not great, but it still feels a whole lot better than Java 1.0. Notably, Go also has type inference, first-class functions, and structural subtyping, which contribute a lot to the overall experience relative to Java 1.0.

                                                                          1. 1

                                                                            I’m mostly leery of generics, because I worry that it will lead to code which is hard to follow and reason about.

                                                                          1. 7

                                                                            The tab style changed? I hadn’t even noticed.

                                                                            1. 4

Tabs other than the current tab don’t render borders. It flattens the design and honestly looks better as far as I’m concerned, but every time a Firefox update comes out this side of the internet collectively freaks out about whatever changes have been made.

                                                                              1. 4

                                                                                Oh… Ya looks fine to me.

                                                                              2. 2

                                                                                Likewise. Sounds like bikeshedding to me. The only feature I’d like in Firefox is something like Opera workspaces. Trying to imitate them with bookmarks and windows is not up to scratch, and re-opening multiple windows can be flaky.

                                                                              1. 7

                                                                                I agree with the comment; this is pretty light on how the author actually uses it.

                                                                                1. 11

                                                                                  Yeah, it’s a really weird choice to conclude what is ultimately no more than a tutorial on setting up syntax highlighting in nano with a comment about how you’ve proven nano is as capable an editor as vim or emacs. It is and has for years been beyond me how nano could ever be useful outside of making trivial config file changes in a system you don’t have root access on – these days it seems more ubiquitous than vim or ed. I was hoping this article would clear that up.

                                                                                  Then again, maybe there’s nothing to clear up; maybe there really are people who have no further requirements for an editor than being able to type text and save it to a file. I don’t know.

                                                                                  1. 3

                                                                                    Some people can work perfectly fine with a minimal editor. For example Linus Torvalds with MicroEMACS.

                                                                                    1. 7

                                                                                      When I learned C, I decided to only use vi (not vim) without colors and without any custom config.

                                                                                      It’s a little weird at first, but the brain adapts (quickly) and recognizes the patterns. Now I don’t care which editor is on a system, or how it’s formatted on the web or in an e-mail.

                                                                                      1. 4

                                                                                        Instead of vi, I use vis. But, in there, I do the same: I disable the syntax highlight, and I only use the default settings of the editor.

I read somewhere, at some point, that working with syntax highlighting disabled makes the programmer more attentive to the code, and consequently they make fewer mistakes.

I never actually measured it, but I instinctively feel that I read the code base more carefully, and that therefore I have learned the code base I work on better than before.

I also started to appreciate the [Open|Net]BSD code style, because it helps to work in this style, and to use default UNIX tools to find the parts of the code I am interested in.

                                                                                        In other words, it leverages UNIX as IDE.

                                                                                        1. 2

                                                                                          I am thinking about switching from vim to vi + tmux for Go.

                                                                                          So far the most challenging was:

                                                                                          • block edit;
                                                                                          • go to definition;
                                                                                          • copy/paste;

                                                                                          Especially copy/paste. It turns out I heavily relied on yanking from one vim tab to another.

                                                                                          1. 1

                                                                                            Which vi? nvi?

                                                                                            1. 1

                                                                                              The version that came with the OS. Seems like nvi or at least based on nvi.

                                                                                        2. 2

It’s ubiquitous because it’s just what I’d expect from a Debian system that some non-vim professional might have to administer via the CLI. And for anything that isn’t changing configs on remote systems / rescue mode, I’ve got an IDE, or Kate if it’s supposed to be simpler.

                                                                                      1. 7

The takeaway shouldn’t be “programming languages are for REAL PROGRAMMERS like Mel, who know what a monad is” - it’s that Go is such a dull blade it hurts the productivity of workman programmers trying to get things done; i.e. constantly typing if err != nil.

If you want an example of how you can do the good PLT shit in a mainstream, pragmatic, workman’s language, I think C# is a great example. The average enterprisey programmer doesn’t need to understand the minutiae of generics, but a List<string> is far better than ArrayList and dealing with covariance. Likewise, there’s unsafe, but you’ll never need to know it exists until you need it, nor will not knowing it harm you unless you’re working on systems code. It handles the progressive disclosure of concepts well.

                                                                                        To elaborate further, Java shares a lot of the paternalistic “you’ll cut yourself on this blade” philosophy, and it ends up making the language even more confusing:

                                                                                        • Operator overloading is hard. Now you have to explain the difference between == and .equals.
                                                                                        • Value types are hard. Now you have to explain the difference between Integer and int.

                                                                                        Your simple language is no longer simple to explain. Then you get the other fun stuff like erasure-based generics and lack of unsigned. C# plucked the low hanging fruit on the “what’s wrong with Java” tree and dealt with the hard stuff later; to its benefit. All because people like Syme have a better outlook of the average programmer than Pike.

                                                                                        1. 3

                                                                                          Most Go programmers don’t really like the design philosophy of Java a whole lot in my experience, so there won’t be any disagreement on most of the actual points in your comment. It’s just confusing how you think this relates to Go beyond you claiming they’re both designed to limit footguns and then listing several Java-specific footguns that don’t apply to Go. You’ve demonstrated Java is a painful language to use (it is, you’re right, most Go programmers will agree; its approach to object orientation in particular is nightmarish enough that I come very close to writing “object orientation was a mistake” blog posts every time I’m forced to use Java) but you’ve then claimed this somehow means Go is too. None of the problems you list with Java have any particular analogues in Go. This is a very confusing comment.

                                                                                          I’m sorry you happen to like typing catch more than != nil but I’m happy I get to be the first to tell you that it’s only one extra character. It’s never been demonstrated to me exactly why typing err != nil every once in a while is actually all that traumatic. It’s comparable in code footprint to typing out try/catch, but nobody considers those horribly annoying, terrible, awful boilerplate that destroy your productivity as a programmer.

                                                                                          And as near as I can tell the idea that typing err != nil is a huge drain on programmer productivity is the only thing you actually say about the Go programming language here.
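For anyone who hasn’t written Go, here’s a small self-contained sketch of the pattern in question (the Config type and the file name are made up for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Config is a hypothetical settings struct, used only for illustration.
    type Config struct {
        Port int `json:"port"`
    }

    // loadConfig shows the explicit-check style: each fallible call is followed
    // by an `if err != nil` block that wraps and returns the error, rather than
    // letting an exception propagate up the stack.
    func loadConfig(path string) (*Config, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading %s: %w", path, err)
        }
        var cfg Config
        if err := json.Unmarshal(data, &cfg); err != nil {
            return nil, fmt.Errorf("parsing %s: %w", path, err)
        }
        return &cfg, nil
    }

    func main() {
        if _, err := loadConfig("settings.json"); err != nil {
            fmt.Println("error:", err)
        }
    }
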

                                                                                          1. 3

                                                                                            I’m sorry you happen to like typing catch more than != nil but I’m happy I get to be the first to tell you that it’s only one extra character. It’s never been demonstrated to me exactly why typing err != nil every once in a while is actually all that traumatic. It’s comparable in code footprint to typing out try/catch, but nobody considers those horribly annoying, terrible, awful boilerplate that destroy your productivity as a programmer.

                                                                                            I don’t want exceptions, I want discriminated unions and pattern matching to make it less tedious. I’m not a smart person that can tell you about category theory, but I know it’s useful.

                                                                                            1. 1

                                                                                              Oh, okay, well that’s valid, I suppose, although in the general use-case for pattern matching in most languages it’s equivalent to sugared-over switches. I only referred to try-throw-catch because the only language your comment expressed admiration for was C#, which to the best of my knowledge uses try-catch. Which raises the question of why you like C# and don’t like Go if the only criticism you actually brought up of Go is in relation to something you don’t seem to disagree is equivalent to the way it’s done in a language you claim to like.

                                                                                              1. 1

                                                                                                I have many qualms with C# (mostly related to its current direction), but I was mentioning it in terms of a language intended for median programmers that trusts them with complex tools through making the complexity manageable.

                                                                                            2. 1

With exceptions you can let them propagate up the call stack automatically. With Go you need to have error-handling boilerplate injected at nearly every point in your code. That’s more than a single-character difference.

                                                                                          1. 8

                                                                                            So a slightly longer version of http://www.golang.sucks/ ?

                                                                                            1. 12

                                                                                              The entire existence of a .sucks TLD is already …ugh… And it costs a whopping $199 to register one.

                                                                                              Imagine being such an unimaginative boring twat that you can’t think of anything better to do with $199 than to register a golang.sucks domain to list a few links.

                                                                                              That https://www.get.sucks/ is atrocious btw. It literally says right there on the front page:

                                                                                              Protect your identity online so that no one can defame your name.

                                                                                              It’s not even “wink-wink would be a shame if anyone would register yourcompany.sucks to spread bullshit nudge-nudge”. Nope, right there on the front page: “buy our service now so you can stop illegal defamation”.

                                                                                              1. 5

                                                                                                This was a minor controversy a while back. Vox Populi was charging trademark holders thousands of dollars to register their .sucks.

                                                                                                1. 3

                                                                                                  Imagine being such an unimaginative boring twat that you can’t think of anything better to do with $199 than to register a golang.sucks domain to list a few links.

                                                                                                  Oh yeah I know, that’s why I posted this link. TFA is just a shallow dismissal of Golang, so oft-repeated that there’s even a .sucks domain that rounds all of the criticism up. The reason it gets upvotes is because people here dislike Go, which is fine, but reminds me of the things I find odious about Lobsters. Biases here are so strong that trite content gets upvoted while the community frequently praises itself for being a good discussion site. 🤷 Language criticisms can be done well, but this isn’t it.

                                                                                                  1. 7

                                                                                                    People can’t downvote things they disagree with unless they actively break site rules, so the number of upvotes doesn’t tell you what the general community feels. Two comments dunking on the article have more upvotes than the article itself, which I see as a sign that the general community isn’t a huge fan of this kind of content.

                                                                                                    1. 4

                                                                                                      Eh, outside of tallying up points, I think anyone who has used this site can agree that literally any article relating to “I did this cool thing in Go” or that even just has a neutral tone with regard to Go gets about a thousand replies about how Go is a bad language. This thread is definitely the exception. It’s probably >75% of Go articles on this site that turn into endless “well, actually, Rust is good and if you like Go you’re an idiot” threads. It feels like this article in particular was just obviously obnoxious on its face, so nobody is really bothering to defend it.

                                                                                                      1. 2

There’s no doubt here. If you take a look at the distribution of posts by inactive users, i.e. folks who’ve exited the site, most are folks who posted threads with tags involving Go and C. It’s obvious to anyone who’s looking.

                                                                                                      2. 1

                                                                                                        Two comments dunking on the article have more upvotes than the article itself, which I see as a sign that the general community isn’t a huge fan of this kind of content.

                                                                                                        Yeah fair enough. I guess this is a consequence of a no-downvote policy.

(Though I will say, +14 on the score and +19 (at time of this writing of course) on the top-level comment means that almost 42.4% of the “viewers” on this thread (assuming viewers are bifurcated between both options, which certainly is overly optimistic) does not make me feel good (wait until this changes and this entire parenthetical is useless lol).)

                                                                                                        1. 4

                                                                                                          I’ve seen much worse articles get upvoted much higher.

                                                                                                          And I mean, it’s not like all the points in this article are necessarily bad either: Go is rather verbose, the error checks are controversial for a good reason, the lack of generics has been cited as a problem even by the Go authors going back over a decade ago, dependency management was an issue (but now solved). This is all true, and all fair points to criticize Go on.

It’s just that the author doesn’t seem to understand why things are the way they are, what trade-offs are involved, and why other approaches weren’t chosen. Plus, there’s some nonsense mixed into it as well (Go doesn’t allow you to “abstract the details into types and provide encapsulation” … ehm, yes it does). Plus, at this point the horse is long dead and buried, and the sixth-generation descendants of the horse are being beaten.

                                                                                                          It all makes it rather tiresome.

                                                                                                1. 15

                                                                                                  There should be formal semantics for the borrow checker.

                                                                                                  Rust’s module system seems overly complex for the benefit it provides.

                                                                                                  Stop releasing every six weeks. Feels like a treadmill.

                                                                                                  The operator overload for assignment requires generating a mutable reference, which makes some useful assignment scenarios difficult or impossible…not that I have a better suggestion.

                                                                                                  A lot of things should be in the standard library and not separate crates.

Some of the “standard” crates are more difficult to use than they should be. I still can’t figure out how to embed something that implements the RNG trait in a struct.

                                                                                                  Async is a giant tar pit. The immense complexity it adds doesn’t seem to be worth it, IMHO.

                                                                                                  Add varargs and default argument values.

                                                                                                  1. 12

                                                                                                    I genuinely do not understand what people find complex about the module system. It’s literally just “we have a tree of namespaces”.

                                                                                                    1. 8

                                                                                                      “We have a tree of namespaces. Depending on how you declare it the namespace names a file or it doesn’t. Namespaces nest, but you need to be explicit about importing from outer namespaces. Also, there’s crates which are another level of namespacing with workspaces.”

                                                                                                      Versus something like Python: There is one namespace per file.

                                                                                                      (Python does let you write custom importers and such but that’s truly deep magic that is extremely rarely used.)

                                                                                                      I’m not saying there aren’t benefits to the way Rust does it. I’m saying I don’t feel like the juice is worth the squeeze.

                                                                                                      EDIT: @kornel said it better: https://lobste.rs/s/j7zv69/if_you_could_re_design_rust_from_scratch#c_3hsii6

                                                                                                      1. 5

I mean … it feels like this takes 30 minutes to understand. Maybe the problem is the docs? Or maybe it just already mapped to my pre-existing mental model of namespaces, but this was one of the least surprising parts of Rust to me.

                                                                                                        Depending on how you declare it the namespace names a file or it doesn’t.

                                                                                                        New file means a new namespace (module), new namespace (module) doesn’t mean a new file.

                                                                                                        1. 4

I mean … it feels like this takes 30 minutes to understand. Maybe the problem is the docs? Or maybe it just already mapped to my pre-existing mental model of namespaces, but this was one of the least surprising parts of Rust to me.

                                                                                                          It was the opposite for me, for whatever reason; it feels like there’s active friction between my mental model of namespaces and the way Rust does it. It’s weird.

                                                                                                          You know, I kinda got the same mental friction feeling with namespaces in Tcl. I couldn’t tell you why. Maybe I just hate nested namespaces…

                                                                                                          1. 2

I’ve heard over and over and over again from beginners that the docs do a notably bad job of communicating how it works, in particular the ones that are easiest to get your hands on as a beginner (the Rust book and Rust by Example). They deal almost exclusively with submodules within a file (i.e. mod {}), since multiple interrelated files are hard to show in the text, playground example, text, playground example idiom they decided to use.

When they do briefly try to explain how the external file / directory thing works, they say something like “you used to need a file named mod.rs in another directory but now in Rust 2018 you can just make a file named (the name of the module).rs”, which is a really poor explanation of how that works and is also literally incorrect. You can go without mod.rs, but if you want to arrange your code into a directory structure you still need mod.rs. There have been issues on the GitHub for the Rust book about making the explanation coherent (or, more trivially, making it actually true), but the writers couldn’t comprehend that it isn’t immediately intuitive to beginners and have refused to make very basic changes, like having it just say something along the lines of “when you write mod foo, the compiler looks in the current directory for either foo.rs or foo/mod.rs”. A lot of the problem here is the mod.rs -> modname.rs addition: it’s an intuitive quality-of-life improvement for people already familiar with the module system, but starting from no understanding of it, it makes things much harder for newbies to grasp.
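
For illustration, a minimal sketch of what that lookup rule looks like on disk (file and function names are made up):

    // src/main.rs
    mod foo; // the compiler loads src/foo.rs or src/foo/mod.rs as module `foo`

    fn main() {
        foo::hello();
    }

    // src/foo.rs (or, equivalently, src/foo/mod.rs)
    pub fn hello() {
        println!("hello from foo");
    }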

                                                                                                          2. 5

                                                                                                            Hmm, I feel like the following set of statements covers the way the module system works:

                                                                                                            • We have a tree of namespaces, which is called a crate
                                                                                                            • Declaring a module…
                                                                                                              • …with just a name refers to a file in a defined location relative to the one containing the declaration
                                                                                                              • …with a set of curly braces refers to the content of those curly braces
                                                                                                            • You have to explicitly import anything from outside the current module (file or mod {} block)

                                                                                                            In practice, modules are almost always declared in separate files except for test modules, so it ends up being “there is one namespace per file” most of the time anyway.

                                                                                                            I don’t really see what about that is all that complicated.
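
A tiny single-file illustration of those rules (names are made up):

    mod outer {
        pub const ANSWER: u32 = 42;

        pub mod inner {
            // Nesting doesn't grant implicit access: items from the outer
            // module still have to be reached by an explicit path (or `use`).
            pub fn answer() -> u32 {
                super::ANSWER
            }
        }
    }

    fn main() {
        // From the crate root, walk down the tree of namespaces.
        println!("{}", outer::inner::answer());
    }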

                                                                                                          3. 6

As someone who just dabbles with Rust, it still confuses me. I know I’d get it if I used it more consistently, but for whatever reason it just isn’t intuitive to me.

For me, I think the largest problem is that it’s kind of the worst of both worlds: it’s neither an entirely syntactic construct nor purely filesystem based. Rather, it requires both annotating files in certain ways and places and also putting them in certain places in the file system.

By contrast, Python and JavaScript lean more heavily on the filesystem. You put code here and you just import it by specifying the relative file path there.

                                                                                                            On the other end of the spectrum you have Elixir, where it doesn’t matter where you put your files. You configure your project to look in “lib”, and it will recursively load up any file ending in .ex, read the names of the modules defined in there, and determine the dependency graph among them. As a developer I pop open a new text file anywhere in my project, type defmodule Foo, and know that any other module anywhere can simply, e.g., import Foo. For my money, Elixir has the most intuitive system out there.

Bringing it back to Rust, it’s like, if I have to put these files specifically right here, why do I need any further annotation in my code to use those modules? I know they’re there, the compiler knows they’re there, shouldn’t that be enough? Or conversely, if I’m naming this module, then why do I have to put it anywhere in particular? Shouldn’t the compiler know it by name, and then shouldn’t I be able to use it anywhere?

                                                                                                            I’m also not too familiar with C or C++ which is what it seems to be based on. I get that there’s this ambient sense of compilation units, and using a module is almost like a fancy macro that text substitutes this other file into this one, but that’s not really my mental model of how compilation has to work.

                                                                                                            1. 1

                                                                                                              Hey, thanks, this is some interesting food for thought!


                                                                                                              I’m also not too familiar with C or C++ which is what it seems to be based on.

I think they’re actually based on ML modules. They’re not really similar to C/C++… I’d actually describe them as more similar to Python than to C/C++ (but somewhere in the middle between the two).

                                                                                                              and using a module is almost like a fancy macro that text substitutes this other file into this one,

I think the mod module_name; syntax is actually exactly a fancy macro that does the equivalent of text substitution (up to error messages and line numbers). Of course it substitutes into the mod module_name { module_src } form, so module_src is still wrapped in a module.
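
To make that concrete, a rough sketch of the expansion (hypothetical module name):

    // Writing `mod greetings;` in src/main.rs behaves (up to error locations
    // and line numbers) roughly like pasting src/greetings.rs inline:
    mod greetings {
        pub fn hello() {
            println!("hello");
        }
    }

    fn main() {
        greetings::hello();
    }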

                                                                                                            2. 8

                                                                                                              Rust’s module model conceptually is very simple. The problem is that it’s different from what other languages do, and the difference is subtle, so it just surprises new users that it doesn’t work the way they imagine it would.

                                                                                                              Being different, but not significantly better, makes it hard to justify learning yet another solution.

                                                                                                              1. 2

Do I need to declare my new mod in main.rs or in lib.rs? What about tests? Why am I being warned about unused code here, when I use it? Why can I import this thing here but not elsewhere?

I think all the explicit declaration stuff is really unnerving coming from Python’s “if there’s a file there you can import it” strategy. Though I’m more comfortable with it now, I still wouldn’t be confident answering questions about its rules.

                                                                                                              2. 9

                                                                                                                What benefit is there to releasing less often?

                                                                                                                1. 11

                                                                                                                  Another user on here (forgive me, I can’t remember who) said it well: if I cut my pizza into 12 slices or 36 slices, it’s the same amount of pizza but one takes more effort to eat.

                                                                                                                  Every six weeks I have to read release notes, decide if what’s changed matters to me, if what counts as “idiomatic” is different now, etc. 90% of the changes will be inconsequential, but I still gotta check.

Bigger, less frequent releases give me the changes in a more digestible form.

                                                                                                                  Note that this is purely a matter of opinion: obviously a lot of people like the more frequent releases, but the frequent release schedule is a common complaint from more than just me.

                                                                                                                  1. 3

                                                                                                                    This would be purely aesthetic, but would bundling release notes together and publishing those every 2 or 3 releases help?

                                                                                                                    1. 8

                                                                                                                      Rust tried to do it with the “Edition Guide” for 2018 which — confusingly — was not actually describing features exclusive to the new 2018 parsing mode, but was a summary of the previous couple of years of small Rust releases.

The big edition guide freaked some people out, because it gave the impression that Rust had suddenly changed a lot of things and that there were two different Rusts now. I think Rust is damned here no matter what it does.

                                                                                                                2. 2

Not sure what issue you’ve hit with embedding something that implements the Rng trait in a struct. Here’s an example that does just that without issue.
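
Roughly something like this (a sketch assuming the rand 0.8 API; the names are illustrative):

    use std::cell::RefCell;
    use rand::{rngs::StdRng, Rng, SeedableRng};

    struct CharacterMaker<R: Rng> {
        rng: RefCell<R>,
    }

    impl<R: Rng> CharacterMaker<R> {
        // Callable through a shared reference; the generator's internal
        // state change is hidden behind the RefCell.
        fn make(&self) -> u8 {
            let mut rng = self.rng.borrow_mut();
            (0..3).map(|_| rng.gen_range(1..=6)).sum() // e.g. roll 3d6
        }
    }

    fn main() {
        let maker = CharacterMaker { rng: RefCell::new(StdRng::seed_from_u64(42)) };
        println!("strength: {}", maker.make());
    }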

                                                                                                                  1. 1

                                                                                                                    Replying again just for future reference.

                                                                                                                    I don’t remember exactly what I was doing but I ended up running into this:

                                                                                                                    for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
                                                                                                                    

                                                                                                                    Point is, I got to that point trying to have an Rng in a struct and gave up. :)

My solution was to put it in a Box, but that didn’t work for one of the Rng traits (whichever one includes the seed functions), which was the one I wanted.

                                                                                                                    Either way, I obviously need to do more research. Thanks.
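
If the trait with the seed functions is SeedableRng, that one genuinely can’t be made into a trait object (its constructors return Self), so it won’t go behind a Box<dyn …>. The core RngCore trait is object-safe though, so boxing that should work; a hedged sketch, again assuming the rand 0.8 API:

    use rand::{rngs::StdRng, Rng, RngCore, SeedableRng};

    struct Dice {
        // RngCore is object-safe, so it can live behind a Box as a trait
        // object; SeedableRng cannot, so seeding has to happen before boxing.
        rng: Box<dyn RngCore>,
    }

    impl Dice {
        fn roll(&mut self) -> u8 {
            self.rng.gen_range(1..=6)
        }
    }

    fn main() {
        let mut dice = Dice { rng: Box::new(StdRng::seed_from_u64(7)) };
        println!("{}", dice.roll());
    }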

                                                                                                                    1. 1

                                                                                                                      Thank you, I appreciate that. My problem boils down to not knowing when to use Box and when to use Cell, apparently.

                                                                                                                      1. 3

Box is an owned pointer; despite being featured so prominently, it doesn’t have many uses. It’s basically good for:

                                                                                                                        • Making unsized things (typically trait objects) sized
• Making recursive structs (otherwise they’d have infinite size; see the sketch after this list)
                                                                                                                        • Efficiency (moving big values off of the stack)
                                                                                                                        • C ffi
                                                                                                                        • (Probably a few things I forgot, but the above should be the common cases)
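
A classic tiny example of the recursive-struct case (purely illustrative):

    // Without the Box, `List` would contain itself directly and have
    // infinite size; the Box turns that into a fixed-size owned pointer.
    enum List {
        Cons(i32, Box<List>),
        Nil,
    }

    fn main() {
        let list = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
        if let List::Cons(head, _) = list {
            println!("head = {}", head);
        }
    }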

RefCell is a single-threaded rw-lock, except it panics where a lock would block, because blocking on a single-threaded lock would always be a deadlock. Its purpose in life is to move the borrow checker’s uniqueness checks from compile time to runtime.

In this case, you don’t really need either. We can just modify the example so that make takes a mutable reference, and get rid of the RefCell. See here: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=6f64a7192a1680181200bf577c285b9d
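
A sketch of that change, reusing the illustrative CharacterMaker from above rather than the actual playground code:

    use rand::{rngs::StdRng, Rng, SeedableRng};

    struct CharacterMaker<R: Rng> {
        rng: R, // plain field, no RefCell
    }

    impl<R: Rng> CharacterMaker<R> {
        // The uniqueness check now happens at compile time: callers need
        // a mutable reference to the CharacterMaker.
        fn make(&mut self) -> u8 {
            (0..3).map(|_| self.rng.gen_range(1..=6)).sum()
        }
    }

    fn main() {
        let mut maker = CharacterMaker { rng: StdRng::seed_from_u64(42) };
        println!("strength: {}", maker.make());
    }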

                                                                                                                        1. 2

                                                                                                                          Yup, I used RefCell here because I don’t think the changing internal state of the random number generator is relevant to the users of the CharacterMaker, so I preferred make to be callable without a mutable reference, but that’s an API design choice.

                                                                                                                  1. 16

                                                                                                                    I personally like the path that gnome is on. I get that not everyone does and that’s fine, but ever since gnome 3 I’ve really fallen in love with gnome.

                                                                                                                    1. 3

                                                                                                                      The problem is that their hold over GTK means that every other DE is forced to either follow suit in whatever Gnome comes up with, or invest in expensive workarounds (see CSD).

                                                                                                                      I encourage Gnome devs to do whatever they want, but as a non-Gnome-user it gets tiring to be at the receiving end of their “innovations” without having much choice. I just want to be left alone.

                                                                                                                      1. 5

                                                                                                                        It’s far from only GTK. Their hold permeates the stack and slowly but surely forces more and more of these things to fit a certain mold as their ‘vision’ (OSandroidX with glaucoma) can only really be fulfilled with strong coupling across a wide range of desktop-system services. What little competition is left will be forced to write adapters (eudev, elogind, …) until they architecturally become more or less the same but lagging behind, accept obscurity, or run out of steam entirely and join the retro-computing trend.

                                                                                                                      2. 2

                                                                                                                        I used to be very anti-gnome (“you’re turning my desktop into a phone?!”), but I’ve been using gnome for like three months because I wanted to actually give it a chance after using contrarian window managers for like a decade. I threw on dash to panel, made it very un-gnomey, etc. Gnome 40 came out and broke basically every extension, but I didn’t have time to transition to anything else at the time, so I was forced to just run with them turned off. With nothing more than the Yaru-remix shell/gtk/application theme running I started to really like how default gnome 40 looks and feels with just one theme and a few extra keyboard shortcuts turned on. Pretty sure even after all the recommended extensions start getting fixed I might stick with this.

                                                                                                                      1. 19

                                                                                                                        I love Alacritty, the only thing I’m still missing is ligatures support.

                                                                                                                        1. 1

There’s a fork that keeps up to date with upstream and adds ligature support. I seem to remember seeing an explanation several months ago for why it (or something like it) hadn’t been merged in, but I can’t find it right now. The only discussion I can find about it is a brief talk in upstream’s pinned ligature issue.

                                                                                                                          But it works for me with Iosevka. I have no idea how other fonts are working, but half of the reason I use alacritty is that this fork exists and is one of the only terminal emulators I could find (outside the ones bundled with DEs) that actually has ligature support.

                                                                                                                          1. 2

Thanks! I think this is the reason? (HarfBuzz only works on Linux/BSD; on other platforms Core Text and DirectWrite should be used.)

                                                                                                                        1. 1

                                                                                                                          How do you prevent self hosted mail from getting caught in a spam filter? I never quite understood how to prevent that.

                                                                                                                          1. 6

                                                                                                                            Easy, you don’t.

                                                                                                                            There’s a lot of stuff you can do to reduce the likelihood:

• set up DKIM
• set up SPF
• ensure DNS is set up (an MX record for your domain pointing to your mail server; an A/AAAA record for your mail server, e.g. if your mail server says its name is mail.example.org, an A record for mail.example.org pointing to the IP it sends out from; and, optimally, reverse DNS set to match; see the sketch after this list)
• make sure the domain you’re using is clean (not something you normally need to care about if you’re using a domain you’ve owned for ages that’s particularly unique, but you could run afoul of a blacklist if you’re buying an aftermarket domain or an available domain that’s changed hands in the recent past)
• make sure the IP your mail server uses is clean (generally not a huge issue either, but some providers are notorious for having whole IP ranges blacklisted)
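
To make the DNS part concrete, here’s a hypothetical zone-file sketch for example.org (all names, IPs and the DKIM selector are placeholders; the exact records depend on your setup):

    ; mail routing and the sending host's forward DNS
    example.org.                        IN MX  10 mail.example.org.
    mail.example.org.                   IN A   203.0.113.25
    ; SPF: only hosts listed in the MX records may send for this domain
    example.org.                        IN TXT "v=spf1 mx -all"
    ; DKIM public key, published under the selector your signer uses
    selector1._domainkey.example.org.   IN TXT "v=DKIM1; k=rsa; p=<public key>"
    ; reverse DNS, set via whoever owns the IP block
    25.113.0.203.in-addr.arpa.          IN PTR mail.example.org.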

                                                                                                                            But at the end of the day, Gmail, Outlook(/live.com/Hotmail/MSN), et al. are still gonna think you’re suspicious until they’ve started seeing users interact with you / flag messages you send as “not spam”.

                                                                                                                            In my personal experience Gmail is more lenient than Outlook at “first time sender” type stuff, but all the big players generally care just as much about your domain/email server’s “reputation” with them as they do about the technical correctness of your setup.

                                                                                                                            1. 7

                                                                                                                              I’m going to say that again, and again, and again… That’s an argument for hosting your own email, not against it. Every self-hosted email user giving up and switching to one of the oligopolists is a win for the oligopolists.

Dropping mail from self-hosted servers is a way for them to get more users (all while often cheerfully accepting spam from hijacked accounts on big services, including their own). Whether they are doing it intentionally or not doesn’t matter: they are aware of the problem, or could easily find out if they wanted to, but they are doing nothing to improve the filters to actually detect spam.

                                                                                                                              1. 5

Something that doesn’t get mentioned enough, to go along with the domain part of this: a lot of the new, hip, trendy TLDs are instant points against you in the configurations of SpamAssassin et al. that a lot of incoming mail servers use.

                                                                                                                                I messed around with Mail-In-A-Box for half an hour or so on a fresh .space domain one time a little under a year ago and took a peek at a SpamAssassin score test to see why mail to my own Gmail was bouncing (not just getting filtered to spam), and it turns out you can be half done for from the start if you don’t have a domain on a tried-and-true TLD. Even with DKIM etc all configured and clearing SpamAssassin properly, my score was only in range to hit the spam filter instead of bouncing, and it still bounced on Gmail, although that may have had to do with the now-repeated attempts to get through, or maybe some caching of my untrusted status from when DKIM wasn’t set up properly(?). In any case, between IPs, TLDs, and resold domains, it’s wild how easy it is to end up in a situation where there’s nothing you can do about how spam filters see you, even if you’ve never sent spam in your life.

                                                                                                                                1. 1

I had the same issue in the past when I owned arrrgh.pw (I know, it makes a pretty cool email!). Turns out .pw is simply blacklisted in most email filters, and I could never get a single mail delivered. Someone then told me it’s because these TLDs are cheap and heavily registered for spamming purposes, so big mailers simply spamlist them by default, just in case. The advice that came after that was to choose a domain that’s not cheap (~$30/year) and go with it. I did that, and I haven’t had a problem getting my mail delivered since.

                                                                                                                                2. 2

                                                                                                                                  make sure the IP your mail server uses is clean

That is why I send my mail through Mailgun for a small mailing list I host privately (i.e. the VM relays through them).

                                                                                                                                  1. 2

                                                                                                                                    In my personal experience Gmail is more lenient than Outlook at “first time sender” type stuff, but all the big players generally care just as much about your domain/email server’s “reputation” with them as they do about the technical correctness of your setup.

True. One thing that helps is to have a Gmail sender mail your own domain a few times; or, if you have an account there, set it up to forward every email it gets to your personal address. I guess you could do the same-ish with Outlook, etc. Still annoying.

I once worked for a mail-delivery shop, and both Microsoft and Google provide tools for “professional senders” to monitor their IP addresses’ reputations and ensure that the customers’ email blasts hit the inbox. You kind of have to be a pro player to be able to send “bacn” at will, but if you’re trying to share baby pictures with grandma, you’re out of luck.