Threads for ralish

  1. 4

    After discussing with the Steering Council, we are considering delaying the final release until December to allow for two more beta releases.

    That’s disappointing. They couldn’t even keep up the yearly cadence for more than a single cycle. I wonder if they tried to tackle too much.

    1. 15

      I’d be far more disappointed if they released a low-quality release to meet an arbitrary release cadence. Agree it’s a shame, but it’s the right call.

      1. 11

        Afaik this was partly just because 3.11 had a bunch of very fundamental changes that ended up resulting in more drastic bugs than expected.

        1. 3

Don’t other projects that follow a release cadence only promote big or numerous changes once they collectively reach a certain stability?

      1. 3

        This is both a marvelous write-up and a reminder as to why I am near exclusively a console gamer these days.

        1. 4

It’s funny that game consoles become more and more like PCs with a single fixed configuration. Almost like Apple hardware.

          1. -1

Not really, it’s the fixed configuration (and appliance-like customer lockout) that has the value, not having a weird low-numbers CPU.

            1. 5

it’s the fixed configuration (and appliance-like customer lockout) that has the value

I mean, you literally just described game consoles and Apple hardware, so I’m not sure what your point is?

              1. 4

                I think his point is that, probably until about 10 years ago, consoles shipped differentiating hardware. You could do things on an 8-bit Nintendo that you couldn’t do on a commodity general-purpose computer at the time, even though the PC was more expensive. When 3D acceleration started to be the norm (around the PS1 / N64 era), consoles had very exciting accelerator designs to get the best possible performance within a price envelope. Most PCs didn’t have 3D accelerators at all and when they did they were either slower than the ones in consoles or a lot more expensive.

                Over time, the economies of scale for CPUs and GPUs have meant that a mostly commodity CPU and GPU are faster than anything custom that you could build for the same price. Consoles typically have custom SoCs still (which makes economic sense because they’re selling a large number of exactly the same chip), but most of the IP cores on them are off-the-shelf components. They even run commodity operating systems (Windows on Xbox, FreeBSD on PS4), though tuned somewhat to the particular use case.

                It’s unlikely that a future console will have much custom hardware unless it is doing something very new and exciting. HoloLens, for (a non-console) example, has some custom chips because off-the-shelf commodity hardware for AR doesn’t really exist and so a console wanting to do AR might get custom chips.

                Even in the classic Nintendo era, the value of consoles to developers was twofold:

                • They had hardware optimised for games.
                • Every single instance of the console had exactly the same hardware and so the testing margin was small.

                The first is now far less important than the second. This is somewhat true for the Apple hardware but the scales are different. The Xbox One, for example, came out in 2013. The Xbox One S was almost identical hardware, just cheaper. The Xbox One X wasn’t released until 2017 and was faster but any game written for the older hardware would run fine on it, so if you weren’t releasing a AAA game then you could just test on the cheaper ones. The Xbox Series X/S were released 7 years later. If it has a similar lifetime, that’s four devices to test on for 14 years. Apple generally releases at least 2-3 models of 4-5 different product lines every year.

                1. 1

                  How is it funny? It’s sensible and predictable, and IMO kind of a bummer.

          1. 25

I have watched the videos behind this text and I’m a bit frustrated. Most of the problems they have are either hardware problems, or problems that arise because they expect things to work like they do on Windows (or believe they work that way on Windows).

For the hardware, they somehow acknowledge that this is more a problem of the vendors than of Linux. Still, most of the time it sounds like Linux is bad because their super fancy hardware doesn’t work. Yes, I know the problems behind this are complex, and as a normal user this is frustrating.

And of course they expect Windows-like behavior; they have used Windows for years. What bugs me is that they claim the Windows way is the better way without understanding what the problem is. There are two examples of this:

First, Linus broke his Pop!_OS installation while trying to install Steam. This was because the Steam package had a dependency problem which could only be resolved by removing essential packages. The GUI told him there was an error, with some suggestions as to what might be causing the problem, and output from apt hidden behind a details button. He read out loud: “Warning: you are trying to remove the following essential packages”. So he googled and found the command line to install Steam. The command prompted him with a lot of text, ending with the following two lines:

            You are about to do something potentially harmful

            To continue type in the phrase ‘Yes, do as I say!’

So he typed in “Yes, do as I say!” and his installation was broken. He claimed later: “the things that I did are not entirely ridiculous or unreasonable”. He ignored all the warnings and dictated to the computer “Yes, do as I say!”; how is this not a clear user error[0]?

So let’s look at what would have happened with a similar issue under Windows. First, we wouldn’t get a truly similar issue, because under Windows there is no package manager accessible to third-party software. So let’s assume there is a Windows update which removes the wrong file and breaks your system: on install, the update would remove the wrong file and break your system. Another example: the Steam installer has a bug which removes some necessary files from your Windows installation. Is there anything in Windows that protects you from this bug[1]?

It’s late; the other stuff, about the file extension issue, I might write up tomorrow.

[0] Of course this also has something to do with the habit of some developers of creating popups/warnings/info messages for everything, which leads users to ignore these messages.

[1] I don’t know, but a few years ago Windows installers were just executables which required running as administrator.

            1. 32

              And of course they expect a Windows like behavior, they have used Windows for years

I think the “Windows-like behaviour” in this case is that on Windows Steam works perfectly: you don’t have to think about installing it, there’s no chance it’s going to break your OS, nor will you have to choose between installing an application you want and having a working OS.

              We could imagine a hypothetical Steam bug that somehow wrecks Windows installations, but in reality those don’t exist.

              1. 4

I think those kinds of comparisons don’t work very well, because of the range of options. For the Steam installation issue, on Windows you basically have two options: you install it and it works, or it doesn’t. On Linux you have the same two options, plus playing around with various tweaks and installation methods.

If we were going with a typical Windows-user approach, he’d have declared it a failure after Steam failed to install from the default source. Going further with other solutions is both a good thing, because it’s possible, and a bad thing, because newbies get into situations like a broken desktop. So once you start going past the basics it’s really on the user to understand what they’re doing. Otherwise it’s comparable to “I messed with some Windows DLLs / registry trying to get Steam to work, despite warnings, and now it doesn’t boot” - but that’s just not something average users do.

                1. 14

                  on windows you basically have two options: you install it and it works or it doesn’t

                  On Windows you install Steam and it works. Installing Steam and it not working isn’t really an experience people have with Steam on Windows.

                  In Linux you have the same two options + playing around with various tweaks and installation methods.

                  I guess? But the Linux (Pop!_OS?) equivalent of “I messed with some windows DLLs / registry trying to get Steam to work, despite warnings and now it doesn’t boot” is [0] kind of the only experience that was available? It seems like there was no way to install it and have it work, or even install it and have it just not work. The only way to install it broke the OS?

                  [0] Disclaimer: I didn’t watch the videos, so I’m going off my understanding of the comment I originally replied to

                  1. 12

                    Installing Steam and it not working isn’t really an experience people have with Steam on Windows.

                    Not just that but you actually do have a lot of tweaks to play around with. They’re not common knowledge because it’s incredibly rare to need it in order to get something like Steam working. You don’t really need them unless you’re developing software for Windows.

I had this “it’s a black box” impression for a long time, but 10+ years ago I worked in a Windows-only shop that did a lot of malware analysis and the like. It’s quite foreign, since it comes from a different heritage, but the array of “power tools” you have on Windows is comparable to that of Linux. The fact that typical users don’t need them as frequently is a good thing, not an evil conspiracy of closed-source vendors to make sure you don’t have control over your hardware.

                    1. 3

                      Installing Steam and it not working isn’t really an experience people have with Steam on Windows.

That’s a bit hard to quantify, but sure they do. Just search for “Steam won’t start” or “Steam installer fails” on Reddit or their forums. It’s also common enough for many SEO-spam sites to have listicles for those phrases that are actually Steam-specific.

And my point was that this wasn’t the only experience available. The alternative was not to type “yes I’m sure I know what I’m doing” (or whatever the phrase was) when he did not. He went out of his way to break the system after the GUI installer refused to do it. I think you really should watch that fragment for the discussion context.

                  2. 3

                    there’s no chance it’s going to break your OS

                    Presumably because it’s the primary platform they test for.

                    1. 1

                      there’s no chance it’s going to break your OS

Of course with a simple installer (copy all files to a directory and add an entry to the Windows registry) it’s quite hard to have a bug which breaks your OS. But a simple installer doesn’t have the features of a package management system, e.g. a central update mechanism. I don’t want to say package managers are better than the installer approach used on Windows[0]. The problem I have with this case is that it’s not as if he clicked some random button and then everything was broken. He read the error, ignored all the warnings, typed the prompt character by character, and then wondered why it went wrong.

I don’t say the UI[1] is perfect. The problem I have is this “I ignore all warnings and complain if it goes wrong” mentality[2]. apt is not a program which bugs you with unnecessary questions or warnings. Installing a package only asks for confirmation if it does more than just install the requested package. The annoying confirmation question appears only if you try to remove essential packages, and is designed to create enough hassle to make the user question the command.

[0] I think systems with a package manager are better, but that is not the point of this comment

                      [1] The error message in the GUI and the handling in the command line

[2] Yes, some (or most) users don’t want to understand error messages, but shouldn’t they then stop at the error and look for (professional) help? And no, copy-pasting a command from a random blog post is not help if you don’t understand the error or the blog post.
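The type-the-exact-phrase safeguard being discussed here can be sketched as a general pattern. This is a hypothetical illustration of the idea, not apt’s actual code; the function name and messages are made up:

```python
# Sketch of a "type the exact phrase" confirmation gate, similar in
# spirit to apt's "Yes, do as I say!" prompt. Hypothetical code, not
# apt's actual implementation.

CONFIRM_PHRASE = "Yes, do as I say!"

def confirm_dangerous_action(essential_packages, read_input=input):
    """Refuse to proceed unless the user types the warning phrase verbatim.

    Requiring an exact phrase (rather than a y/n prompt) forces the user
    to at least look at the warning instead of reflexively confirming.
    """
    print("WARNING: You are about to remove the following essential packages:")
    for pkg in essential_packages:
        print(f"  {pkg}")
    print("This should NOT be done unless you know exactly what you are doing!")
    answer = read_input(f"To continue type in the phrase '{CONFIRM_PHRASE}': ")
    return answer == CONFIRM_PHRASE  # anything else aborts the action
```

The deliberate friction is the point: a plain y/n prompt gets absorbed into muscle memory, while an exact-match phrase cannot be confirmed without reading it.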

                    2. 21

                      The entire point of Linus’ challenge is that desktop Linux is full of barriers and traps for new users who don’t (yet) know what they’re doing.

                      Explaining “well, it’s like that because you told it to florb the waggis instead of confeling rolizins, so it’s all your fault” may very well be technically correct, but it doesn’t change the fact that the OS hasn’t worked well for the user. “I want to install Steam in 5 minutes without learning about package sudoku solvers, or bricking my computer” is an entirely reasonable use-case.

The web dev community had a reckoning with this, and thinking has changed from “users are too stupid to understand my precious site” to “all my new users know only other sites, so I must meet their expectations”. If Linux wants to get new users, it needs to be prepared for users who know only Windows, macOS, or even just Android/iOS.

                      1. 4

                        Explaining “well, it’s like that because you told it to florb the waggis instead of confeling rolizins, so it’s all your fault” may very well be technically correct, but it doesn’t change the fact that the OS hasn’t worked well for the user. “I want to install Steam in 5 minutes without learning about package sudoku solvers, or bricking my computer” is an entirely reasonable use-case.

                        That’s well and good, but there is a perfectly good fast path for this; install Pop!_OS or Ubuntu on a day where there’s not a bug in the packaging system, which is the vast majority of all days. Yep, it sucks that there was a bug, but that’s simply not going to affect anyone going forward - so why are LTT giving advice based on it?

                        1. 11

                          For every distro D there exists a problem P that is solved in a distro E.

                          That endless cycle of “then abandon your whole OS and install a completely new one” thing is another annoying problem “Linux desktop” has. It’s not any single distro’s fault, but it’s a pain that users need to deal with.

                          In my case: I want to use Elementary, but I hosed it trying to update Nvidia drivers. So I was told to switch to Pop!_OS — they do it right. But this one gets stuck seemingly forever when trying to partition my disk, presumably because of the combination of NVMe and SATA that I have. Manjaro worked with my disks, but I’ve run into bugs in its window manager, which wouldn’t be an issue in Elementary. I still use macOS.

                          1. 6

                            For every distro D there exists a problem P that is solved in a distro E.

                            Right, I agree that in general this is a problem; we need better ways to integrate the best ideas from multiple projects.

                            But for the problem stated, which was “I want to install Steam in 5 minutes without learning about package sudoku solvers, or bricking my computer”, Pop!_OS or Ubuntu are the way to go. Your problem is not that; it’s “I want Pantheon and a fast Nvidia card,” and Nvidia have intentionally made that harder than it needs to be.

                            To be totally clear, I’m under no illusions that every user can simply pick up a free desktop and be on their way, but I think it’s pretty unhelpful to cultivate a discourse which simultaneously says “Users should have a fast path for these common use cases” and “Users should be able to get whichever window manager, packaging system, and customizations they want.” Those are both valuable goals, but the former inherently precludes the latter, especially in a world where some hardware companies, like Nvidia, are actively hostile to free desktop projects.

                        2. 3

I switched from Windows to Mint a couple of years back for gaming, in a similar experiment to this one (only not so public). I had no issues at all: Steam was in the official applications and installed with one click. Every game that Steam claimed worked on Linux did work. There were issues with my non-standard multi-monitor setup (there were issues with this on Windows too, but they were worse on Linux*), but nothing that prevented playing the games. It was only once I enabled the Steam beta program, which sets Steam to attempt to open all games in Wine, that I had to get down in the weeds with configuring stuff, and some things didn’t work. Steam has pretty clear warnings about this when you turn it on though.

I feel like, for a tech tips site, those guys are pretty non-technical. I never really watched their stuff anyway, but now it seems like they should be calling me for help (and I am pretty noob when it comes to Linux). This is the biggest criticism for me of this whole experiment. If these guys are an authority on computer tech informing users, they should simply be better at what they do. It is almost like they are running an investment advice channel and going “oh no, I lost all my money investing in a random startup; guys, don’t do the stock market, it’s broken”. They should be informing people interested in Linux what to do and what not to do, and if they are not qualified to do that they should state that and recommend alternative sources of advice.

*I have a suspicion most of these issues were on the application level, not the OS level. Games were probably getting the monitor list from the wrong place. Ironically, once I set my monitors up in the way that the developers on both Windows and Linux were expecting me to, the problems on Linux disappeared, but a few small issues persisted on Windows.

                          1. 2

                            As a waggis / rolizins engineer, maybe I’m out of touch, but I don’t think “Doing this will cause everything to break. If you want everything to break, then type ‘Yes, please cause my computer to break!’” is quite as obscure a message as anything about florbing and confeling. This required not only a (very rare) bug in the dependency tree but also either a user that deliberately wanted to break his Linux install for YouTube content, or one that is the very embodiment of Dunning-Kruger.

                            Not only did the dependency tree break, but the package manager was smart enough to recognize that the dependency tree had broken, and stopped him from doing it and told him so. He then went out of his way and used another package management tool to override this and allow him to break his installation anyway. This tool then was also smart enough to recognize the dependency tree was broken, and again warned him what was about to happen. He read this message and copied the text from this warning into a confirmation prompt.

                            He could just as easily have typed sudo rm -rf /usr. He could just as easily have deleted system32 on Windows.

                            The only possible solution that could have prevented him from doing this would be to not tell him his own sudo password and to give him a babysitter to do everything requiring privilege escalation for him so he doesn’t hurt himself, but that solution has logistical issues when you try to scale it up to every desktop Linux user.

                            1. 3

                              You need to have more empathy for the user.

                              The prompt wasn’t “destroy my system”, it was “do as I say”, and user said to install Steam.

                              No other operating system is stupid enough to delete itself when you tell it to add a new good application from a reputable publisher. Crappy Windows installers could damage the OS, but Steam doesn’t.

                              It’s normal for OSes to sound dramatic and ask for extra confirmation when installing software from unusual sources, so the alarming prompt could easily be interpreted as Linux also warning about dangers of “sideloading” a package, which can be dismissed as “I’m not installing malware, just Steam, so it’s fine”.

From the user’s perspective, the screen contained “Install Steam, wall of technogibberish the user didn’t ask for nor care about, type ‘Yes, do as I say!’”. The system frequently requires typing weird commands, so requiring one more weird command wasn’t out of the ordinary.

                              The only possible solution… [condescending user blaming]

                              The real solution would be for Linux to work properly in the first place, and actually install Steam instead of making excuses. Linux is just an awful target for 3rd party applications, and even the other Linus knows this.

                              1. 2

No other operating system is willing to give the user the ability to break the desktop environment intentionally (though I recall a lot of Windows bugs in the past that did this unintentionally). One of the fundamental problems Linux faces is that most users don’t actually want as much power as running as root gives you. They’ll say they do, but they really don’t, and their operating system choice generally reflects that.

                                It’s normal for OSes to sound dramatic and ask for extra confirmation when installing software from unusual sources

                                This is pretty obviously because it’s axiomatically impossible for the OS to actually tell if something the user does with sufficient privileges will break something (inter alia, you’d have to be able to solve the Halting Problem to do this). In this case, the package manager was obviously correct, which should be applauded. There are two obvious responses to this (maybe there are non-obvious ones I’m missing as well): restrict the user’s ability to do things to actions with a low likelihood of breaking the OS or trust the user to make a decision and accept the consequences after a warning.

Broadly, Windows, macOS, and the mobile operating systems have been moving towards restricting the user’s ability to do risky things (which also includes a lot of things proficient system operators want to do). That seems to be in response to consumer demand, but I don’t think that we should enshrine the desires of the bottom 60% of users (in terms of system operation competence) as the standard to which all systems should be designed. This is not related to an “it should just work” attitude towards 3rd-party software, as there’s generally been a significant decrease in things like OS API stability over the past two decades (e.g. this rant of Spolsky’s). Users just think that anything they want to use should “just work” while anything they don’t care about should be marginally supported to minimize cost: the problem is that many people want different things to work.

                                On the other hand, some users don’t want the operating system reasoning for them (at least some of the time). I don’t want an operating system “smart” enough to prevent me from doing something stupid on my project boxes or in a VM I’m playing with especially if it’s just something that looks stupid to the OS but I’m doing for some (moderately good) reason.

                                1. 4

                                  You’re boxing this into a dichotomy of restricting user or not, but this isn’t the issue here.

                                  The issue is not about power, but about usability. You don’t need to block all failure paths. You need to communicate clearly and steer users towards success paths, so they never even come close to the dangerous commands by accident.

                                  1. 2

                                    I wouldn’t say this is really about power, so much as control though I tend to be a bit of a pedant about defining “power”.

                                    I think the communication here was reasonably good, though it could be improved. I think the real mistake Linus made was in choice of distribution. That is a real problem in the Linux community (and I think the one we should be focused upon here). I think the opportunity to improve communication here is marginal at best.

                                2. 2

                                  You need to have more empathy for the user.

                                  I do. I’m just saying that there’s nothing anyone could have done to prevent this except disallow even the root user from uninstalling xorg, and even then he could have just manually removed crucial files if he felt like it. OS maintainers are going to make mistakes occasionally. “Just don’t make mistakes ever” isn’t a viable strategy for avoiding things like this. What is a viable strategy is to build tools that detect and correct for errors like the one in Pop!_OS’s dependency tree. And that’s exactly what happened. He just disregarded the numerous protections from this bug that his OS afforded him.

                                  “From the user perspective,” the screen contained a list of packages that were about to be installed, a list of packages that were about to be uninstalled, and a message saying that the packages that were about to be uninstalled were essential to the operation of his computer and he should stop now rather than electing to uninstall those packages, along with a prompt that very deliberately requires you to have read that warning in order to proceed.

                                  The real solution would be for Linux to work properly in the first place, and actually install Steam instead of making excuses.

                                  Linux worked properly, apt even worked properly. Pop!_OS’s dependency tree was briefly broken. The package manager then recognized there was something wrong and explicitly told him he was about to uninstall his desktop and that he shouldn’t do it. It wasn’t “destroy my system.” That was me being (generously) 5% hyperbolic. In reality it was a warning that he was about to uninstall several essential packages including his desktop and a recommendation that he shouldn’t do this unless that was what he wanted to do. He was then required to enter a very specific message which was part of that warning, verbatim.

Here’s the thing: no operating system has avoided pushing out a bad or bugged update periodically. What’s great about Linus’s example is that Pop!_OS pushed out a bad update, but the error was limited to one package, and the package manager was smart enough to stop Linus from breaking his system, and told him that it had stopped him from breaking his system. Linus then decided to use another tool that would allow him to break his system. This tool too was smart enough to notice that the package system had broken, and prevented him from breaking his system. He then deliberately bypassed these safeties and uninstalled gdm and xorg.

What’s crucial to note here is that exactly nobody is making excuses for Pop!_OS — they messed up their dependency tree, yes — but also, this is a perfect example of all of these systems working exactly as intended. The package manager was smart enough to stop him from breaking his system even though the dependency tree was mangled, and he then overrode that and chose to break his system anyway. That’s more than can be said for many other operating systems. The tools he was using detected the error on Pop!_OS’s side and saved him.

It’s also worth noting that he literally didn’t brick his system; he could have fixed his machine by simply installing, from the command line, the same packages he had just uninstalled. Like, he didn’t actually break his system, he just uninstalled a few packages that were flagged as essential to stop newbies from uninstalling them, because it might confuse them if those were removed.

                                  1. 2

                                    Your assertion that nothing could be done is provably incorrect. Alpine doesn’t have this problem — by design — and it isn’t any less capable than Debian family. It’s a matter of design of tools’ UI, and this part of apt is a poor design.

                                    People don’t accidentally uninstall their OS when installing Steam on other OSes, because everywhere else “install a new user program” and “catastrophically alter the whole system” are separate commands.

                                    Users generally don’t read walls of text. In usability circles this is accepted, and UI designers account for that, instead of wishing they had better users. Users aren’t illiterate, they just value their time, and don’t spend it on reading things that seem to have low value or relevance. The low signal-to-noise ratio of apt’s message and surprising behavior is apt’s problem, not user’s reading problem. And “this is just the way the tool works” is not a justification for the design.

                              2. 1

                                The entire point of Linus’ challenge is that desktop Linux is full of barriers and traps for new users who don’t (yet) know what they’re doing.

I understand this. The problem I have with some of the complaints is that they proclaim one way or the other to be clearly better during the challenge. This is a bit more obvious in the file extension example[0]. I completely understand that the experience is frustrating. But “it is frustrating for me because the system doesn’t behave like I expect from my knowledge of another system” doesn’t mean the system is bad.

Yes, systems try to adopt behavior from other systems to make it easier for users to adapt. But this has its downside, because you can’t change a bad design after the users get used to it. In this example, users got used to ignoring errors and warnings and just “clicking OK”.

                                Explaining “well, it’s like that because you told it to florb the waggis instead of confeling rolizins, so it’s all your fault” may very well be technically correct, but it doesn’t change the fact that the OS hasn’t worked well for the user

I don’t want to imply they are dumb or just don’t want to learn the system. It is frustrating if a system doesn’t work the way you expect. I would like to see a discussion after the challenges with an expert explaining why the UI behaves differently.

[0] Which I won’t write up today; it’s late again

                                1. 5

                                  When doing usability evaluations, it’s normal to discount specific solutions offered by frustrated users, but never the problems they face.

                                  There were a few problems here:

                                  • Lack of hardware support. Sadly, that’s a broad problem, and difficult to fix.

                                  • User needed to download and run a script from GitHub. I think distros could improve here. For a random developer who wrote a script, it’s difficult to distribute the code in a better way. There’s a very high bar for getting something to be an official package, and hardly any other viable alternative. There are several different packaging formats (some tedious to build, some are controversial), a few unofficial package repositories, a few “app stores” for some distros. All this fragmentation is a lot of work and maintenance headache. It makes just dumping a script on GitHub very easy and attractive in comparison. It may not be a failing of any single person or distro, but it is a failing of “Linux desktop” in aggregate.

                                  • Browser and file manager did a crappy job by treating HTML with .sh extension as if it was a totally normal thing a user may want to do. The fight about file extensions has been lost in the ‘90s. I’ve been there, tweaking detection by magic bytes in my Directory Opus on AmigaOS, and fiddling with creator codes when copying stuff from classic MacOS. The reality is that file extensions exist, and are meaningful. No normal person stores web pages as “.sh” files.
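                                  Content sniffing of the kind described above is simple to sketch. A minimal illustration follows; the heuristics here are made up for the example and are not what any real browser or file manager ships:

```python
def sniff(data):
    """Guess a file's type from its leading bytes rather than its extension."""
    head = data.lstrip()[:64].lower()
    if head.startswith(b"<!doctype html") or head.startswith(b"<html"):
        return "html"
    if head.startswith(b"#!"):  # shebang line: a script of some kind
        return "script"
    return "unknown"

# An HTML error page saved under a .sh extension is still detectably HTML,
# so a file manager could warn before offering to execute it:
print(sniff(b"<!DOCTYPE html><html><body>404</body></html>"))  # html
print(sniff(b"#!/bin/sh\necho hello"))                         # script
```

                                  Real magic-byte databases (e.g. the one behind file(1)) are far more elaborate, but the point stands: the content and the extension can disagree, and tooling can notice when they do.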

                              3. 20

                                At the risk of being that condescending Linux user (which would be pretty awful since I’m not really using Linux anymore) my main takeaway from these videos is “don’t use hipster distros”.

                                Or, okay, hipster distros is where innovation happens. I get it, Gentoo was a hipster distro when I started using it, too. Okay, maybe don’t recommend hipster distros to beginners?

                                I saw Manjaro mentioned here. I tried Manjaro. It’s not a beginners’ distro. It’s great if you’re a burned out Arch user and you like Arch but you already know the instructions for setting up a display manager by heart and if you have to do it manually again you’re going to go insane. There’s a (small!) group of people who want that, I get it. But why anyone would recommend what is effectively Arch and a rat’s nest of bash scripts held together with duct tape to people who wouldn’t know where to begin debugging a broken Arch installation is beyond me. I mean the installer is so buggy that half the time what it leaves you with is basically a broken Arch installation for heaven’s sake! Its main value proposition is in a bunch of pre-installed software, all of which can be trivially installed on Ubuntu.

                                I haven’t used Pop!_OS but IMHO a distribution that can’t get Steam right, Steam being one of the most popular Linux packages, is just not a good distribution. It’s particularly unsettling when it’s a distro that’s supposed to have some level of commercial backing, and Steam is one of the most popular packages, so presumably one of the packages that ought to get the most testing. Hell even Debian has instructions that you can just copy-paste off their wiki without breaking anything. And the only reason why they’re “instructions”, not just apt install steam, is that – given their audience – the installation isn’t multilib by default.

                                There’s certainly a possibility that the problem here was in the proverbial space between the computer and the chair, sure. But if that’s the case, again, maybe it’s just time we acknowledged that the way to get “better UX” (whatever that is this year) for Linux is not to ship Gnome with the umpteenth theme that looks like all the other themes save for the colors and a few additional extensions. It’s safe to say that every combination of Gnome extensions has already been tried and that’s not where the magic usability dust is at. Until we figure it out, can we just go back to recommending Ubuntu, so that people get the same bad (I suppose?) UX, just on a distribution with more exposure (and, thus, testing) and support channels?

                                Also, it’s a little unsettling that the Linux community’s approach to usability hasn’t changed since the days of Mandrake, and is still stuck in the mentality of ESR’s ridiculous Aunt Tilly essay. Everyone raves about consistency and looking professional. Meanwhile, the most popular computer OS on the planet ships two control panels and looks like anime, and dragging things to the trash bin in the second most popular OS on the planet (which has also been looking like anime for a few years now) either deletes them or ejects them, which doesn’t seem to deter anyone from using them. Over here in FOSS land, the UI has been sanitized for consistency and distraction-free visuals to the point where it looks like a frickin’ anime hospital, yet installing Steam (whether through the terminal or the interface it makes no difference – you can click “Yes” just as easily as you can type “Yes”) breaks the system. Well, yeah, this is what you get if you treat usability in terms of “how it looks” and preconceived notions about “how it’s used”, rather than real-life data on how it’s used. It’s not an irredeemable state of affairs, but it will stay unredeemed as long as all the debate is going to be strictly in terms of professional-looking/consistent/beautiful/minimal/distraction-free interfaces and the Unix philosophy.

                                1. 14

                                  The issue about Linux distro here is that they didn’t know the differences between them, why that matters, and that Linux isn’t one thing. Without a knowledgeable person to ask what to use, this is how they ended up with these different flavours. They also didn’t know about desktop environments, or how much influence they have over their Linux experience.

                                  It’s unfortunately a hard lens for many technical people to wrap their head around. Heck, we are starting to see people that don’t need to interact with hierarchical file systems anymore. Something natural to everyone here, but becoming a foreign concept to others.

                                  1. 6

                                    Certainly. My response was mostly in the context of an underlying stream of “Ubuntu hate” that’s pretty prevalent in the circles of the Linux community that also have a lot of advice to give about what the best results for “best Linux distro for gaming” should be. I know I’m going to be obtuse again but if the l33t h4x0rz in the Linux community could just get over themselves and default to Ubuntu whenever someone says “I’ve never touched Linux before, how can I try it?” a lot of these problems, and several distributions that are basically just Ubuntu with a few preinstalled programs and a custom theme, would be gone.

                                    There’s obviously a huge group of people who don’t know and are not interested in knowing what a distribution is, what their desktop environment is, and so on. As the Cheshire Cat would put it, then it doesn’t really matter which one they use, either, so they might as well use the one most people use, since (presumably) their bugs will be the shallowest.

                                    I know this releases all sorts of krakens (BUT MINT WORKS BETTER OUT OF THE BOX AND HAS A VERY CONSISTENT INTERFACE!!!1!!) but the competition is a system whose out-of-the-box experience includes Candy Crush, forced updates, a highly comprehensive range of pre-installed productivity apps of like ten titles, featuring such amazing tools like Paint 3D and a Calculator that made the Win32 calculator one of the most downloaded programs in history, two control panels and a dark theme that features white titlebars. I’m pretty sure any distribution that doesn’t throw you to a command prompt on first boot can top that.

                                    1. 1

                                      Oh, I totally agree, I was just clarifying that they did some googling to try and find something to use, and it’s how they ended up with this mess of difficulties.

                                    2. 2

                                      I think you cut to the heart of the matter here. I also think the question they asked initially (what’s the “best” gaming Linux distro) wasn’t well formed for what they actually wanted: what the easiest to configure was. To forestall the “that’s a Linux problem” crowd, that’s an Internet problem, not a Linux problem. If you Google (or ddg or whatever) the wrong question, you’re going to get the wrong answer.

                                      I think we have to resign ourselves to the fact that users generally don’t want to learn how to operate their systems and don’t want meaningful choices. Therefore, many users are not good candidates for a *nix.

                                    3. 2

                                      Until we figure it out, can we just go back to recommending Ubuntu, so that people get the same bad (I suppose?) UX, just on a distribution with more exposure (and, thus, testing) and support channels?

                                      I wish Ubuntu offered an easier flow for getting a distribution with the right drivers out of the gate. This is what Pop_OS! does (source):

                                      Pop!_OS comes in two versions: Intel/AMD and NVIDIA. This allows us to include different settings and the proprietary NVIDIA driver for NVIDIA systems, ensuring the best performance and use of CUDA tools one command away. On Oryx Pro systems, you can even switch between Intel and Nvidia graphics using a toggle in the top right corner of your screen.

                                      IMO this is superior to Ubuntu, where you need to follow complex instructions to get the NVIDIA proprietary drivers: https://help.ubuntu.com/community/BinaryDriverHowto/Nvidia

                                      And you need to follow different instructions for AMD graphics.

                                      Also if you buy a System76 laptop all the drivers for your computer come set up, no driver manager needed. With Ubuntu you can buy from Dell but not with the same variety of hardware as System76.

                                      I agree that Ubuntu is a good option but I would like to see it improve in these aspects before I would recommend it to a random non-power user who wants to play video games.

                                      1. 2

                                        I haven’t used Ubuntu in a while, and that page doesn’t help because the instructions look like they haven’t been updated since Ubuntu 12.04, but the way I remember it all you needed to do was go to “Additional Drivers” or whatever it was called, choose the proprietary driver, hit OK and reboot. Has that changed in the meantime? Last time I used a machine with an NVidia card I was running Arch and it was literally just pacman -S nvidia, please tell me Ubuntu didn’t make it more complicated than that!

                                        Also… is the overlap between “people who write CUDA code” and “people who can’t install the proprietary NVidia drivers” really that big? Or is this aimed at people using third-party CUDA applications, who know statistics but suck at computers in general (in which case I get the problem, I’ve taught a lot of engineers of the non-computer kind about Linux and… yeah).

                                        Also if you buy a System76 laptop all the drivers for your computer come set up, no driver manager needed.

                                        If you choose the “Ubuntu LTS” option when ordering, doesn’t it come with the right drivers preloaded? I mean… I get that Pop!_OS is their thing, but shipping a pre-installed but unconfigured OS is not exactly the kind of service I’d expect in that price range.

                                        1. 2

                                          For a novice user, do you expect them to know before they download the OS whether they have an nVidia or AMD GPU?

                                          I seem to recall that a big part of the motivation for the complex install process for the nVidia drivers was the GPL. The nVidia drivers contain a shim layer that is a derived work of the kernel (it hooks directly into kernel interfaces) and so must be released under a GPL-compatible license, and the proprietary driver itself, which is first developed on Windows and so is definitely not a derived work of the kernel and can be under any license. The proprietary drivers do not meet the conditions of the GPL and so you cannot distribute the kernel if you bundle it with the drivers. The GPL is not an EULA and so it’s completely fine to download the drivers and link them with your kernel yourself. The GPL explicitly does not restrict use and so this is fine. But the result is something that you cannot redistribute.

                                          FreeBSD and Solaris distributions do not have this problem and so can ship the nVidia drivers if they wish (PC-BSD and Nexenta both did). I wonder how Pop!_OS gets around this. Is it by being small and hoping no one sues them?

                                        2. 1

                                          From what I can tell, steam isn’t even open source. And while you assert it to be one of the most popular Linux packages, I hadn’t even heard of it until this video came up in all the non-gaming tech news sites despite having used Linux for 25+ years. Was it even a Pop!OS package or were they installing an Ubuntu package on an Ubuntu derivative and assuming it’d just work?

                                          1. 8

                                            it’s proprietary, yeah, but i just feel like someone has to tell you that there are several orders of magnitude more Steam users than Linux desktop users, and it’s not only a package in Pop!_OS and Ubuntu, it’s a package in Debian and just about every distro for the last decade.

                                            i honestly have gotta applaud you for being productive enough a person to have never heard of Steam. if you look at the install data from popularity-contest, ignoring firmware and libraries (i.e. only looking at user-facing applications), Steam is the third most-installed non-free package on all Debian-based distros, behind unrar and rar. pkgstats.archlinux.de suggests Steam is installed on 36% of Arch Linux installations. Steam is not only an official package on Pop!_OS but one of the most installed packages on desktop Linux overall.

                                            1. 5

                                              And while you assert it to be one of the most popular Linux packages, I hadn’t even heard of it until this video came up in all the non-gaming tech news sites despite having used Linux for 25+ years

                                              Someone else already pointed out how popular it is but just for the record, any one of us is bound to not have heard about most of the things currently in existence, but that does not make them pop out of existence. Whether you’ve heard of it or not affects its popularity by exactly one person.

                                              Also, lots of useful applications that people want aren’t even open source, and a big selling point of Pop!_OS is that it takes less fiddling to get those working (e.g. NVidia’s proprietary drivers). An exercise similar to this one carried out with, say, Dragora Linux, would’ve probably been a lot shorter.

                                              Was it even a Pop!OS package or were they installing an Ubuntu package on an Ubuntu derivative and assuming it’d just work?

                                              Most of Pop!_OS is Ubuntu packages on an Ubuntu derivative. Does it matter what repo it came from as long as apt was okay installing it?

                                              Edit: to make the second point clear, Pop!_OS is basically Ubuntu with a few custom Gnome packages and a few other repackaged applications, most of the base system, and most of the user-facing applications, are otherwise identical to the Ubuntu packages (they’re probably rebuilt from deb-srcs). No idea if what they tried to install was one of the packages System76 actually repackages, or basically the same as in Ubuntu, but it came from their “official” channel. I.e. they didn’t grab the Ubuntu package off the Internet, dpkg -i it and proceed to wonder why it doesn’t work, they just did apt-get install steam, so yes, it’s a Pop!_OS package.

                                          2. 10

                                            I mean, I have Big Opinions® on the subject, but my tl;dr is that Linux isn’t Windows, we shouldn’t give false expectations, have our own identity, etc. etc. But….

                                            So he typed in “Yes, do as I say!” and his installation was broken. He claimed later: “the things that I did are not entirely ridiculous or unreasonable”. He ignored all warnings and dictated to the computer: “Yes, do as I say!”. How is this not a clear user error[0]?

                                            I mean, the system should refuse to do that. Alpine’s package manager and others refuse to allow the system to enter a boned state. One of the Alpine developers rightly criticized Debian for this issue in apt, citing it as one of the reasons why they stopped using Debian. The attention Linus brought to the problem, in an embarrassing light, was the push finally needed to fix it.

                                            1. 5

                                              Knowing how Internet guides work, now all guides will say “apt --allow-solver-remove-essential <do dangerous stuff>” instead of “Type Yes, do as I say! at the prompt”.

                                              1. 3

                                                I like Luke’s perspective that some distros should do different things. I think it’s reasonable for Arch to be a ‘power user distro’ that is willing to bork itself. But Pop!_OS is ‘an operating system for STEM and creative professionals’, so it probably should have some safeguards.

                                                That being said, I don’t think Arch should ever be recommended to a brand-new user. Linus shouldn’t even be on Arch, because 1) there should be better resources for picking a good distro for absolute beginners and 2) Pop!_OS should never have shipped that broken a Steam package in the first place.

                                                1. 1

                                                  That being said, I don’t think Arch should ever be recommended to a brand-new user.

                                                  I would qualify this; there are many users for whom Arch was their first distro and it went great, but the key thing is these are not your typical computer users; they are people who are technically minded (not necessarily with deep knowledge of anything in particular, but they’re probably at least the person their friends ask for help), are up for and interested in learning about the system, and generally have been given some idea of what they’re getting into. That is to say, Arch is definitely for “power users,” but that set includes some users who have not actually used Linux before.

                                                  For my part, Arch was the first distro that was actually reliable enough for me to do more than play with; I spent a year or so fussing with other stuff while dual-booting Windows, and Arch is the first one that worked well enough for me to wipe the Windows partition and stay. This was 15 years ago and I haven’t left, though I keep eyeing NixOS these days.

                                                  I think at the time folks still had fresh memories of before Linux desktop environments were even a thing, and there was this mentality that the barrier to entry was mostly around user interfaces. People hadn’t really internalized the fact that Linux had caught up decently well even by then (this was KDE 3.x era), but the problem was stuff needed to work better out of the box, and it needed to not break whenever you upgraded to the next release of the distro.

                                                2. 1

                                                  I mean, the system should refuse to do that

                                                  The system had refused to do that. Then the user told the system to shut up and do as he said. You could argue that this should not be possible, but what if you are in a situation where you have fucked up your packages? The way out should be available within the package manager, because without it you have to work around the package manager yourself by deleting files and changing the database file by hand.

                                                3. 7

                                                  To answer some of your questions:

                                                  First, we don’t get this issue in a similar form, because under Windows there is no package manager accessible to third-party software.

                                                  Technically not true; there is a Windows package manager and has been for a long time, and that’s the Windows Installer (MSI files). There’s also APIs and supported methods for 3rd-party installers to register an install with the system. What’s historically been missing are official package repositories for installing and updating applications (ala. APT, RPM, etc… repos). That’s slowly changing with the Microsoft Store, winget, and others, but this is an area where Linux has long been very far ahead.

                                                  So let’s assume there is a Windows update which removes the wrong file: on install, the update would remove the wrong file and break your system.

                                                  This is incredibly rare. I won’t claim it hasn’t happened, but more common (while still very rare) is an update which causes a bluescreen on boot or exposes a bug in a critical system process. In either case, we’re talking very rare, but I’d suggest that’s true of Linux too.

                                                  Another example: the Steam installer has a bug which removes some necessary files from your Windows installation. Is there anything in Windows that protects you from this bug[1]?

                                                  Yes, several things, and this is a genuine major contrast to Linux. Off the top of my head:

                                                  1. Windows system files cannot be modified by default even with administrator privileges. You can’t simply run an elevated Command Prompt and run the equivalent of rm -rf C:\Windows. That’s because most operating system files are both owned and only writeable by a special account (TrustedInstaller). You can still modify or delete these files, but you have to jump through several hoops. At a minimum, you need administrator privileges (ala. root), and would have to take ownership of the file(s) of interest and subsequently grant yourself the relevant privileges. There are other ways you could gain the relevant access, but the point is it’s not a thing you could do by accident. That’s similarly true for installers, which would also need to take the same approach.

                                                  2. Windows has long had numerous recovery options for when things go pear shaped. Notable ones include Safe Mode and its various permutations (since forever), the ability to uninstall operating system updates (also forever), System Restore (since XP?), System Reset (Windows 10?), and a dedicated recovery partition with a minimal Windows 10 installation to serve as a recovery environment wholly independent of the main operating system. Obviously, none of these are a guarantee for recovery of an appropriately damaged system, but it’s long been the case that Microsoft has implemented numerous recovery/rollback mechanisms.

                                                  On Linux, it’s usually limited to one or more previous kernel versions, and that’s about it? Yes, there’s single-user mode, but that just drops you into a root shell, which is wholly unsuitable for non-experts to use.

                                                  1. 1

                                                    there is a Windows package manager and has been for a long time, and that’s the Windows Installer (MSI files). There’s also APIs and supported methods for 3rd-party installers to register an install with the system.

                                                    I believe we use the same words for different things. When I talk about a package manager, I mean a system which provides packages and resolves dependencies. If I understand your comment correctly, an MSI file installs software and registers it. But there is no way for an MSI file to declare that it is incompatible with version 3.6 of Explorer, so that on install the installer would solve the dependency graph and present what it needs to install and remove.

                                                    On Linux, it’s usually limited to one or more previous kernel versions, and that’s about it?

                                                    This depends on your system. On Debian-based OSes, the packages are still in the package cache, so you can easily downgrade. There are other options which allow easy recovery from such bugs. They are most of the time not set up by default and still require some skill to solve your problem.

                                                    1. 2

                                                      I believe we use the same words for different things. When I talk about a package manager, I mean a system which provides packages and resolves dependencies. If I understand your comment correctly, an MSI file installs software and registers it. But there is no way for an MSI file to declare that it is incompatible with version 3.6 of Explorer, so that on install the installer would solve the dependency graph and present what it needs to install and remove.

                                                      It’s true that MSI (and most competing technologies) generally will not compute and resolve a dependency graph for package installation, but it’s also worth noting this is in part because it’s far less applicable to Windows systems. As the operating system is a single unified system, versus hundreds or even thousands of discrete packages sourced from different projects and maintainers, it’s unusual for an application on Windows to have many dependencies. So in this respect the packaging tools’ functionality is very much a response to the needs of the underlying platform.

                                                      A system with the same sophistication for dependency resolution as the likes of Apt or Yum is simply just not as useful on Windows. Of course, that’s a separate argument from a system which provides a centralised catalogue of software ala. Linux software repositories. That’s an area Windows is very much still playing catch-up on.
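                                                      For what it’s worth, the dependency-graph resolution being discussed here is easy to sketch. The following is a toy example only: the package names are made up, and real resolvers like apt or yum also handle versions, conflicts, and cycles:

```python
def install_order(deps):
    """Return packages ordered so dependencies install before dependents.

    `deps` maps each package to the packages it depends on. This toy
    resolver ignores versions, conflicts, and dependency cycles.
    """
    order, seen = [], set()

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in deps.get(pkg, []):
            visit(dep)
        order.append(pkg)

    for pkg in deps:
        visit(pkg)
    return order

# Hypothetical dependency data, loosely modelled on installing a game client:
deps = {"steam": ["libgl1", "libc6"], "libgl1": ["libc6"], "libc6": []}
print(install_order(deps))  # ['libc6', 'libgl1', 'steam']
```

                                                      An MSI-style installer skips this step entirely: each package ships (or assumes) its own dependencies, which is workable precisely because most Windows dependencies are the OS APIs already present.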

                                                      This depends on your system. On Debian-based OSes, the packages are still in the package cache, so you can easily downgrade. There are other options which allow easy recovery from such bugs. They are most of the time not set up by default and still require some skill to solve your problem.

                                                      I think we have different definitions of easy here. Typically such an approach would minimally involve various command-line invocations to downgrade the package(s), potentially various dependency packages, and relying on cached package installers which could be removed at any time is less than ideal. Given that upstream repositories usually don’t, to my knowledge, maintain older package versions, once the cache is cleaned, you’re going to be in trouble. The point I’d make is that if something goes wrong with package installation that breaks your system, on most Linux distributions the facilities to provide automated or simple rollback are fairly minimal.

                                                      1. 1

                                                        As the operating system is a single unified system, versus hundreds or even thousands of discrete packages sourced from different projects and maintainers

                                                        I doubt that Windows itself is not a modular system. The updater itself must also have some sort of dependency management. FreeBSD, another unified OS, is currently working on a package management system for its base system.

                                                        A system with the same sophistication for dependency resolution as the likes of Apt or Yum is simply just not as useful on Windows

                                                        Why not? Currently all software ships its dependencies on its own, and an updater has to be implemented in each piece of software. Maybe not with one big graph for all software, but with a graph for each installed program and with duplicate elimination.

                                                        1. 1

                                                          I doubt that Windows itself is not a modular system. The updater itself must also have some sort of dependency management. FreeBSD, another unified OS, is currently working on a package management system for its base system.

                                                          You’re right, Windows itself is very modular these days, but the system used for managing those modules and their updates is independent of other installers (inc. MSI). There’s some logic to this, given the updates are distributed as a single cumulative bundle, and MS clearly wanted to design something that met Windows needs, not necessarily broader generalised package dependency handling requirements. The granularity is also probably wrong for a more general solution (it’s excessively granular).

                                                          On my system, there’s around ~14,600 discrete components, going off the WinSxS directory.

                                                          Why not? Currently all software ships its dependencies on its own, and an updater has to be implemented in each piece of software. Maybe not with one big graph for all software, but with a graph for each installed program and with duplicate elimination.

                                                          Several reasons. One is that most Windows software is predominantly relying on Windows APIs which are already present, so there’s no need to install a multitude of libraries to provide required APIs as is often the case on Linux. They’re already there.

                                                          Where there are 3rd-party dependencies, they’re usually a small minority of the application size, and the fact that software on Windows is much more likely to be closed source means it’s harder to standardise on a given version of a library. So if you were to try and unbundle 3rd-party dependencies and have them installed by package manager from a central repository, you’d also need to handle multiple shared library versions in many cases.

                                                          That’s a soluble problem, but it’s complex, and it’s unclear if the extra complexity is worth it relative to the problem being solved. I suspect the actual space savings would be minimal for the vast majority of systems.

                                                          I’m not saying it’s a bad idea, just that it’s solving a problem I’d argue is far less significant than in *nix land. Again, all of this is independent of centralised package repositories, as we’re starting to see with winget, scoop, choco, etc …

                                                    2. 1

                                                      On Linux, it’s usually limited to one or more previous kernel versions, and that’s about it?

                                                      https://documentation.suse.com/sles/11-SP4/html/SLES-all/cha-snapper.html

                                                      By default Snapper and Btrfs on SUSE Linux Enterprise Server are set up to serve as an “undo tool” for system changes made with YaST and zypper. Before and after running a YaST module or zypper, a snapshot is created.

                                                      1. 1

                                                        Excellent. Like ECC RAM, those who are already expert enough to devise ways to do the task are given the tools.

                                                        This doesn’t happen on mainstream user-friendliness-oriented distros.

                                                        I do wonder about a Nix-based highly usable distribution. All the tools are there to implement these protections, lacking only a general user interface.

                                                        1. 1

                                                          I think that’s an unfair summary. Implementing this properly takes time and few distros have even started to default to filesystems where this is possible. It’s coming to desktops too: https://fedoraproject.org/wiki/Changes/BtrfsWithFullSystemSnapshots

                                                          1. 1

                                                            Of course it’s coming.

                                                            I still think the criticisms are valid and help drive the arrival of these technologies for the common user.

                                                        2. 1

                                                          That’s pretty cool. I hope it becomes more widely accessible on end-user distributions (I expect SLES is a pretty tiny minority of the overall desktop/laptop Linux userbase).

                                                      2. 1

                                                        bug

                                                        It was a good old package conflict, wasn’t it? The normal way this happens is if you try to install a package from a foreign distro.

                                                        Different distros have different versions of packages, so unless the foreign package’s dependencies happen to line up with every installed package, the only way to install the foreign package is going to be to uninstall everything that depends on a conflicting version of something, which can be quite a lot.

                                                        If so, I wouldn’t call it a “bug”, since that’s a term used for software – the package manager itself, not its input. For user expectations, this means that bugs are fixable, whereas package conflicts (at least of the self-inflicted kind) are not. The software can only heuristically refuse to do stupid things.

                                                      1. 4

                                                        There’s no doubt this is a bad look, but it’s not a BitLocker bypass. The vulnerability has absolutely nothing to do with BitLocker, and will work on systems meeting the constraints the author specifies with or without BitLocker. How the claim of any relation made it in at all, let alone into the headline, I have absolutely no idea.

                                                        Addressing the vulnerability itself, it goes to show the perils of running any more code than you absolutely have to in a privileged context, especially in security-critical code like the login/lock screen. It’s a shame, but unsurprising, that so many of these issues come from accessibility-related functionality. Not because it’s necessarily bad code, but because it’s inherently complex (magnifiers, narrators, on-screen keyboards, internationalization, etc …), and complexity tends to be the enemy of security.

                                                        I saw just recently XScreenSaver 6.0 has been released:

                                                        These changes greatly reduce the amount of code running in the “critical” section: the part of the code where a crash would cause the screen to unlock. That critical section is now only around 1,800 lines of code, a reduction of roughly 87%.

                                                        How much code do you think is running in a privileged context on the Windows lock screen where bugs could cause a security issue? Surely many tens of thousands of lines, conservatively …

                                                        1. 20

                                                          I’m fairly privacy-focused, but the hard-line “any unique data sent anywhere is a form of spyware” philosophy really puts me off (although I accept that others disagree).

                                                          Some of these issues are complex and involve difficult tradeoffs. Take Firefox’s use of Google Safe Browsing service, for example. The site’s Firefox article says that this is spyware and Allegedly used to protect you from “phishing” websites.

                                                          It’s unhelpful to ignore that this service absolutely will protect some users from phishing and malware, things that for an individual can cause much larger privacy breaches and significant harm than even the most snoopy mainstream browser.

                                                          Nuanced (but more difficult) questions include:

                                                          • Is the privacy cost of sending an IP address, a 32-bit hash prefix of the URL, and a single-purpose local installation identifier to a Google service worth the benefit of being able to use their crowdsourced malware database?
                                                          • Even if it is worth it, can we do better?
                                                          • When Firefox accesses this API, does Google follow the Chrome policy of only keeping the IP and identifier for a request for up to 30 days? If yes, is that policy acceptable?
                                                          • If Firefox switched to a different default service, would it be as effective (unfortunately Google/Chrome has scale on their side here), and would it be better privacy-wise?
                                                          • Are users sufficiently aware of this service and what data is sent? This is a hard one, because explaining complex things is hard - it’s much harder than messages like “Firefox respects your privacy” or “Firefox is spyware”, which are appealing precisely because they’re simple. (I personally think this explanation is pretty good, though I noted that the link to Google’s privacy policy didn’t seem to have an answer about their use of Safe Browsing data - the only info I could find was in the Chrome whitepaper linked above.)
                                                          1. 14

                                                            I’m also a little bothered by the simplistic thinking that seems to be behind this site. For instance, a common criticism against Pale Moon is that it’s basically an old version of Firefox with many of the fixed CVEs still in there. Putting that under “best privacy” is very misleading and potentially dangerous, because anyone who really, absolutely needs high levels of privacy (say, dissidents under a dictatorship) is at higher risk running such a browser because it’s much easier to attack. You don’t even need zero-days.

                                                            1. 11

                                                              Second this. I’ve spent and continue to spend a ridiculous amount of time ensuring my personal setup and the systems I administer are configured to minimise telemetry and the usage of (mis)features which infringe on my privacy or that of their users. However, there are trade-offs. Labeling anything which may entail some privacy risk as “spyware” is unhelpful. That’s particularly the case for genuine security features which implicitly need to communicate some information which may be personally identifiable.

                                                              There’s always going to be a grey area, but as the parent points out, features like Safe Browsing, which obfuscate the information sent via a cryptographic hash, from my PoV have a good-faith design that minimises the privacy implications while providing the security benefit, acknowledging that the nature of the feature makes communicating some information necessary.

                                                              • Do I personally feel I need that feature? No.
                                                              • Is it of clear benefit to the vast majority of users? Yes, absolutely.
                                                              • Should it be disabled in Tor Browser? Yes, definitely.
                                                              • Do I think it’s fair to describe it as spyware? No, that’s a distortion of the intent.

                                                              It’s possible for a given system to have privacy risks and provide security benefits.
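                                                              To make the hash point concrete, here’s a rough sketch of the idea (illustrative only — the real Safe Browsing protocol canonicalises URLs first and batches lookups): the client hashes the URL locally and sends only the first 32 bits, so the full URL never leaves the machine.

```shell
#!/bin/bash
# Illustrative sketch of Safe Browsing's privacy mechanism: send a short
# hash prefix to the server, not the URL itself. The real client performs
# URL canonicalisation before hashing; this skips that for brevity.
url="example.com/some/private/path"

full_hash=$(printf '%s' "$url" | sha256sum | cut -d' ' -f1)
prefix=${full_hash:0:8}   # 8 hex chars = 4 bytes = 32 bits

echo "full hash (stays local): $full_hash"
echo "prefix (sent to server): $prefix"
```

Many different URLs share any given 32-bit prefix, which is what limits how much the server can infer from a single lookup.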

                                                            1. 6

                                                              I hope PowerShell gets more traction on non-Windows platforms as it really is a fantastic shell, particularly given it’s now open-source. However, much of its power is inherently very dependent on PowerShell native commands; if you’re just calling native binaries which output text then a huge amount of its benefit is lost as you’re no longer dealing with an object pipeline. Once you’re back to parsing raw text output, frankly, the native *nix tools are superior (grep, sed, awk, cut, etc …).

                                                              PowerShell up-take on Windows was less of an issue, as coming from cmd and the sprawl of completely inconsistent and usually terrible native CLI tools, it’s not like there was an existing ecosystem anyone not suffering from Stockholm Syndrome actually liked. Also, investing in PowerShell was filling enormous gaps in Windows CLI support, rather than reinventing existing tools. But for *nix platforms, the tools do exist, and they are solid. The syntax may be inconsistent and arcane, but the tools are there.

                                                              This is a core rationale behind PowerShell Crescendo. Instead of a likely futile attempt to achieve parity with the existing tooling, leverage it by intelligently wrapping it. Whether it can actually achieve what it aims for, I have no idea …

                                                              1. 3

                                                                I think the biggest mistake PowerShell made was the name. It’s a great interactive scripting environment but it’s a fairly mediocre shell. The main job that a shell does is run external programs and that’s the thing that PowerShell is worst at. It also takes a long time to start (it needs a .NET VM, and on my laptop it takes 15 seconds to start a new shell with the PowerShell modules that I’ve got installed), which is a deal breaker for scripts, since they don’t run in the same process as the interactive environment (and it’s not great for interactive use either, when opening a new Terminal tab is slow).

                                                                On the other hand, if everything that you want to do is supported natively by PowerShell and you never need to invoke anything external, it’s a nice environment. The second biggest mistake PowerShell made was to standardise verb-noun instead of noun-verb, which made tab completion less useful than it should be.

                                                                1. 2

                                                                  It also takes a long time to start (it needs a .NET VM and on my laptop it takes 15 seconds to start a new shell with the PowerShell modules that I’ve got installed)

                                                                  Are you using Windows PowerShell 5.1 or PowerShell Core (6+)? The performance difference is dramatic. On my fairly heavy profile with numerous modules:

                                                                  PowerShell 5.1
                                                                  Loading personal and system profiles took 5096ms.
                                                                  
                                                                  PowerShell 7.1
                                                                  Loading personal and system profiles took 1509ms.
                                                                  

                                                                  Some code-paths are particularly pathological. I’ve seen instances where 5.1 takes 20+ seconds while 7.1 remains at <3 seconds.

                                                                  1. 1

                                                                    You made me curious, so I installed 7.1 and copied my profile.ps1 to the new location. It looks as if Windows PowerShell 5.1 is now faster; it takes only 1.5-2s to start. 7.1 takes 1.2-1.3s to start. Bash in WSL, including a very complex .bashrc and starting ssh-agent, is still perceptibly faster than both; /bin/sh (a lightweight statically linked POSIX shell on FreeBSD, dash on Ubuntu) is significantly faster than either.

                                                              1. 16

                                                                If you’re at the point where you need to parse flags, like in this example, you’re no longer writing “a simple script”: it’s now a full fledged program. Do yourself a favor and use an actual programming language. Yes, Bash can technically do a lot, but as someone who works on a project centered around 100k+ lines of Bash, it’s going to slow you down and introduce its own terrible categories of bugs.

                                                                1. 11

                                                                  I struggle with this a lot, because there’s definitely some truth to this. For me the test is usually “is the primary role of this script/app to just call other binaries”. If the answer is yes I lean to shell scripts, as I’m unconvinced writing e.g. Python with subprocess calls, or C# with System.Diagnostics.Process, etc … represents an improvement. It’s likely to be quite a bit longer with all the extra process management, with minimal gain if you’re writing Bash in a reasonably disciplined way (e.g. use shellcheck, consider shfmt).

                                                                  Part of that discipline for me is the exact “boilerplate” which a template like the linked article provides.

                                                                  EDIT: Obviously once we’re talking 100k+ or even 10k+ lines of Bash we’re in an entirely different realm, and the OP has my deepest sympathies.

                                                                  1. 7

                                                                    Wow really 100K lines of bash? What does it do?

                                                                    I keep track of such large programs here:

                                                                    https://github.com/oilshell/oil/wiki/The-Biggest-Shell-Programs-in-the-World

                                                                    There are collections of scripts that are more than 100K lines for sure, but single programs seemingly top out around 20-30K… I’d be interested to learn otherwise!

                                                                  1. 7

                                                                    About a hundred lines, including argument parsing, terminal color management, and helper functions.

                                                                    I suspect a lot of folks will click the title out of latent anxiety about their own quick and dirty scripts. There’s nothing wrong with quick and dirty scripts.

                                                                    Perhaps a hundred lines of throat clearing is “minimal” for internal tools used by larger teams, for shell scripts shipped as primary user or installation interfaces of products, or to meet standing policies about user interfaces. The “minimum” for most scripts I’ve written or helped maintain is a valid shebang. Maybe set -e.

                                                                    1. 7

                                                                      Agreed. I also think that when you’re writing scripts for consumption by anyone that isn’t just you (like an internal tool), you should think about removing Bash-isms to avoid issues with different shells.

                                                                      1. 6

                                                                        He does indicate it’s specifically meant to be run by bash via a shebang. Unless you really need to support multiple shells, the extra pain is just not worth it; bash is pervasive enough it’s a fair baseline for the vast majority of typical use-cases.

                                                                        The alternative is lowest common denominator POSIX and that’s hard work. Sometimes it’s necessary, but it’s not pretty (not that bash is either, but it’s certainly going to be more verbose). The salt-bootstrap.sh script is a nice example of the approach.
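                                                                        As a small illustration of that verbosity trade-off (hypothetical snippets, assuming a dash/ash-style POSIX sh), here are two common bash-isms next to their portable equivalents:

```shell
#!/bin/sh
# bash: if [[ $name == foo* ]]; POSIX sh: case patterns instead
name="foobar"
case $name in
  foo*) echo "prefix match" ;;
  *)    echo "no match" ;;
esac

# bash: lower=${var,,} lowercases in-shell; POSIX sh: shell out to tr
var="MiXeD"
lower=$(printf '%s' "$var" | tr '[:upper:]' '[:lower:]')
echo "$lower"
```

Neither form is pretty, but the POSIX versions run unchanged under dash, ash, busybox sh, and bash alike.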

                                                                      2. 3

                                                                        And set -u, so undefined variables don’t silently work (typos make this bad), and set -o pipefail so failed piped programs stop execution. And then -euo pipefail in every subshell, and don’t forget to split up VAR=$(set -e && dosomething) and export VAR, because export swallows failed exit codes…

                                                                        It’s really far too difficult to write correct bash, mostly it’s best to just avoid it.
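                                                                        A minimal sketch of that export pitfall (hypothetical commands): under set -e, the combined form sails straight past a failing command substitution because export’s own exit status is 0, while the split form would abort as expected.

```shell
#!/bin/bash
set -euo pipefail

# BAD: the command substitution fails, but export's own exit status is 0,
# so set -e never fires and the script carries on silently (shellcheck
# flags this as SC2155).
export BROKEN=$(false)
echo "still running after the masked failure (BROKEN='${BROKEN}')"

# GOOD: a plain assignment propagates the substitution's exit status,
# so set -e would abort right here if the command failed.
GOOD=$(echo "ok")
export GOOD
echo "GOOD=$GOOD"
```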

                                                                      1. 8

                                                                        I believe 2>&1 >/dev/null could be shortened to >&/dev/null?

                                                                        Also I wonder why un-trap in cleanup? Is the script expected to call the func not only at exit?

                                                                        Also, try using shellcheck whenever possible.
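                                                                        For what it’s worth, those two forms aren’t equivalent: redirections apply left to right, so 2>&1 >/dev/null points stderr at the original stdout before stdout is silenced, whereas >/dev/null 2>&1 (or bash’s &>/dev/null shorthand) silences both. A quick sketch:

```shell
#!/bin/bash
emit() { echo "out"; echo "err" >&2; }

# stderr is duplicated onto the *original* stdout first, then stdout is
# discarded — so "err" survives (here it's captured by the substitution).
a=$(emit 2>&1 >/dev/null)

# stdout is discarded first, then stderr follows it — nothing survives.
b=$(emit >/dev/null 2>&1)

echo "a='$a'"   # a='err'
echo "b='$b'"   # b=''
```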

                                                                        1. 5

                                                                          Guarding against recursion. If something goes wrong in cleanup() you’ll trap again and eventually bash itself will blow its stack.
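                                                                          A minimal sketch of that guard (the temp-file work is hypothetical): clearing the trap on entry means a failure inside cleanup() can’t re-trigger it.

```shell
#!/bin/bash
set -euo pipefail

cleanup() {
  # Clear the traps first: if anything below fails, we won't re-enter
  # cleanup() and recurse until bash blows its stack.
  trap - EXIT ERR
  rm -f "$tmpfile"
  echo "cleaned up"
}

tmpfile=$(mktemp)
trap cleanup EXIT ERR
echo "working with $tmpfile"
# cleanup() runs exactly once on exit, whether via error or normal completion.
```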

                                                                          1. 4

                                                                            Shellcheck is the golden ticket. I use it on all my scripts and don’t consider code done until it passes. It’s taught me so much!

                                                                          1. 6

                                                                            This is not dissimilar to my own bash-script-template project. In fact, some parts of it are … really similar?

                                                                            EDIT: Oh, I see my repo was linked near the bottom. So I guess some inspiration was drawn!

                                                                            1. 20

                                                                              I thought this was going to be about CPU activity but it’s regarding the network activity from each system. Unsurprisingly Windows is more “chatty”, but to be honest, less so than I expected and there aren’t really any surprises. A few notes from skimming the article as to some connections the author seems unsure about:

                                                                              home.lan
                                                                              This is presumably the default DNS domain for Windows when not connected to a corporate domain. The Windows DNS client appends the primary DNS domain of the system to unqualified queries (see DNS devolution for the grotty details).

                                                                              As for the queries, wpad will be Web Proxy Auto-Discovery (which is a security disaster-zone, but that’s another story), the ldap one is presumably some sort of probe for an AD domain controller, and the rest I’m guessing are captive portal or DNS hijacking detection, which could be either Windows or Chrome that’s responsible.

                                                                              Intel
                                                                              No chance this is Windows itself. Pretty much guaranteed to be the Intel graphics driver, specifically the Intel Graphics Command Center which was probably automatically installed.

                                                                              Microsoft
                                                                              The 4team.biz domains are definitely not Microsoft but some 3rd-party vendor of software within that ecosystem. So it turns out there’s at least one legitimate company out there that actually uses a .biz domain!

                                                                              The rest are largely telemetry, error reporting, Windows updates, PKI updates (CAs, CRLs, etc …), and various miscellaneous gunk probably from the interactive tiles on the Start Menu. Microsoft actually does a half-decent job these days of documenting these endpoints. A few potentially helpful links:

                                                                              1. 2

                                                                                There was another thing that surprised me, namely that Windows appears to connect to a Spotify-owned domain. I asked the author if he had installed Spotify, which he hadn’t.

                                                                                1. 4

                                                                                  Isn’t there a tile for Spotify in W10 by default?

                                                                                2. 2

                                                                                  I thought that a bunch of these are moving to DNS-over-HTTPS with some built-in resolution servers, which would then completely bypass his private DNS server?

                                                                                1. 3

                                                                                  Yelling at SharePoint.

                                                                                  1. 1

                                                                                    I used to use OneNote, but these days I’m a huge fan of Dynalist.

                                                                                    1. 4

                                                                                      Does anybody know if or when the incompatibility between WSL2 and VMware will be resolved? I still find VMware Workstation very useful and it would be great to have both. (Relevant WSL2 FAQ for anybody who wasn’t aware of this.)

                                                                                      1. 3

                                                                                        VMware Workstation 16 is currently in open technical preview and is compatible with Hyper-V (which WSL2, Device Guard, Credential Guard, etc … require). You need to be running Insider build 19041 or newer, which Windows 10 2004 meets but the current 1909 release does not. They recently posted an updated technical preview build as well.

                                                                                        1. 1

                                                                                          If I could run VirtualBox or VMware instead of WSL I would. More options for networking, device passthrough, sharing folders. The advantage of WSL for me is that it’s a Windows feature, can be managed via policies, and isn’t “something extra” to install. WSL2 feels like just a VM without the management hassle.

                                                                                        1. 1

                                                                                          Not sure where mine fits in the scheme of typical VS Code customisation but it works well for me.

                                                                                          1. 1

                                                                                            At ~850 lines my vimrc is admittedly a bit ridiculous. But it’s well commented and so maybe someone will find a few nice tweaks for their own setup in there :-)

                                                                                            1. 8

                                                                                              If you find yourself writing (or debugging) bash (or sh) you should use shellcheck as well https://www.shellcheck.net/

                                                                                              1. 3

                                                                                                Great advice, shellcheck is awesome.

                                                                                                I always use shellcheck, and all of my bash scripts start with

                                                                                                #!/bin/bash
                                                                                                
                                                                                                set -euo pipefail
                                                                                                

                                                                                                Together those two things save me from so much frustration!
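                                                                                                A small illustration of why the pipefail part matters (sketch; -e is deliberately left off here so both statuses can be printed): without it, a pipeline’s exit status is just that of its last command.

```shell
#!/bin/bash
# set -e is intentionally omitted so the script survives to print both results.

false | cat
echo "without pipefail: $?"   # 0 — cat succeeded, false's failure is lost

set -o pipefail
false | cat
echo "with pipefail:    $?"   # 1 — false's failure now surfaces
```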

                                                                                                1. 4

                                                                                                  I can’t help but point out that should be #!/usr/bin/env bash. But good work on the set statement. Sorry ;)

                                                                                                  1. 3

                                                                                                    Well, I guess it depends. I mostly write shell scripts for my job as sysadmin, and I prefer to use /bin/bash for those. Not all systems are fully under my control, and /usr/bin/env bash would mean that I’d be at mercy of the PATH variable. Since I know that I have system bash available in /bin/bash I prefer to hardcode that.

                                                                                                    For more portable scripts, or for scripts in other languages, yes, then I’d use /usr/bin/env.

                                                                                                    1. 3

                                                                                                      Another thing I like to do is to fail fast; avoid trying to do error handling unless there’s a good reason to do so. It’s often just as good to just fail hard and log what went wrong.

                                                                                                      I sometimes add a trap for the ERR pseudo-signal, which causes bash to call the trap function when a command fails:

                                                                                                      A trap on ERR, if set, is executed before the shell exits.

                                                                                                      Something like this:

                                                                                                      declare -ri             \
                                                                                                              EXIT_SUCCESS=0  \
                                                                                                              EXIT_WARNING=1  \
                                                                                                              EXIT_CRITICAL=2 \
                                                                                                              EXIT_UNKNOWN=3
                                                                                                      
                                                                                                      declare -i EXIT=$EXIT_UNKNOWN
                                                                                                      declare STATUS='UNKNOWN - Exit before logic'
                                                                                                      
                                                                                                      function _exit() {
                                                                                                        # Status and quit
                                                                                                        echo "${STATUS}"
                                                                                                        exit $EXIT
                                                                                                      }
                                                                                                      
                                                                                                      trap _exit ERR
                                                                                                      

                                                                                                      This ensures that whenever the script exits due to an error, my exit handler function gets called (trapping EXIT as well would also cover successful exits). The particular snippet above is from an Icinga check I’m writing right now :)

                                                                                                      1. 1

                                                                                                        From my live NixOS system:

                                                                                                        bb010g ~ % file /{,usr/}bin/bash
                                                                                                        /bin/bash:     cannot open `/bin/bash' (No such file or directory)
                                                                                                        /usr/bin/bash: cannot open `/usr/bin/bash' (No such file or directory)
                                                                                                        
                                                                                                        1. 2

                                                                                                          Yes, which is why I wrote that for more portable scripts I’d use /usr/bin/env ;)

                                                                                                          (…that does exist, doesn’t it?)

                                                                                                          I’ve played around with Guix a bit, same thing there. But the systems I manage are neither GuixSD nor NixOS, so my approach works just fine for my purposes.
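                                                                                                        A quick illustration of the portability point (a hypothetical demo, not from the thread): `env(1)` is at `/usr/bin/env` on virtually every Unix, NixOS and GuixSD included, so an `env`-based shebang resolves bash via `PATH` instead of a hard-coded `/bin/bash`.

```shell
#!/usr/bin/env bash
# env searches PATH for the interpreter, so this works even on systems
# where /bin/bash does not exist (e.g. NixOS, GuixSD).
bash_path="$(command -v bash)"
echo "bash resolved to: $bash_path"
```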

                                                                                                1. 2

                                                                                                  PowerShell has a horrifying number of “gotchas” you need to be aware of for complex scripts/modules.

                                                                                                  1. 5

                                                                                                    Hey, it beats teamcity.

                                                                                                    1. 1

                                                                                                      What’s wrong with TC? My own experience with it has always been pretty great, as it has been with most of their tools, so I’m curious to hear other viewpoints.

                                                                                                      1. 2

                                                                                                        To be fair, I had a bad experience with it, but that was ~5 years ago and could easily be attributed to my being green in the field. I just found it hard to understand and easy to accidentally break things with.

                                                                                                    1. 2

                                                                                                      I’m an Australian and am in something close to a state of despair between this legislation passing with the help of our main and utterly useless opposition, and several unrelated non-technology political issues. Against seemingly all expert advice, both legal and technical (excluding of course the LEOs lobbying for it), the legislation was passed without time for the vast majority of the politicians voting on it to even read it, spurred by a scare campaign by the government claiming these laws are necessary to prevent terrorism over the Christmas period.

                                                                                                      And this all comes shortly after Huawei was blocked from participating in the build of the national 5G network on the grounds that their technology might be backdoored by the Chinese government …