1. 7

    I had mixed feelings about this. They’re no longer mixed: this just seems very disrespectful.

    Mostly, it can be summed up as “can you just not?”

    This project is famous because of why it came to be, and who created it. It was the result of Terry’s mental illness which was also the unfortunate cause of his death. Trying to piggyback on it feels incredibly disingenuous and almost disrespectful, especially given the circumstances around its creation.

    Let it rest with Terry.

    EDIT: I completely forgot about that description - “A TempleOS distro for heretics” - this just sounds incredibly disrespectful.

    1. 25

      Trying to piggyback on it feels incredibly disingenuous and almost disrespectful, especially given the circumstances around its creation. Let it rest with Terry.

      I think that, on the contrary, we should use, study, and think about the lessons Terry gave us in the form of TempleOS. There is nothing disrespectful in trying to build on his legacy.

      1. 16

        I agree - Terry didn’t take his software with him to the grave - he shared it with the world and made it public domain.

        1. 9

          Gotta agree here as well. I can appreciate that great care should be taken to give proper respect to the nature of his illness and personal circumstances but that said I think his work has a LOT of interesting technical details that could be studied and might inspire others.

        2. 13

          So in your opinion, Terry’s work should just be left to wither and die, out of respect for him. Would this be true if he hadn’t been mentally ill? Would it be true if he hadn’t died?

          You assume the author of Shrine has forked TempleOS to acquire fame. What if he didn’t?

          1. 10

            It was released before Terry’s passing

            1. 1

              That doesn’t change much.

              1. 5

                Sincere question here: should the entire body of work of someone who became mentally ill, parts of which may have been a product of the delusions they experienced, be excluded from future scholarship out of respect?

                (The answer may very well be yes - I’m just trying to understand the point you’re making.)

            2. 6

              I’m confused that this is upvoted so highly. Why is it disrespectful? It’s a public-domain fork of a public-domain project. Why do people need to be so uptight about everything?

              1.  

                I’m absolutely in favor of studying TempleOS, it was clearly meant for that. Just, yes, this particular approach does seem intentionally disrespectful. It’s a matter of opinion; I certainly won’t try to stop them from doing that, but that’s my opinion.

                1.  

                  It was the result of Terry’s mental illness

                  I think it is a result of Terry’s genius and his mental illness. And projects like this make it easier to utilize and explore Terry’s great work.

                1. 11

                  The ads themselves may be quite innocent. It is the slippery slope that they may lead to that worries me. If Canonical is putting ads into motd, why wouldn’t the author of tmux (I am just taking a completely random utility) start every tmux session with an advertisement for their Patreon page, and perhaps advertisements for their highest-paying Patreon supporters? If this would really get off the ground, it can only lead to two equally bad outcomes: (1) many useful programs get littered with ads; or (2) Canonical acts as a gatekeeper and patches out all such ads, which would not result in a level playing field.

                  Note that there is some precedent in free software. GNU (!) parallel displays this annoying message:

                  Academic tradition requires you to cite works you base your article on.
                  If you use programs that use GNU Parallel to process data for an article in a
                  scientific publication, please cite:
                  
                    O. Tange (2018): GNU Parallel 2018, Mar 2018, ISBN 9781387509881,
                    DOI https://doi.org/10.5281/zenodo.1146014
                  
                  This helps funding further development; AND IT WON'T COST YOU A CENT.
                  If you pay 10000 EUR you should feel free to use GNU Parallel without citing.
                  

                  First, this pollutes the terminal. Secondly, why do I have to cite them and not every other piece of software that is only marginally relevant to the actual research (Rust, gcc, coreutils, …)? Finally, why does the GNU project even permit this, given that you are not allowed to impose additional restrictions on use or distribution beyond the GPL?
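                  For what it’s worth, GNU parallel does ship a way to silence the notice; the flags below are from its own documentation, but double-check `parallel --help` on your version before relying on them:

                  ```shell
                  # Run once per user: parallel asks you to type "will cite"
                  # and then stops printing the citation notice permanently.
                  parallel --citation

                  # Or suppress it for a single invocation:
                  parallel --will-cite echo ::: 1 2 3
                  ```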

                  1. 7

                    I tend to over-cite free software that I happily use. But in the case of gnu parallel, the ad is so obnoxious that I don’t, as a matter of principle.

                    1. 5

                      why wouldn’t the author of tmux (I am just taking a completely random utility) start every tmux session with …

                      Or why wouldn’t the author of vim start every vim session with instructions to help his favorite charity? Oh. Wait…

                      He’s done that for many years without any blowback. Every packager I’ve noticed chooses to leave it intact even though it would be trivial for a packager to change. Try typing :help uganda or :help iccf at any vim command prompt.

                      I guess I’m not so much saying your “slippery slope” is wrong. I’m really saying that the slope has been in place forever, it’s no more slippery than it used to be, and the results haven’t been bad. Neither your (1) nor your (2) only possible “equally bad outcomes” have happened.

                      1. 5

                        The vim message is unobtrusive for normal usage of vim (opening an already existing file). Moreover, it disappears silently, without leaving rubbish on your terminal. The GNU parallel message pollutes your output stream in an unacceptable way. It is very different. Thus, nobody complains about vim’s ad (and most people won’t even notice it, since it is rare to launch vim without an argument), while many people, rightfully, complain about GNU parallel’s.

                        1. 5

                          Not being a parallel user, I wasn’t aware of that one prior to reading the comment I replied to. Does it not have the decency to spit that to stderr instead of stdout? That’s kind of rotten.
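                          To illustrate why the stdout/stderr distinction matters here: a tool that writes its notice to stderr leaves stdout clean for pipelines. A generic sketch (a toy stand-in, not GNU parallel’s actual behavior):

                          ```python
                          import subprocess
                          import sys

                          # A toy "tool" that prints a notice on stderr but its data on stdout.
                          tool = "import sys; sys.stderr.write('notice\\n'); sys.stdout.write('data\\n')"
                          result = subprocess.run([sys.executable, "-c", tool],
                                                  capture_output=True, text=True)
                          print(repr(result.stdout))  # 'data\n'   -> safe to pipe onward
                          print(repr(result.stderr))  # 'notice\n' -> discardable with 2>/dev/null
                          ```

                          If the notice went to stdout instead, it would be mixed into the data stream and every downstream consumer would have to filter it out.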

                          I guess I must also launch vim with no argument more than most people do :). vim -> ,e -> select file is a frequent workflow for me.

                        2. 4

                          Or why wouldn’t the author of vim start every vim session with instructions to help his favorite charity? Oh. Wait he’s done that for many years without any blowback. Every packager I’ve noticed chooses to leave it intact even though it would be trivial for a packager to change.

                          I think most maintainers would find removing an invitation to donate to a charity unethical, or do not want to deal with the backlash. Or maybe it does not bother them ;). It’s hard to say…

                          Neither your (1) nor your (2) only possible “equally bad outcomes” have happened.

                          I completely agree that the slope has been around for a while; I even mentioned another example. But I think it has an impact when more highly visible parties, such as Canonical, do such things. And the thing with slippery slopes is that things tend to speed up once the ball starts rolling. Look at subscriptions on the Mac: in the beginning a very small number of applications switched to subscriptions and there was a large public outcry. Now every 1-2 weeks some Mac application switches to a subscription model.

                          Not that I believe this would fly very far on FLOSS operating systems in general, because their maintainers will intervene when it gets out of hand.

                          1. 2

                            I do believe that most packagers would simply decline to package rather than remove a plug for such an uncontroversial charity as ICCF or Oxfam (the two most common free-software charity plugs that spring to mind).

                            the thing with slippery slopes is that things tend to speed up when the ball starts rolling

                            Fair point!

                            And the thing with slippery slopes is that things tend to speed up when the ball starts rolling. Look at subscriptions on the Mac, in the beginning a very small number of applications switched to subscriptions and there was a large public outcry. Now about every 1-2 weeks some Mac application model switches to a subscription model.

                            Got a link to some examples? I’ve been mostly de-Mac’d for almost two years now (due to dissatisfaction with their laptop hardware options) and had not noticed that trend before I shifted over to Linux.

                            1. 1

                              Got a link to some examples? I’ve been mostly de-Mac’d for almost two years now (due to dissatisfaction with their laptop hardware options) and had not noticed that trend before I shifted over to Linux.

                              An incomplete list. I don’t think there is a definite list:

                              • Adobe (2012), they really started to get the ball rolling.
                              • 1Password (2016), though they still have a well-hidden standalone version
                              • Text Expander (2016)
                              • Ulysses (2017)
                              • Day One (2017)
                              • Git Tower (2018)
                              • Gemini (2018???)
                              • Capo (2018)
                              • Drafts (2018)
                              • Quicken (2018)
                              • Airmail (2019)
                              • BBEdit (2019, but only app store?)
                              • MindNode (2019)
                              • VirtualHostX Pro (2019)
                              • Enpass (2019)
                              • Pocket casts (2019)
                              • Fantastical (2020)

                              I primarily switched back to Linux because of (the power of) NixOS. But the other large factor was: ‘are the apps that I rely on going to switch to the subscription model next?’ Of these, I have used Adobe Lightroom (still no good replacement), 1Password, Airmail, and Fantastical at some point.

                            2. 1

                              Look at subscriptions on the Mac, in the beginning a very small number of applications switched to subscriptions and there was a large public outcry.

                              Is your issue with subscriptions that they lead to higher overall costs for consumers?

                              1. 2

                                The inversion of control.

                                With perpetual licensing, it was our choice as consumers to decide whether a new version was worth it and whether we agreed with the pricing. It also makes financial planning easier: you can buy the upgrade whenever it suits you financially.

                                With subscriptions the app developer is in control. If they have some amount of lock-in, they can raise prices whenever they want. They can choose to never do any meaningful updates, but you still continue to pay. You are not the owner of a license anymore, but completely at the mercy of the developer. Worst case, they go bankrupt and your software won’t work anymore.

                                For some types of software, I understand the motivation: if you have a program that does just one thing, there is not a lot of incentive for users to upgrade. But you still have to maintain the software, answer support requests, etc.

                                But it still sucks for the user.

                                1. 1

                                  As long as you (as I do) accept the fact that some software requires payment, the payment structure is a detail.

                                  Subscriptions are popular because they provide a better revenue stream for the developer.

                                  Having an “account” with a vendor has many benefits - no need to keep track of physical media and license keys, easier to integrate backups, etc.

                                  The issues with lack of updates, going out of business etc. are best left to the market anyway. Build a better product and/or provide it for less money, and a vendor will gain customers.

                                  1. 1

                                    The payment structure is not just a detail, and it’s no small one. Why? Because software is not a free market. If an app holds your data and doesn’t provide a way to export it to a format readily importable into another app, then you are not free to just switch to an alternative.

                          2. 2

                            If this would really get off the ground

                            A lot of single-maintainer github repos that I’ve visited recently have links to the maintainer’s patreon account. I don’t believe we’re that far off from popular open source projects asking their audience to “comment, like, subscribe, follow us on Twitter and Instagram” with links to Patreon and so forth right in the main UI.

                            I think it’s only a matter of time before it’s not only common but half-expected that open source project maintainers approach their users with hat in hand. I’m not saying I like it as it smacks just a bit too much of entitlement to me, I’m just saying it feels like it’s inevitable.

                            Finally, why does the GNU project even permit this, since you are not allowed to impose additional restrictions on use/distribution beyond the GPL?

                            In this specific example, the authors are not imposing restrictions on use or distribution; they’re attempting to impose a demand in the name of “academic tradition”. And like most demands, it’s completely unenforceable and in poor taste.

                            1. 2

                              And like most demands, it’s completely unenforceable and in poor taste.

                              Agreed that it’s unenforceable. But I wouldn’t be surprised if colleagues who are less familiar with FLOSS licensing [1] would think that it is a requirement for using GNU parallel.

                              [1] In our field, a lot of people do not use Linux for ethical reasons, but because the machine learning ecosystem is just the strongest on Linux.

                          1. 11

                            That’s weird. My XPS 13 has been extremely well behaved - suspends / resumes flawlessly, wifi & bluetooth just work. Touchpad is fine. etc etc.

                            It’s kind of refreshing to not have to work round some random piece of hardware that just doesn’t work for obscure reasons.

                            Maybe it’s the dual GPU XPS laptops that are particularly bad? Or have they simply got worse again since I bought mine?

                            1. 5

                              An XPS 13 was the inspiration for this rant. I wasted 5 hours on it before washing my hands of the damn thing. Integrated GPU system.

                              1. 9

                                Which model? I have an XPS 13 9360 and haven’t had any problems running stock Fedora on it. Also curious what the problem is, I’m currently looking to buy a second laptop. My pinebook pro isn’t really working quite well enough to be that yet, and the XPS 13 is currently the top contender.

                                1. 4

                                    Yikes… any more info on this? I have an 8550 with Win10 and I’ll be moving to Linux soon, but now is a good time to switch it if I’m gonna have issues.

                                  1. 2

                                    I have the 9360 and an older 9343. I’ve had an ubuntu release on each since I received ’em. I think the 9343 had some functionality challenges out of the box, but there were BIOS fixes available to be applied by the time I purchased it.

                                    The integrated GPU is underpowered for games but otherwise I haven’t encountered any issues with it or anything else with the laptop, really.

                                    1. 3

                                      Currently I’m running a 9360 & used a 9343 at work before that. Both were / are perfectly well behaved.

                                        (I should have bought a 16GB model though; 8GB is a bit tight for dev work in the modern world, sadly. Compiling anything involving LLVM is an exercise in patience.)

                                      1. 3

                                          8GB is a bit tight for dev work in the modern world sadly. Compiling anything involving LLVM is an exercise in patience.

                                        Yeah, amen! My 9360 has 8GB and I use it for Chrome and gnome-terminal so I can remotely access a desktop with 32GB. I think I tried building clang+llvm once on the XPS, but never again.

                                        1. 4

                                          I think I tried building clang+llvm once on the XPS, but never again.

                                            It’s doable but you have to radically reduce the parallelism of the build, otherwise it eats all your memory and goes into swap hell. As a result it takes quite a while.
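                                            For anyone attempting this on an 8GB machine, LLVM’s CMake build has knobs for exactly this. The variable names are real LLVM CMake options, but the job counts below are guesses you would tune to your RAM:

                                            ```shell
                                            # Linking is what eats RAM, so cap link jobs hardest.
                                            cmake -G Ninja ../llvm \
                                              -DCMAKE_BUILD_TYPE=Release \
                                              -DLLVM_PARALLEL_COMPILE_JOBS=4 \
                                              -DLLVM_PARALLEL_LINK_JOBS=1
                                            ninja
                                            ```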

                                  2. 3

                                    How much of this was an issue with the hardware itself vs. trying to use linux on it?

                                    1. 5

                                      I have no idea. I only tried Linux and that should always be enough.

                                      1. 6

                                          Exactly, especially for a so-called “developer” edition. You can’t say it’s a developer machine and then ignore about half the target audience. In the nineties and early aughts, I expected to be marginalized as a Linux user, but nowadays it’s the Windows developers who are often regarded as quaint (assuming web or mobile development; game or desktop development is a whole different ballpark, of course).

                                  3. 5

                                      I only run Linux so I can’t vouch for BSD/Plan 9/Haiku compatibility, but I’ve been using higher-end Dell laptops for the past couple of years (from the Latitude and now Precision lines) and have been mostly happy with them. My only real complaints revolve around the keyboards, but nearly all laptop keyboards are awful in their own way.

                                    Some Lobsters have a seething hatred for all things Intel for whatever reason but when I spec a laptop, I look for one with an Intel CPU, Intel GPU (I do very little gaming), and Intel wifi because whatever their other faults, Intel is awesome at writing and maintaining Linux kernel drivers.

                                    I would like to check out some of the newer Thinkpad models but Lenovo’s website is such garbage that I can’t even tell which models they still sell these days.

                                    1. 4

                                      Intel is awesome at writing and maintaining Linux kernel drivers.

                                        It is very good, but not perfect. My Dell Precision was having random freezes for several months due to a faulty Intel GPU driver. This is a well-known bug in the driver that has been open for months, without an available solution yet. I had to switch to my nvidia graphics (which I had never used on my lab-issued laptop) because the Intel GPU was unusable.

                                      1. 4

                                        I’ve solved a lot of flickering and other weirdness on my old Toshiba laptop by uninstalling the intel driver and letting the system fall back to a generic modesetting driver. Debian and others have made this the default since.

                                    2. 2

                                      That’s weird. My XPS 13 has been a bastard of a thing to deal with. Different USB-C ports seem to have different capabilities each time it resumes (or is that reboots?) and I never know if my external display is going to appear as DP-1 or DP-2. Admittedly, I have (mostly) working suspend/resume, don’t use bluetooth and never buy dual-GPU laptops to avoid that rats’ nest of trouble.

                                      1. 3

                                        The 9360 only has one USB-C port, so I didn’t have this problem :)

                                    1. 5

                                      How do you talk about privacy and digital safety with plain, regular, non-technical people who admit they have nothing to say, so they have nothing to hide?

                                      You keep using the word “talk” but you’re being very dishonest in doing so. I think it’s clear from everything else you said that you don’t want to talk to them, you want to debate with them. Probably with the ultimate goal of convincing them to come around to your viewpoint. You want them to care (at least a little) about something you obviously care deeply about. There is a big difference between this, and merely engaging in a conversation with someone.

                                      I think it’s great that you care about and advocate privacy, I care about privacy too and have been known to go to rather unusual lengths to protect mine. But the bottom line is that you can’t make other people care about the same things you do. In trying, you waste your time, you waste their time, and you stand a good chance of setting them against your cause if you annoy them sufficiently with your missionary tactics.

                                      It’s quite likely that the person you tried to engage with really doesn’t care about his own privacy. That’s a bit sad, but it’s just the way it is. People care about different things. There are a great many things that people have told me I should care about on a daily basis, and most of them are probably not wrong. But the reality of life is such that each of us has only a limited number of fucks to give, even though there is an infinite set of things in this world to give a fuck about.

                                      The best thing to do is drop it and let the other person go about his life. Often, bringing up the topic and then letting it go is literally the best and only thing you can do. I’m guessing that this is someone close to you (a family member or close friend) or it wouldn’t have bothered you this much. If this is someone you care about and want a good relationship with, be there for him but don’t try to make your societal concerns his societal concerns or one day you will look back and notice that the bridge between you two has been reduced to a smoldering ruin. You might not know why at that point, but the realization will eventually dawn. Ask me how I know.

                                      1. 3

                                        There’s nothing wrong with debating, but it is true that too much can be very alienating.

                                        “You can’t make everyone understand/agree” is a hard lesson to learn. I’m still learning it. But it is important to learn.

                                      1. 4

                                        I don’t work in the security field but I have been dabbling around the edges of security for decades. My take: I don’t run Windows or Mac but if I did, I would never use any anti-virus that isn’t built into the OS. Reasons why:

                                        1. Anti-virus products substantially lower the performance of machines they are installed on. Not only do you have regular disk scans, but the anti-virus product itself becomes a choke-point for all data flowing into, out of, and within the machine.

                                        2. Anti-virus products actually increase your attack surface because they themselves can (and do!) contain vulnerabilities that bad guys can exploit to take control of your machine. They’re a very juicy target for hackers because all data and network traffic flows through them and they often have hooks into the OS kernel, TLS certificate store, etc.

                                        3. The companies who make anti-virus products are often inept. A few years back a major laptop manufacturer shipped an anti-virus product with their own self-signed certificate in the key store. Unfortunately, they also shipped the private key.

                                        4. The companies who make anti-virus products are often evil. The old adage “if it’s free, you are the product” applies. Like you mentioned, Avast was discovered to be selling all of its users’ web browsing activity to marketing companies. It is extremely unlikely that they are the only ones doing it.

                                        If you follow good security practices, keep your shit patched and up to date, and stay away from the shady side of the Internet, you aren’t guaranteed never to get hit with something, but the chances are very, very slim.

                                        1. 1

                                          If you follow good security practices

                                          Well… if you know karate, you don’t need to defend yourself with a pepper spray. Does that mean pepper sprays are useless for those who don’t know karate?

                                          but the chances are very, very slim.

                                          From my POV it’s similar to guns and vaccines. If everyone else is doing it, one individual can think it’s pointless to use guns or take vaccines, because that individual lives in a society that uses guns and vaccines; even if they don’t use them personally, they benefit from the security/health conditions created by other people. So if this one person stops using guns and vaccines altogether, most probably nothing will happen. Now, if everyone stopped using guns or vaccines…

                                          1. 1

                                            I don’t think the equivalences are apt here. There is also no guarantee that a third-party antivirus catches the bad thing. (I’m replying with the understanding that the author wrote the quoted sentences assuming the OS-provided antivirus is running.)

                                        1. 17

                                          Multi-head support on Linux has been terrible for a long time. What’s really aggravating is that for a while in the mid 2000’s, it was actually really great. Around the time that LCD monitors began to take hold, CRTs started getting downright cheap and any “serious” workstation had two or more big chunky CRTs sitting side by side. Not my battlestation, but here’s the evidence: https://www.linux-user.de/ausgabe/2001/12/044-dual/dual.jpg Eventually multiple 17” or 19” 4:3 CRTs replaced those.

                                          Both KDE and GNOME 2 handled multiple displays very well. Even better than Windows and Mac at the time. You could hot-plug monitors into your system and your desktop would magically expand to fill it. If you ran some applications on that second monitor and then disconnected it, the windows would automatically move back to the first. Reconnect the monitor again and the window moved right back to where it was. And all of this worked great when you added and removed displays even while the system was suspended.

                                          However, eventually a thing happened. Two things, really. 1) Widescreen LCD displays started getting cheap. (Why bother with the fuss of two monitors when you could get almost the same horizontal resolution in one?) 2) Users and developers flocked to portable laptops instead of big powerful desk-bound workstations. From my observations working in this field, most developers often work directly on their laptop with no external screen. A good percentage of those work in what I call “iPad mode”, where each application they run is maximized full screen, and instead of dealing with moving windows around, they just switch from app to app. Or, when they do plug into a screen, they close the laptop lid and just use the one screen.

                                          I feel that as a result of these cultural changes, multi-head Linux desktop configurations seem to have faltered. My workflow involves spending most of my time in dual-head mode on my laptop: the laptop screen plus an external monitor through a dock. When I need to go to a meeting, I undock the laptop and need the desktop to do the right thing, and the same when I come back and dock it again.

                                          KDE ostensibly supports multi-head configurations but last I checked it was a little buggy and not as flexible as I’d like. XFCE’s implementation has been buggy and annoying for years, although they do keep trying to improve it. Right now GNOME is the only one that seems to get it completely right, or at least right enough for me. (Which is annoying because I don’t really like GNOME that much!)

                                          1. 4

                                            In the 10+ years I’ve been running multi-head Linux (usually two, sometimes three monitors (3rd is the laptop screen)) I’ve had no big issues with it. Definitely not the issues I see colleagues having with Windows or OS X, which generally are hard to debug: it either works or it doesn’t on those OSs.

                                            However, I run a niche distro (Void Linux, Debian before that) and do not use desktop environments: I’ve always been on i3, StumpWM or EXWM and use xrandr to configure my monitors.

                                            I do realize this lacks the ease of use you might be looking for, but it’s very Linux ;-)

                                            1. 1

                                              Same; I haven’t seen any multi-head issues since ~2005 on Debian.

                                              1. 1

                                                use xrandr to configure my monitors.

                                                If you ever want a graphical frontend to xrandr, I do suggest arandr. It’s packaged too :) and can emit shell scripts that reapply the current screen configuration, for automation purposes.
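                                                A typical xrandr invocation for the docked-laptop case looks something like this; the output names are placeholders that vary per machine, so list yours with `xrandr -q` first:

                                                ```shell
                                                # Laptop panel on the left, external monitor to its right.
                                                # eDP-1 and HDMI-1 are examples; run `xrandr -q` for real names.
                                                xrandr --output eDP-1 --auto \
                                                       --output HDMI-1 --auto --right-of eDP-1
                                                ```

                                                This is also roughly the kind of script arandr will emit for you.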

                                              2. 3

                                                Some anecdata, and an idea: I use Cinnamon and have not had problems with restoring external-screen state in years, and I assume that is partly or mostly because of utilities shared with, or borrowed from, GNOME.

                                                If you dislike GNOME (or Cinnamon for that matter), but this feature is very important to you, you might want to check other GNOME-adjacent projects such as MATE or Pantheon.

                                                1. 1

                                                  I used MATE on and off for many years after GNOME 2 was abandoned. I would like to keep using it but it seems like the developers have too much on their plate just to keep up with GTK and other dependencies constantly changing out from underneath them. As a result, it seems to be getting increasingly more broken for me, unfortunately. I wish I could contribute to the project somehow but desktop development is not in my wheelhouse.

                                                  I used Cinnamon for a while years ago but haven’t tried it lately. I’ll have to give it a fair shake again. Thanks for the suggestion. I always seem to forget about it.

                                                  Right now I’m seeing if I can acclimate to the Ubuntu (GNOME 3) desktop with some tweaks. So far it’s quite stable. The dash-to-panel extension and the arc menu extension make it pretty close to palatable (for me) from a UI standpoint, but we’ll see if that holds in the long term.

                                                  1. 1

                                                    I feel like I was in a similar situation for a long time. I used Fluxbox and then Openbox for years, did the tiling thing for a bit, but ultimately decided I wanted a more traditional DE. I ended up ping-ponging between them all, never quite satisfied.

Cinnamon passed the “works for me” threshold for me in 2016 as a function of several factors, but a lot of it seems to coincide with the Linux Mint team giving up on matching every Ubuntu release. They began basing on the LTS release and were thus able to focus their resources on squashing lower-tier bugs, fixing experience-defining annoyances, improving UI response times, lowering memory usage, etc., instead of forever chasing this-or-that compatibility.

                                                2. 1

                                                  I use KDE with three monitors right now, have for many years, and haven’t noticed any relevant bugs at all. What bugs bother you?

                                                  1. 1

Most widescreen displays don’t really have more horizontal resolution. The newest ones are (roughly) half-height UHD displays (3840 pixels across, but only 1080 pixels down)

                                                    https://www.amazon.com/Samsung-LC43J890DKNXZA-CHG90-Curved-Monitor/dp/B07CT1T7HH

A proper UHD (“4k”) screen is 3840 x 2160. So the ultrawide has the same horizontal resolution as a non-ultrawide UHD screen, but less vertical resolution!

                                                  1. 7

                                                    I used to work for a web hosting company that deployed dozens of new servers every week. These were basically built from components with middle-tier consumer-level hardware, nothing at all enterprise about them so the quality control at the manufacturer was very likely not excellent. All of the boxes we would provision for a given hosting plan had identical hardware. Same motherboard, memory, CPUs, disks, and so on.

                                                    But I got to see first-hand how theoretically identical hardware could perform differently. Basically we would plug a box into the network, turn it on, and let it PXE boot into the OS installer. The install process took around 20 minutes. I say “around” because we would do two hosts at once and most of the time the two hosts would finish within seconds of each other but occasionally one would finish a minute or two ahead or behind the other.

                                                    The details of the setup escape me now because it was more than a decade ago at this point, but I’m sure I was able to rule out the PXE server and network as being a contributor to the variation somehow. I always meant to pursue it further to satisfy my own curiosity but I ended up leaving the company for a much much worse one (I would find out later) around that time.

                                                    1. 1

You might find this interesting. There are apparently a lot of causes to consider.

                                                      1. 2

                                                        Thank you, I do find that interesting and I will plan to read it over lunch.

                                                    1. 2

                                                      Pretty good writing and a fun story. Thanks!

                                                      1. 1

                                                        Thanks for reading!

                                                      1. 2

                                                        I had spell checking disabled on mine, but my current config file is descended from a very old version.

In Roundcube’s defense, [they don’t hide any of this](https://github.com/roundcube/roundcubemail/blob/master/config/defaults.inc.php#L801) and stop just short of telling you how to host your own spell checker.

                                                        However, it bugs me a little that they would have this enabled by default when every web browser on the planet has spell checking built into it these days.

                                                        1. 5

                                                          Nice overview of FreeBSD. I ran it on both server and desktop for most of the early 2000’s. We used jails to host a bunch of websites and mail servers. Jails were not easy to implement at the time and came with a few caveats. (I would have much rather seen FreeBSD, with its significant UNIX heritage, lead the recent containerization movement.)

But in spite of how neat I think FreeBSD is, Linux just became easier to manage thanks to wider vendor and hardware support, so it’s what I’ve been working with for at least the last 15 years. Linux then, and even more so now, also has the advantage of an extremely wide community: it was almost impossible to run into a problem that no one else had found first and blogged about. With FreeBSD, the only real place to ask a question was on one of the mailing lists. While there were a lot of people willing to help, there were also a few grouches there with a distinct RTFM-and-don’t-bother-me attitude. Hopefully things are better these days.

                                                          FreeBSD sets the kernel and the base system apart from third party packages. This is unique to FreeBSD and none of the other BSDs do that. I have always loved this about FreeBSD, but unfortunately (in my humble opinion) this has been changing since 2016.

                                                          I haven’t been paying much attention, can anyone summarize exactly what this is all about? Back when I used it, you just had the base system (kernel, userland, and a few third-party deps like gcc) and ports. What’s been changing?

                                                          EDIT: I read some of the author’s other posts and like a lot of people in the BSD community, unfortunately, there’s more than a little anti-Linux sentiment. “Linux is fragmented, Torvalds is a jerk, etc.” This holier-than-thou attitude is one of the things that turned me away from the FreeBSD community. FreeBSD isn’t “better” than Linux. They just have different cultures and different strengths and weaknesses.

                                                          1. 1
                                                            • My email. Have been for over a decade now. Postfix, Dovecot, and Roundcube. It’s not as difficult or scary as everyone makes it out to be.
                                                            • Nextcloud but mainly for calendaring. File synchronization seems to get most of the attention when people talk about Nextcloud but honestly its calendar is second-to-none in terms of features, UI, and stability. It just fucking works and works well. Works brilliantly with Apple devices, Android, Thunderbird, GNOME 3 calendar, etc.
                                                            • My neglected blog
                                                            • My private wiki, which I consider my second brain. Literally all of my notes and thoughts get recorded there.
                                                            • My private git repositories and cgit web UI to them.
                                                            1. 10

                                                              Great post! With regard to VSCode on ARM have you seen this page? Can’t test it since I don’t currently have ARM hardware and am restraining myself from buying a raspi 4 despite not having time to play with it :)

                                                              1. 3

                                                                https://github.com/VSCodium/vscodium claims to have ARM builds as well

                                                                1. 1

                                                                  VSCodium is pretty great. We use it at work because it’s infosec blessed and certifiably doesn’t phone home.

                                                                2. 2

                                                                  I just installed this on my Pinebook Pro and it seems to work flawlessly. Thanks for the tip!

                                                                  1. 2

                                                                    I have not, I wasn’t aware of that. I’ll check it out, thanks!

                                                                    1. 1

I’ve been watching the PB Pro rather closely myself. Its price point is low enough that I could use it as a ‘burner’ laptop if I needed to attend an event where I didn’t want to risk any of my actual important data, or travel to a country where I suspect they might scan my devices or inject malware.

                                                                      1. 2

                                                                        Your link worked like a charm! I’ll be updating the article accordingly.

                                                                  1. 2

                                                                    My 3 is currently sitting on my desk, I wanted to use it to have an easy always-on Linux box at hand, but working on it interactively is just too slow (might be the sd card or the usb stick, whatever).

                                                                    We have a ton at work where they are the typical prototype thing. Need something small with an ethernet port where you can write code/reuse code without doing embedded development? -> RasPi

                                                                    I might be repurposing it to run pihole soon.

                                                                    1. 3

                                                                      This has been my experience with them as well. I/O performance, even with the 3B+, has been very frustrating. I’m hoping the 4 solves a lot of that, but I’m holding off on buying one until a) the case situation improves; b) they fix the USB-C power connection to be standard-compliant; c) they fix the HDMI-out so that high resolutions don’t kill the wifi; and d) the NixOS people get a chance to catch up to it.

                                                                      1. 5

                                                                        I’ve been using Raspberry Pis for various things basically since they’ve been out and the one thing I have always done to make performance acceptable is to always use a USB drive for any kind of I/O. I only ever use the SD card for booting the OS. In the case of my backup server (which ran on a Pi 1 for a few years), even the OS root partition is on an external disk. SD cards were never designed to be general-purpose computing storage. They were designed for bulk reads and writes (for digital cameras, picture frames, and music players) and suck for everything else.
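For anyone wanting to try the same setup, moving the root filesystem to a USB drive on Raspbian has roughly this shape. This is a sketch, not a tested recipe: the device name /dev/sda2 and the mount point /mnt/usbroot are assumptions - verify yours with lsblk before copying anything.

```shell
# Assumes the USB drive's root partition is /dev/sda2, already
# formatted and mounted at /mnt/usbroot -- check with lsblk first!
sudo rsync -axHAX / /mnt/usbroot/                 # clone the root fs
# Point the kernel at the new root device:
sudo sed -i 's|root=[^ ]*|root=/dev/sda2|' /boot/cmdline.txt
# Also edit /mnt/usbroot/etc/fstab so / mounts from /dev/sda2,
# then reboot; the SD card now only serves /boot.
```

After that the SD card sees almost no writes, which also helps it live longer.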

In the case of the Pi 4: a) you can buy a good-quality FLIRC aluminum case that effectively dissipates heat, and it doesn’t cost much more than the official one; b) yeah, they dropped the ball on this one, but cheap USB-C power supplies with enough power for the Pi 4 are not exactly rare; c) this is only a problem at one specific resolution, and it’s not clear there is a fix, since cheap HDMI cables with poor shielding seem to be a major contributing factor.

                                                                        1. 1

                                                                          Thanks for the advice. Even with a Pi3B+ talking to a USB HDD, I was frustrated with performance. What did you use? Is flash-style mass storage much better?

                                                                          Re: HDMI cables: I thought it got bad at or above a certain resolution, but I might be wrong there.

                                                                          1. 2

                                                                            The 4 has USB3 ports which are substantially faster than the interface you get in the 3B+. Combine that with a USB solid-state drive and IO performance becomes acceptable.

                                                                    1. 4

                                                                      One with octopi and a camera as a 3D-printer print server. I have my “lab” in the attic so I can monitor print jobs from anywhere.

                                                                      1. 2

                                                                        I have a pi ‘clone’ that I use for the exact same purpose. It’s low power, and very capable of running octoprint & streaming 720p video for spying on prints from across the house.

                                                                        1. 2

                                                                          Which pi clone?

                                                                          1. 4

                                                                            Orange Pi Lite. I called it a clone mainly because it supports the same 40pin header as the rpi, but it is armhf (not aarch64) and it includes Wifi. Since it’s headless (I don’t need 3D accel) I can (and do) run the latest mainline kernel on Arch Linux ARM on it.

                                                                      1. 3

                                                                        I have several, all Model Bs:

                                                                        • One Pi 2 is my backup server, it is connected to an external drive via USB. It backs up the contents of my NAS/web server, my VPS, and gets backups pushed to it from various laptops and phones.

                                                                        • I have two Pi 3s in my living room, one for emulated videos games for my kids and one to run Kodi. I could probably figure out a way to combine them into one but haven’t bothered because these things are so dang cheap.

                                                                        • One Pi 4 runs a Minecraft server for my kids (and wife… and me)

• Some time soon, I’m going to buy another Pi 4 to replace an HP Chromebox that I’ve had running as my NAS for a few years. It has better performance and lower power draw.

• I can see myself replacing my firewall with a Pi 4 eventually, if (and only if) OPNsense (which is FreeBSD with a PHP web GUI) runs okay on it.

                                                                        • I keep telling myself I’m going to buy a gaggle of these to run a Kubernetes cluster on but I’m having a hard time coming up with an application that would actually justify it.

                                                                        1. 3

                                                                          Can any lobsters using HTTPie explain what drew them away from curl or what about curl pushed them to HTTPie?

                                                                          1. 8

                                                                            I haven’t been using it for long but for me the nicest thing so far is being able to see the whole response: headers, body, and all of it syntax-highlighted by default. The command-line UI is a little nicer as well, more clear and intuitive.

                                                                            It will probably not replace my use of curl in scripts for automation, nor will it replace my use of wget to fetch files.

Now if someone took this and built an Insomnia-like HTTP client usable from a terminal window, then we’d really have something cool.

                                                                            1. 1

I’m guessing you mean this Insomnia. Looks cool. Good example of an OSS product, too, given that most features people would want are in the free one.

                                                                            2. 4

                                                                              I use both depending on circumstance (more complex use cases are better suited for curl IMO), but the significantly simpler, shortened syntax for HTTPie as well as the pretty printing + colorization by default for JSON APIs is pretty nice.

                                                                              1. 3

                                                                                I wouldn’t say I’d been ‘pushed away’ from curl, I still use curl and wget regularly, but httpie’s simpler syntax for request data and automatic coloring and formatting of JSON responses makes it a great way to make quick API calls.
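To make the syntax difference concrete, here’s the same JSON POST in both tools (httpbin.org is just a stand-in test endpoint):

```shell
# curl: method, header, and body all spelled out by hand
curl -s -X POST https://httpbin.org/post \
     -H 'Content-Type: application/json' \
     -d '{"name": "alice", "admin": true}'

# HTTPie: POST and the JSON Content-Type are implied by the
# key=value items; := passes a raw (non-string) JSON value
http httpbin.org/post name=alice admin:=true
```

(`http :8080/path` is another nicety - the host defaults to localhost.)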

                                                                                1. 3

I like the short :8080 syntax for localhost.

                                                                                  1. 3

                                                                                    It’s all in how you like to work. Personally I enjoy having an interactive CLI with help and the like, and the ability to build complex queries piecemeal in the interactive environment.

                                                                                    1. 3

                                                                                      Sensible defaults and configurability.

                                                                                      1. 2

                                                                                        I need a command line HTTP client rarely enough that I never managed to learn curl command line flags. I always have to check the manual page, and it always takes me a while to find what I want there. I can do basic operations with HTTPie without thinking twice and the bits I need a refresher on — usually the syntaxes for specifying query parameters, form fields or JSON object fields — are super fast to locate in http --help.

                                                                                        1. 1

curl is the gold standard for displaying almost anything, including TLS and cert negotiation. I use bat mostly now, though, for coloured output and reasonable JSON support. https://github.com/astaxie/bat

                                                                                        1. 5

                                                                                          Is it just me or is that site actively anti-back-button?

                                                                                          1. 4

                                                                                            Not just you, there are three redirects for some reason. Which means you have to press the back button three times to get back to the “real” previous page. Unfortunately most browsers have never been very interested in quashing bad website behavior that breaks the browser’s own UI and I’ve never been able to figure out why.

                                                                                          1. 3

                                                                                            So my question now is, how much does this affect SHA-256 and friends? SHA-256 is orders of magnitude stronger than SHA-1, naturally, but is it enough orders of magnitude?

                                                                                            Also, it’s interesting to note that based on MD5 and SHA-1, the lifetime of a hash function in the wild seems to be about 10-15 years between “it becomes popular” and “it’s broken enough you really need to replace it”.

                                                                                            1. 8

                                                                                              […] the lifetime of a hash function in the wild seems to be about 10-15 years […]

                                                                                              That’s assuming that we’re not getting better at creating cryptographic primitives. While there are still any number of cryptanalysis techniques remaining to be discovered, at some point we will likely develop Actually Good hashes etc.

                                                                                              (Note also that even MD5 still doesn’t have a practical preimage attack.)

                                                                                              1. 3

                                                                                                It would stand to reason that we get as good at breaking cryptographic primitives as we get at creating them.

                                                                                                1. 1

                                                                                                  Why? Do you believe that all cryptographic primitives are breakable, and that it’s just a matter of figuring out in what way?

                                                                                                  1. 1

                                                                                                    I have no idea but that sounds like a GREAT theoretical math problem!

                                                                                                2. 2

                                                                                                  This seems likely, but we won’t know we’ve done it until 30-50 years after we do it.

                                                                                                3. 5

                                                                                                  In the response to the SHA1 attacks (the early, theoretical ones, not the practical ones) NIST started a competition, in part to improve research on hash function security.

There were voices in the competition arguing that it shouldn’t be finished, because during the research people figured out the SHA2 family was maybe better than they had thought. In the end those voices weren’t heeded and the competition was finished with the standardization of SHA3, but in practice almost nobody is using SHA3. There’s also not really a reason to think SHA3 is inherently more secure than SHA2; it’s just a different approach. Theoretically it may be that SHA2 stays secure longer than its successors.

                                                                                                  There’s nothing even remotely concerning in terms of research attacking SHA2. If you want my personal opinion: I don’t think we’re going to see any practical attack on any modern hashing scheme within our lifetimes.

Also, regarding the “10-15 years” timeframe: there is hardly any trend here. How many relevant hash functions did we have overall that got broken? It’s basically two (MD5 and SHA1). Cryptography just hasn’t existed long enough for there to be a real trend.

                                                                                                  1. 5

                                                                                                    As any REAL SCIENTIST knows, two data points is all you need to draw a line on a graph and extrapolate! :D

                                                                                                    1. 1

FWIW, weren’t MD2 and MD4 both used in real-world apps? (I think some of the old filesharing programs used them.) They were totally hosed long before MD5.

                                                                                                      1. 1

                                                                                                        I considered those as “not really in widespread use” (also as in: cryptography wasn’t really a big thing back then).

Surprising fact, by the way: MD2 is more secure than MD5. I think there’s still no practical collision attack. (Doesn’t mean you should use it - an attack is probably just a dedicated scientist and some computing power away - but it still cuts against a trend.)

                                                                                                        1. 1

                                                                                                          I have a vague (possibly incorrect) recollection of hearing that RIAA members were using hash collisions to seed broken versions of mp3 files on early file sharing networks that used very insecure hashing which might have been md4 (iirc it was one where you could find collisions by hand on paper). Napster and its successors had pretty substantial user bases that I’d call widespread. :)

                                                                                                    2. 2

The order of magnitude is a function of many years of cryptanalysis of the algorithm and the underlying construction. In this case (off the top of my head), this is mostly related to weaknesses in Merkle–Damgård, which SHA-256 only partially uses.

                                                                                                      1. 1

                                                                                                        How funny!

                                                                                                        What are your relevant estimates for the time periods?

                                                                                                        When was the SHA-256 adoption, again?

                                                                                                        1. 12

                                                                                                          Here’s a good reference for timelines: https://valerieaurora.org/hash.html

                                                                                                          1. 2

                                                                                                            That site is fantastic, thank you.

                                                                                                      1. 4

                                                                                                        While we’re on the topic, there are few things that make me as irrationally angry as Linux/Unix programs that write text files without a trailing newline. When you cat a file and your prompt does not end up on its very own line, either the file was written incorrectly or it’s not a text file. Full stop.

                                                                                                        There are two simple rules:

                                                                                                        1. A Unix text file is a file in which every line ends in a newline, even if there is only one line.
                                                                                                        2. If the last character in a file is not a newline, it is not a text file, regardless of the rest of its contents.
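Rule 2 is easy to check mechanically. A small POSIX sh sketch, relying on the fact that `$(...)` strips trailing newlines, so an empty result means the last byte was one:

```shell
#!/bin/sh
# Succeeds if the (non-empty) file's last byte is a newline,
# i.e. it satisfies rule 2 above.
ends_in_newline() {
    [ -s "$1" ] && [ -z "$(tail -c 1 "$1")" ]
}

printf 'hello\n' > /tmp/good.txt
printf 'hello'   > /tmp/bad.txt
ends_in_newline /tmp/good.txt && echo 'good.txt: proper text file'
ends_in_newline /tmp/bad.txt  || echo 'bad.txt: missing trailing newline'
```

Handy as a pre-commit check if your team’s editors disagree on this.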

                                                                                                        The biggest offenders in recent memory are Docker and some of its tools (although I think they might have fixed these) and VSCode. VSCode, presumably targeting Windows as the primary platform, does not by default add a trailing newline to the files it creates, even on platforms where it should. That’s fine, I can accept that since there’s an option to change it. But when you enable that option, it displays the trailing newline, unlike every other text editor on Unixish platforms. I actually submitted a bug about it but they marked it as a dupe and continue to debate it in other issues.

                                                                                                        1. 2

As someone who originally came from the Windows world, that was really hard to grasp for me. I was exasperated to learn that for example if I copy the exact sequence of characters that are in this textbox and put them just like that into a “text” file, it is not considered a text file in Unix. Similarly, I was very annoyed to learn that gedit simply adds the ‘\n’ character at the end of every file you save, even if you don’t want it to. At uni, we had to implement a tiny text-file-based database backend, and I almost went insane debugging weird invalid rows that were popping up in my database files. After some time, I was able to track it down: it happened whenever I was testing something and manually changed a row in gedit and saved the file. Gedit then added the newline, so my database interpreted this as another data entry row.

                                                                                                          I think the trailing newline should always be displayed, and in my eyes, it should never be added automatically by the editor. It is highly misleading and dangerous not to show the actual sequence of characters that is stored in the text file. Such magic is not in the spirit of Unix.

                                                                                                          1. 3

                                                                                                            Unix was basically designed as a text processing system first and an operating system second. So even though you can have a file with arbitrary bytes in it, the text file is a first-class citizen. And the only real rule is that in order to call it a text file, every line must end in a newline.

                                                                                                            Imagine it’s 40 years ago and you have written your master’s thesis as a series of text files in your Unix account at the university. It’s finally complete and you need to print it because email doesn’t really exist yet. In Unix, it’s one command:

                                                                                                            cat thesis/* > /dev/printer
                                                                                                            

                                                                                                            If the newline is missing from the last line of each file, then the last line of each file and the first line of the next will be printed on the same line. At best, it would make two paragraphs run together and appear as one. At worst, the total line length would exceed 80 characters (or whatever the width of the printer was) which would either mess up page alignment if the “extra” characters are shoved down a line, or the printer might even decide to save page alignment and drop the overage instead. All of these outcomes are bad one way or another.
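                                                                                                            A minimal sketch of that failure, with two stand-in “chapter” files:

                                                                                                            ```shell
                                                                                                            printf 'last line of chapter one' > ch1.txt     # no trailing newline
                                                                                                            printf 'first line of chapter two\n' > ch2.txt
                                                                                                            cat ch1.txt ch2.txt
                                                                                                            # -> last line of chapter onefirst line of chapter two
                                                                                                            ```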

                                                                                                            Another point to consider is that all Unix text processing utilities (sed, awk, grep, et al) assume that every line in a text file ends in a newline. They process files one line at a time and most have no concept of a “last line” in a file. They just keep processing until there is no more file left. If the trailing newline were optional, every one of these utilities would need its own special case to handle the ambiguity at the end of the file. (Which, these days, they might very well have, but back when Unix was being developed, simple rules and assumptions like this kept the code small, fast, and manageable.)
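                                                                                                            You can see this assumption in the shell itself: a `while read` loop only yields complete, newline-terminated lines, so an unterminated final line is silently dropped:

                                                                                                            ```shell
                                                                                                            printf 'one\ntwo' > notes.txt   # second line lacks the newline
                                                                                                            while read line; do echo "$line"; done < notes.txt
                                                                                                            # prints only "one": read returns failure on the unterminated last line
                                                                                                            ```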

                                                                                                            Fun aside: in C and Unix, displaying text or files is usually referred to as “printing” them. If you’ve written any C, you’ve almost certainly used the printf() function. This is because, as noted above, the most common output device in early Unix times was an actual paper printer.

                                                                                                            1. 3

                                                                                                              Unix has this convention because it simplifies things.

                                                                                                              A line is a sequence of bytes terminated by the newline byte. A file is a sequence of lines.

                                                                                                              The trailing newline should not be shown because it’s not editable. It’s part of the file format, not the file content.

                                                                                                              There’s an argument to be made that editors should maintain the format they’re given, but also in Unix land what you want is almost always the trailing newline. In any case you can configure good editors to do this (in Vim it’s the ‘fixendofline’ option, abbreviated ‘fixeol’).

                                                                                                              I’m sorry you had whitespace issues in your db testing, but you could have found them quickly with diff. Visual inspection is generally bad for telling if two files differ because of homoglyphs, encoding, and whitespace issues.
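                                                                                                              For instance, diff points out a missing trailing newline explicitly (GNU and BSD diff both annotate it), which visual inspection never would:

                                                                                                              ```shell
                                                                                                              printf 'alice|42\n' > expected.txt
                                                                                                              printf 'alice|42'   > actual.txt
                                                                                                              diff expected.txt actual.txt
                                                                                                              # the output ends with: \ No newline at end of file
                                                                                                              ```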

                                                                                                            2. 1

                                                                                                              Ok, I agree with you 90%, but what about templates? If I put a trailing newline in a template, it’ll get pasted everywhere. Unless we can change the convention to not count trailing newlines as content, I think we have to accept them sometimes not being there.

                                                                                                              My PS1 is more or less "--\n\n \u ..", with a -- to indicate where output actually stopped, and a couple of newlines for myself. It kind of hurts sometimes how much whitespace matters.

                                                                                                              1. 2

                                                                                                                I’m not sure, what kind of templates are you talking about? Can you give an example?

                                                                                                                I’m tempted to assume that your multi-line shell prompt is compensating for deficiencies in your other tools but I’ll withhold judgement until I know more about your situation. :)

                                                                                                                1. 1

                                                                                                                  I’ve got a scripts.html Hugo template that I want to insert basically <script src=".."></script> without introducing a newline (otherwise, how far would I indent? or are we dumping blank lines?). I looked in the docs and I could probably change it to {{ .. | chomp }} and use trailing newlines in source, but unless I have to I’ll leave the last \n off. I like the primary convention to be templates being copied as-is, trailing newlines being second only to that.

                                                                                                                  With my “deficiency-compensating” prompt, I can tell if a script’s output (or file, w/ cat) has 2 trailing spaces just by looking, no selection. It’s not worse.

                                                                                                            1. 2

                                                                                                              Does anybody know why this was done? Was this extension planned (after Jan 1 had been announced as the EOL) or did something unexpected happen?

                                                                                                              1. 11

                                                                                                                I don’t think it’s really right to say this was extended at all; April is the release of the final version of 2.7, whose code-freeze was today (Jan 1 2020). There’s some information on this more detailed page. The “sunset” date advertised is when development ends, but it takes a few months for them to release, apparently.

                                                                                                                I found this pretty confusing too!

                                                                                                                1. 4

                                                                                                                  Nothing was done; the chosen title for this submission is just wrong.

                                                                                                                  1. 3

                                                                                                                    I’ve suggested the original title for this submission.

                                                                                                                  2. 2

                                                                                                                    They are probably timing the release to coincide with PyCon.