1. 4

    The example given is quite extreme. Overall, I think verbosity isn’t necessarily a bad thing (look at vim for example).

    Also, if you’re into comments, just give an example of the input and output of the regular expression. That does most of the job, and if the reader knows even a bit of regex, they can figure the rest out, either on their own or with tools like regexr and regex101.
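    For instance, something like this (a hypothetical date-matching pattern, purely to show the comment style):

    ```python
    import re

    # Matches ISO dates like "2020-07-29".
    #
    # Example input:  "patched on 2020-07-29, see changelog"
    # Example match:  "2020-07-29"
    ISO_DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

    assert ISO_DATE.search("patched on 2020-07-29").group(0) == "2020-07-29"
    ```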

    1. 1

      regex101

      +1 for regex101: their test functions, and also storage for your tests, so you can embed a link to them as a comment.

      1. 1

        That might be a bad idea if you want your code to last longer than Regex101. Probably okay if it’s purely supplemental to your actual comment, though.

    1. 1

      An authenticated, local attacker could modify the contents of the GRUB2 configuration file to execute arbitrary code that bypasses signature verification.

      If the attacker can do this, they can also overwrite the whole bootloader with something that bypasses signature verification. If you can do this, your system is already compromised.

      1. 2

        No they can’t. Or rather they can, but if Secure Boot is on, the UEFI firmware will refuse to load the modified grub.efi image, so the system won’t boot.

        1. 1

          So, this vulnerability allows jailbreaking, but does not affect security against an attacker without a root password?

          1. 3

            How about an attacker armed with a local root privilege escalation vulnerability aiming to achieve persistence?

            1. 1

              https://xkcd.com/1200/

              To do what? They already have root for plenty of persistence. I mean, yeah, they can penetrate deeper. They can also exploit a FS vulnerability or infect HDD or some other peripheral.

              But that’s just not economical. In most cases, it’s the data they are after. Either to exfiltrate them or to encrypt them.

              In other cases they are after some industrial controller. Do you seriously believe there to be anyone competent enough to catch the attack but stupid enough not to wipe the whole drive?

              The only thing I can imagine TEE being used for is device lockdown.

            2. 1

              Not sure what you mean by jailbreaking - can you clarify? We’re talking about a laptop/desktop/server context, not mobile. Secure Boot does not necessarily imply that the system is locked down like Apple devices are. See Restricted Boot.

              If the attacker cannot write to /boot, then they can’t exploit this vulnerability. If the attacker has physical access this probably doesn’t hold true, regardless of whether they have a root password. If the attacker is local and has a root password or a privilege escalation exploit then this also doesn’t hold true, and can be used to persistently infect the system at a much deeper level.

        1. 1

          Does this affect VMs? Or is this a pure hardware vuln?

          1. 1

            Read the advisory. It depends on where your VM’s grub.cfg is coming from, which probably depends on exactly what kind of VM you’re using and how it’s booted. If the VM cannot persistently alter its grub.cfg, then no, it isn’t affected.

          1. 11

            As Tavis says himself (https://twitter.com/taviso/status/1288244033710481408):

            Yes, there are reasonable non-security reasons you might want it, I’m only opposed to the security arguments.

            Reproducible builds do not add much from a security perspective because to validate them, you have to do all of the work yourself and trust the inputs.

            They are, however, useful from a development, debugging, deployment and distribution perspective (as mentioned several times already in the comments), and he does not deny that.

            1. 4

              Reproducible builds do not add much from a security perspective because to validate them, you have to do all of the work yourself and trust the inputs.

              Nope, you can have multiple builders within the community who reproduce the build and sign off on it being identical. There’s a level of trust between “trust the vendor and their infrastructure entirely” and “build everything yourself”, and it is precisely this level that I have seen promoted by the reproducible builds people. :-)
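              Mechanically, the community-builder check is just digest comparison; a contrived sketch (builders and artifacts are made up, and a real setup would also verify each builder’s signature):

              ```python
              import hashlib

              def digest(artifact: bytes) -> str:
                  """What each independent builder publishes (and signs)."""
                  return hashlib.sha256(artifact).hexdigest()

              vendor_binary = b"\x7fELF..."  # the artifact you downloaded
              # Hypothetical: two community builders rebuilt from source
              # and published the digests of what they got.
              attestations = {
                  "builder-a": digest(b"\x7fELF..."),  # pretend rebuild
                  "builder-b": digest(b"\x7fELF..."),
              }

              agreeing = [b for b, d in attestations.items()
                          if d == digest(vendor_binary)]
              print(f"{len(agreeing)}/{len(attestations)} builders reproduce it")
              ```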

              1. 2

                F-Droid does this automatically. If upstream provides an APK, and F-Droid can exactly reproduce that APK, then F-Droid will distribute the one it built with the original’s signature applied in addition to F-Droid’s signature.

              2. 2

                And yet.

                Such builds don’t prevent your source code from being malicious. They do make it harder for a compromised toolchain to go undetected by random users. They also help users verify that the source they see is the source the binary was built from.

                If you build the same artifact twice and get different results, you learn nothing. If you build it twice and get the same result, you know the toolchain did the same things both times, and that’s comforting.

                1. 1

                  Reproducible builds do not add much from a security perspective because to validate them, you have to do all of the work yourself and trust the inputs.

                  Which isn’t what they are writing though? Tavis claims the following:

                  Now if the vendor is compromised or becomes malicious, they can’t give the user any compromised binaries without also providing the source code. […] Regardless, even if we ignore these practicalities, the problem with this solution is that the vendor that was only trusted once still provides the source code for the system you’re using. They can still provide malicious source code to the builders for them to build and sign.

                  So this is largely from only one perspective, and that is proprietary vendors, where the pristine source comes only from the vendor publishing the binaries themselves. This holds for proprietary vendors, but not for open-source distributions, as pointed out earlier in this comment section.

                  1. 11

                    Better than nothing perhaps, but the least secure of all 2FA methods (even in your link), as well as being cloneable/hijackable and vulnerable to “vendor social engineering”. Not to mention it requires handing your phone number off to a company, to increase your targeting profile, to be added to txt spam lists, and/or sold to other companies so they can advertise to (spam) you.

                    Hardware tokens, push-message-based, even TOTP: all are superior. Why even spend the dev cycles implementing something marginal like SMS-2FA, paying for text messaging (and/or integrating with an SMS vendor), when you can just do something better instead (and arguably more easily)?

                    1. 5

                      Not to mention it requires handing your phone number off to a company, to increase your targeting profile, to be added to txt spam lists, and/or sold to other companies so they can advertise to (spam) you.

                      It’s also a pain in areas with poor or intermittent mobile coverage.

                      1. 1

                        The criticism in the article seems to be mostly around phishing attacks. Are these other approaches more resilient to phishing? With the suggestion of randomized passwords as the best alternative, the author seems to be against any kind of 2FA.

                        1. 5

                          Are these other approaches more resilient to phishing? With the suggestion of randomized passwords as the best alternative, the author seems to be against any kind of 2FA.

                          U2F and WebAuthn categorically prevent phishing by binding the domain into the hardware device’s challenge response.
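                          Roughly, the server-side half of that binding looks like the sketch below (not a full WebAuthn verification, which also checks the signature over the authenticator data; the origin value is a hypothetical relying party):

                          ```python
                          import json

                          EXPECTED_ORIGIN = "https://example.com"  # hypothetical relying party

                          def origin_ok(client_data_json: bytes, expected_challenge: str) -> bool:
                              # The browser, not the page, writes the origin into clientDataJSON,
                              # so a phishing site on another domain can't forge this field.
                              data = json.loads(client_data_json)
                              return (data.get("origin") == EXPECTED_ORIGIN
                                      and data.get("challenge") == expected_challenge)
                          ```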

                          1. 5

                            The author also states:

                            If you also want to eliminate phishing, you have two excellent options. You can either educate your users on how to use a password manager, or deploy U2F, FIDO2, WebAuthn, etc. This can be done with hardware tokens or a smartphone.

                            So I don’t think the author is against 2FA in general, just specifically SMS-2FA.

                            Also note the first suggestion of using a password manager is, in my opinion, a bit nuanced, because “how to use a password manager” includes having the manager fill in credentials for you, and the password manager restricting this to only on the correct domain defined for the password.
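                            That domain restriction is the anti-phishing part; a contrived sketch of the idea (vault contents and URLs are made up):

                            ```python
                            from urllib.parse import urlparse

                            vault = {"example.com": ("alice", "hunter2")}  # hypothetical entry

                            def autofill(page_url: str):
                                # Fill only when the page's host matches the entry's domain,
                                # which is what defeats look-alike phishing domains.
                                host = urlparse(page_url).hostname or ""
                                return vault.get(host)

                            assert autofill("https://example.com/login") == ("alice", "hunter2")
                            assert autofill("https://examp1e.com/login") is None  # look-alike fails
                            ```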

                            Are these other approaches more resilient to phishing?

                            I would say U2F, FIDO2, and WebAuthn are far more resilient to phishing, yes.

                            “A good password manager”? As I mentioned above, I feel this one is more tenuous. I personally feel users could easily be tricked into copy/pasting credentials out of a password manager, since users have the expectation that software in general is kind of clunky and broken, so “it must not be working right, so I’ll do it manually”. As such, I’m not sure I necessarily agree that just using a good password manager is sufficient to prevent phishing. It would be interesting to see stats on it though, as my hunch is just that and has no scientific basis or real evidence behind it.

                            TOTP as a second factor is presumably just as vulnerable to phishing as a password alone, but as an extra step that sits relatively out of band from the normal credential flow, it seems useful for preventing automated (non-phishing) attacks. In my opinion better than SMS-2FA, but nowhere near as good as U2F, FIDO2, or WebAuthn.
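                            For contrast, here’s why TOTP is phishable: the whole mechanism is a time-based HMAC over a shared secret, with nothing binding the code to a domain, so a proxying phishing page can simply relay it. A minimal RFC 6238 sketch (the secret is a made-up example):

                            ```python
                            import base64, hashlib, hmac, struct, time

                            def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
                                key = base64.b32decode(secret_b32, casefold=True)
                                counter = struct.pack(">Q", int(time.time()) // period)
                                mac = hmac.new(key, counter, hashlib.sha1).digest()
                                offset = mac[-1] & 0x0F
                                code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
                                return str(code % 10 ** digits).zfill(digits)

                            print(totp("JBSWY3DPEHPK3PXP"))  # valid for anyone who asks, on any site
                            ```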

                            Push-message-based tokens (like Okta uses, for example) are presumably (caveat: I’m not a security professional) as secure as the weakest link among the vendors involved: the push vendor (e.g. Google, Apple) and the token vendor (e.g. Okta). They generally require server-side integration/credentials to get the vendor to invoke the push, and are typically device-locked.

                            1. 2

                              “A good password manager”? As I mentioned above, I feel this one is more tenuous. I personally feel users could easily be tricked into copy/pasting credentials out of a password manager, since users have the expectation that software in general is kind of clunky and broken, so “it must not be working right, so I’ll do it manually”.

                              I can’t count the number of times I have copy/pasted a password because the Firefox password manager saved the credentials for one login form on the site, but then didn’t autofill them on a different form. Maybe that means that it doesn’t count as a “good password manager” though? I guess I should be filing bugs on these cases anyway.

                              1. 2

                                Same. I also have a few sites that don’t even work well with 1password (generally considered pretty decent). Some sites also seem to go out of their way to make password managers not work. Why?! ;_;

                        2. 3

                          Good link!

                          I posted this because I think it’s interesting to see articulated arguments for a position I’m surprised by.

                          1. 6

                            Google wants to know our phone numbers. From that research, we can see that a phone number is effective in deterring some attacks. The question I would ask is: can we achieve similar security through other means? For example, even Google shows that on-device prompts or security tokens are better than SMS.

                            So please, if you think you must, offer SMS. But also offer other 2FA options and especially don’t force collect phone numbers if you can avoid it.

                        1. 5

                          I don’t want to believe this, as my gut feel is that reproducible builds are always a net good, but I don’t see a hole in the argument. Maybe the “on trusting trust” compiler backdoor?

                          1. 8

                            I’ve never heard of reproducible builds being advocated for in a proprietary context. That does legitimately seem like a flawed argument to me.

                            But it has a wider benefit in an open source context than the author says. If your goal is “make sure I absolutely have a trusted binary” then it doesn’t help, just as the author says. But if your goal is, “make it less likely that I’ve been given a malicious binary” - or in other words, “don’t make this binary fully trusted, but make it more trusted” - then it helps.

                            Why? For the same reason that using freely available source code helps. You trust that there are independent experts reviewing the code for flaws, and that if any are found they’ll be fixed and if upstream won’t fix them there’ll be a huge stink about how Foobar Project refused to fix a security vulnerability, and you’ll read about it on e.g. Lobsters. Likewise, if the build is reproducible you trust that independent experts are trying to reproduce the binary and are going to sound the alarm if they can’t. And since that hasn’t happened, you have greater trust in the binary. You can’t trust it completely, but you can trust it more. (Of course, whether anyone is actually performing this verification independently is another matter, and there are plenty of examples in FOSS where this idea has broken down in practice. But that’s a separate matter.)

                            I also don’t really buy the argument about bugdoors. It makes a lot of sense, but it’s risky for the attacker. Not in the sense that they might get caught, but in the sense that if their goal is to have a persistent backdoor, it might get fixed! You can claim it was a mistake, but your backdoor is gone either way. It’s not as reliable in the long term as distributing a malicious, tampered-with binary, but with reproducible builds the attacker is forced to not use the binary option anymore.

                            (There are more problems, of course. For example, if the attacker only wants to target a few users among thousands, and they control the update server and signing keys, then they can make that attack undetectable by serving the legitimate binary to everyone who’s not targeted, including independent verifiers. But that’s not what the article was saying. Plus, note that even here you’ve already raised the bar to “control the update server” which is a much more specific requirement than “control some part of the build pipeline”, and even this problem can be fixed - hopefully - with something like binary transparency.)

                            1. 5

                              I think the main thing is that reproducible builds aren’t just about security value. Having a deterministic system is valuable in general because it means that you’re looking at y = f(x) instead of y = f(x, some_random_unknown_garbage).
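                              A contrived illustration of the difference (the “build” is fake, just a hash over its inputs):

                              ```python
                              import hashlib, time

                              def build(source: bytes, timestamp: float) -> bytes:
                                  # The timestamp is the "some_random_unknown_garbage" input:
                                  # it varies per run, so the output does too.
                                  return hashlib.sha256(source + repr(timestamp).encode()).digest()

                              print(build(b"src", time.time()) == build(b"src", time.time()))  # almost surely False
                              print(build(b"src", 0.0) == build(b"src", 0.0))                  # True: inputs pinned
                              ```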

                              Like, this focus on the distribution problem is only part of the problem, and I’ve always heard about reproducible builds in the context of stability, much more so than security.

                              1. 2

                                I’d be curious about any responses to this reply. I don’t see how it applies to open source reproducibility work, and I also think there are other motivations for reproducibility, which Google already has (and has had for a long time):

                                https://lobste.rs/s/ha8c42/you_don_t_need_reproducible_builds#c_w2aove

                              1. 4

                                As a matter of medical ethics, I’m not convinced that creating and marketing this drug in particular, or in general any drug that has depression and suicidal tendencies as side effects, is necessarily the wrong thing to do. Lots of useful drugs have serious side effects, and while it’s important for both doctors and patients to be aware of those side effects when making decisions about whether to use the drug, I don’t think the existence of those side effects implies that no one should use the drug. That’s a complicated medical question that depends on how likely those side effects are, exactly how bad they are, what ailments the drug purports to treat, how effectively it does so, and how bad the untreated effects of that ailment are (additionally, I wouldn’t assume that because one person I heard about in the news committed suicide while taking this drug, the drug was specifically responsible for that suicide, or that even if it was, that suicide necessarily outweighed the aggregate benefits of using the drug to treat an ailment). In any case, it’s certainly not a question that programmers, as opposed to doctors and medical regulators, have any special insight about.

                                Part of the reason the societies we live in have things like medical ethics laws, governmental regulatory organizations like the FDA, drug regulations, and so on, is because the ethical questions about when it is and is not okay to market and sell a drug are complicated and require medical domain-specific knowledge as well as a shared conception of the common good to answer. It’s a very reasonable position to follow the letter of the law unless and until some knowledgeable medical authority, or the evidence of my own or my community’s experience with the drug, convinces me that the regulatory system around this specific drug ought to be changed. And I feel like, say, a doctor writing a blog might convince me that more people would die without this drug than with it, just as easily as they might convince me that the suicidal thoughts side-effect was too serious and the medical regulatory establishment erred in letting this drug be sold at all.

                                Given that I’m not convinced the drug company actually acted unethically, or that the laws that permitted them to sell and market this drug should be changed, why should I expect a programmer to refuse to write code on behalf of such a drug company?

                                1. 34

                                  not convinced the drug company actually acted unethically,

                                  Presenting a neutral-appearing “find the best drug for you!” informational website and then always giving the same answer with a fake quiz doesn’t sound like a bald-faced lie to you?

                                  The issue here isn’t that companies are marketing their drugs; it’s that they’re being deceitful lying twats about it. You seem to have completely missed the point. As you mentioned, drugs and side-effects are complicated and hard, which is why you shouldn’t create fake “informational” websites with fake quizzes to market your drugs.

                                  1. 13

                                    Agree completely with your point, I think it’s quite clearly unethical behaviour.

                                    The issue here isn’t that companies are marketing their drugs

                                    It could be one of them. It strikes me as incredibly odd when I see American TV: not only am I bombarded with drug ads, they are targeted at patients rather than medical professionals.

                                    1. 1

                                      I’ve been thinking about this too. It’s also super weird to me that there’s the whole list of side effects at the end… shouldn’t my doctor be telling me that and not the ad? What if I didn’t see the ad? I’m sure there’s some baroque liability reason they have to do it, but it’s dumb. (American here, FWIW.)

                                  2. 5

                                    I haven’t worked on code that’s specifically related to drugs and medication, but I have worked on medical devices so I guess my opinion is… partly informed? :). I don’t have experience bringing drugs to market but I’m somewhat familiar with the regulatory system involved.

                                    There is no question that drugs with certain side effects should be allowed. Practically all of them have side-effects. Even “natural” medicine, like various plants and whatever, has side-effects and can interact with other drugs, “natural” or not. That’s why the side effects are listed in the fine print – so that physicians and patients can make an informed decision about these things, and so that reactions can be properly supervised. How well the fine print works is another debate, but I think we can safely argue that the benefit of some substances can outweigh the risk of side-effects, as long as administration is properly supervised and so on. For example, if a drug can cause depression and anxiety, a doctor can recommend close monitoring of a patient’s mental state, by another doctor or even a psychiatrist if necessary, especially if they lack a support system (if they live alone, secluded etc.) or have a history of depression. Or they may avoid that drug altogether if possible.

                                    However, uninformed self-medication is also a thing. That’s part of the reason why some drugs are only issued based on prescriptions, and why you’re supposed to keep some of them out of children’s reach and so on. It’s a very real problem, especially when it’s related to drugs for afflictions that carry some form of social stigma (mental illnesses, STDs), or for particularly difficult age ranges, where people have difficulties seeking help. Depressed teenagers, for example, are not very likely to go to adults for help, especially if their depression is fueled by adults in their life, like abusive parents.

                                    “Proving that you have a prescription” over the Internet was pretty easy to do twenty years ago (and I think it still is in some cases). You can often use someone else’s. A teenage girl can generally use her mother’s prescriptions pretty easily, for example.

                                    Now, of course, there’s only so much you can do to prevent self-medication by uninformed people. At the end of the day, if people think it’s a good idea, they’ll get their stuff one way or another. You can’t keep all drugs under lock and key in a safe and not hand them out unless someone brings in their doctor and three independent witnesses to confirm that they need the drug and that the prescription they have is real. You print out (or the FDA makes you print out) big warnings, the drug can only be sold to prescription holders under specific conditions, etc. There’s a point past which you can’t do much to prevent self-medication.

                                    But acting in a manner that encourages self-medication – deliberately circumventing a physician’s ability to supervise medication and the patient’s evolution – is absolutely unethical. It’s akin to going into a hospital, leaving a bunch of pills on the table, and telling people to help themselves if they want, as long as they don’t tell their doctors about it. Doing so against a vulnerable age group makes it worse, too. Especially if the targeting deliberately exploits a prevalent vulnerability (edit: someone here mentioned Accutane – that was my guess, too, but I was hesitant to call it out, since the original author didn’t, and I don’t know that much about what was allowed in Canada twenty years ago. Accutane was a drug that was meant to help with acne. Yep.)

                                    1. 4

                                      Based on timing, target audience, and the issues it caused, it sounds an awful lot like Accutane. That’s a since-discontinued drug for treating acne.

                                      That aside, I think I would expect myself to refuse to write the code in question because:

                                      Remember, this website was posing as a general information site. It was not clearly an advertisement for any particular drug.

                                      and

                                      “Well, it seems that no matter what I do, the quiz recommends the client’s drug as the best possible treatment. The only exception is if I say I’m allergic. Or if I say I am already taking it.”

                                      and

                                      “Yes. That’s what the requirements say to do. Everything leads to the client’s drug.”

                                      Quite apart from any ethical judgement about what suicide frequency is acceptable as a side-effect for an acne drug, I’d expect myself to refuse to write code whose purpose is to deceive the public into making a particular health decision for the benefit of my client.

                                      1. 5

                                        I’d expect myself to refuse to write code whose purpose is to deceive the public into making a particular health decision for the benefit of my client.

                                        I would too. But I also remember how I was a lot more naive when I started coding, and I would not be surprised if I wouldn’t have picked up on this, just like the author. You expect that, if this was not okay, someone else would have stepped in already. It comes as a shock when you find out that “someone else” should have been you.

                                        1. 3

                                          Yes. I should have said “I would now expect myself…”

                                          I can’t claim with any certainty that I would have caught on 20 years ago.

                                        2. 2

                                          Somewhat off-topic, but Accutane isn’t discontinued, although according to Wikipedia the original manufacturer no longer produces it - was that what you meant?

                                          It is highly controlled in (at least) the US though. You have to get monthly blood tests to make sure it isn’t killing your liver, and if you can get pregnant you have to be on (IIRC) at least two forms of birth control. The latter is why it’s so controlled - it causes really severe birth defects.

                                          1. 1

                                            I didn’t realize anyone had picked up the manufacturing. Wow. I remembered it as having gone away.

                                            1. 1

                                              Yeah. I took it in 2015, which is how I know. I switched away from Accutane halfway through though because it was cheaper to go with a generic version, which was marketed as something else but had the same underlying active ingredient (isotretinoin) - maybe that’s what you’re thinking of? When I was reading about it last night, Wikipedia said the original manufacturer shut down production because cheaper generic versions had become available (and because of lawsuit settlements over side effects…), so it’s unclear to me whether in 2020 it’s still actually available under the brand name “Accutane”.

                                      1. 6

                                        Deduplication (heh, my phone wants to correct to “reduplication”??) in ZFS is kind of a mis-feature that makes it easy to destroy performance. (I’ve had some painful experiences with it on a small mail server…) Pretty much everyone recommends not enabling it ever. So indeed it’s not a realistic concern, but it is fun to think about.

                                        It shouldn’t be that hard to add a setting to ZFS that would only show logicalused to untrusted users, not used.

                                        1. 9

                                          For folks not familiar with ZFS, just want to expand on what @myfreeweb said: “pretty much everyone” even includes the ZFS folks themselves. The feature has a bunch of warnings all over it about how you really need to be sure you need deduplication, and really you probably don’t need it, and by the way you can’t disable it later so you better well be damn sure. btrfs’ implementation, though, does not AFAIK suffer from the performance problems ZFS’ does because btrfs is willing to rewrite existing data pretty extensively, whereas ZFS is not because this operation (“Block Pointer Rewrite”) would among other problems break a bunch of the really smart algorithms they can use to make stuff like snapshot deletion fast. A btrfs filesystem after offline deduplication is not fundamentally different from the same filesystem before. ZFS deduplication fundamentally changes the filesystem because it adds a layer of indirection.

                                          logicalused seems like a good idea. It doesn’t fix the timing side channel, though. I think you’d want to keep a rolling average of how long recent I/O requests took to service, plus the standard deviation. Then pick a value from that range somehow (someone better at statistics than me could tell you exactly how) and don’t return from the syscall for that amount of time. Losing the performance gain from a userspace perspective is unavoidable since that’s the whole point, but you can use that time (and more importantly, I/O bus bandwidth) to service other requests to the disk.
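                                          As a toy userspace sketch of that idea (the choice of distribution is exactly the part that needs someone better at statistics):

                                          ```python
                                          import random, statistics, time
                                          from collections import deque

                                          class LatencyPadder:
                                              """Pad fast (deduplicated) operations toward the recent
                                              latency distribution, so timing leaks less."""
                                              def __init__(self, window=256):
                                                  self.recent = deque(maxlen=window)

                                              def pad(self, elapsed):
                                                  if len(self.recent) >= 2:
                                                      mu = statistics.mean(self.recent)
                                                      sigma = statistics.stdev(self.recent)
                                                      # Crude: sample a plausible duration, sleep out the rest.
                                                      target = max(elapsed, random.gauss(mu, sigma))
                                                      time.sleep(target - elapsed)
                                                  self.recent.append(elapsed)  # record the true service time
                                          ```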

                                          (Side note: my phone also wanted to correct to “reduplication”. Hilarious. Someone should stick that “feature” in a filesystem based on bogosort or something.)

                                          1. 2

                                            It shouldn’t be that hard to add a setting to ZFS that would only show logicalused to untrusted users, not used.

                                            I think that’s harder than you think. The df(1) command will show you free space, and I’m not sure you can set a quota that hides whether a file was deduplicated. Also, a user can use zpool(8) to see how much space is used in total.

                                            However, I hardly think this is going to be a problem with ZFS, because, as you say, “Pretty much everyone recommends not enabling it ever”. I have never experienced a use case where deduplication in ZFS would be advantageous for me, on the contrary; ZFS gets slower because it has to look up every write in a deduplication table, and it uses more space because it has to keep a deduplication table. If you enable deduplication on ZFS without thorough research, you will be punished for it with poor performance long before security becomes an issue.

                                            1. 2

                                              I mean reporting logicalused everywhere, like df, and hiding the zpools.

                                              The pools would already be hidden if it’s e.g. a FreeBSD jail with a dataset assigned to it.

                                          1. 6

                                            It is simple (and cheap) to run your own mail server; they even sell them pre-baked these days, as the author wrote.

                                            What is hard and requires time is server administration (security, backups, availability, …) and $vendor black-holing your emails because it’s Friday… That’s not so hard that I’d let someone else read my emails, but YMMV. :)

                                            1. 8

                                              not so hard that I’d let someone else read my emails

                                              Only if your correspondents also host their own mail. Realistically, nearly all of them use gmail, so G gets to read all your email.

                                              1. 4

                                                I have remarkably few contacts on GMail, so G does not get to read all my email, but you’re going to say that I’m a drop in the ocean. So be it.

                                                1. 4

                                                  you’re going to say that I’m a drop in the ocean. So be it.

                                                  I don’t know what gave you that impression. I also host my own email. Most of my contacts use gmail. Some don’t. I just don’t think you can assume that anyone isn’t reading your email unless you use pgp or similar.

                                                  1. 1

                                                    Hopefully Autocrypt adoption will help.

                                                    1. 2

                                                      This is the first time I’m hearing of Autocrypt. It looks like just a wrapper around PGP encrypted email?

                                                      1. 1

                                                            This is a practice described by a standard that helps spread the use of PGP by flowing keys all around.

                                                            What if every cleartext email you received already had a public PGP key attached, and everyone’s mail client had its own key and did the same, sending its key along with every new cleartext mail?

                                                            Then you could reply to anyone with a PGP-encrypted message, and write new messages to everyone encrypted. That would bring a first level where every communication is encrypted, under a not-so-strong trust model compared to exchanging keys by whispering every byte of the public key in base64 into someone’s ear, alone in Alaska, but as a first step, it brings many more people to PGP.

                                                        I think that is the spirit, more info on https://autocrypt.org/ and https://www.invidio.us/watch?v=Jvznib8XJZ8
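                                                            Mechanically it’s just a mail header; roughly like this sketch with Python’s stdlib (the keydata is a placeholder, not a real OpenPGP key):

                                                            ```python
                                                            import base64
                                                            from email.message import EmailMessage

                                                            msg = EmailMessage()
                                                            msg["From"] = "alice@example.org"
                                                            msg["To"] = "bob@example.net"
                                                            # Ship the sender's public key with every cleartext mail,
                                                            # so the recipient's client can encrypt its reply.
                                                            keydata = base64.b64encode(b"<public key bytes>").decode()
                                                            msg["Autocrypt"] = f"addr=alice@example.org; prefer-encrypt=mutual; keydata={keydata}"
                                                            msg.set_content("hello")
                                                            ```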

                                                        1. 2

                                                          Unless I misunderstand, this still doesn’t encrypt subject lines or recipient addresses.

                                                          1. 1

                                                            Like you said. There is an ongoing discussion for fixing it for all PGP at once, including Autocrypt as a side effect, but this is a different concern.

                                                2. 1

                                                  Google gets to read those emails, but doesn’t get to read things like password reset emails or account reminders. Google therefore doesn’t know which email addresses I’ve used to give to different services.

                                                3. 4

                                                  Maybe I’m just out of practice, but last time I set up email (last year, postfix and dovecot) the “$vendor black-holing your emails” problem was the whole problem. There were some hard-to-diagnose problems with DKIM, SPF, and other “it’s not your email, it’s your DNS” issues that I could only resolve by sending emails and seeing if they got delivered, and even with those resolved emails that got delivered would often end up in spam folders because people black-holed my TLD, which I couldn’t do anything about. As far as I’m concerned, email has been effectively embraced, extended, and extinguished by the big providers.
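                                                        For what it’s worth, the DNS half can at least be self-checked without sending anything; a sketch using the third-party dnspython package (the domain is a placeholder):

                                                        ```python
                                                        import dns.resolver  # pip install dnspython

                                                        def txt(name):
                                                            try:
                                                                return [b"".join(r.strings).decode()
                                                                        for r in dns.resolver.resolve(name, "TXT")]
                                                            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                                                                return []

                                                        domain = "example.com"  # your mail domain
                                                        print("SPF:  ", [r for r in txt(domain) if r.startswith("v=spf1")])
                                                        print("DMARC:", [r for r in txt("_dmarc." + domain) if r.startswith("v=DMARC1")])
                                                        ```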

                                                  1. 4

                                                    This was my experience when I set up and ran my own email server: everything worked perfectly end to end, success reports at each step … until it came time to the core requirement of “seeing my email in someone’s inbox”. Spam folder. 100% of the time. Sometimes I could convince gmail to allow me by getting in their contact/favorite list, sometimes not.

                                                    1. 1

                                                            I wonder how much this is a domain reputation problem. I’ve hosted my own email for well over a decade and not encountered this at all, but the domain that I use predates gmail and has been sending non-spam email for all that time. Hopefully Google and friends are already trained that it’s a reputable one. I’ve registered a different domain for my mother to use more recently (8 or so years ago), and she emails a lot of far less technical people than most of my email contacts and has also not reported a problem, but maybe the reputation is shared between the IP and the domain. I do have DKIM set up but I did that fairly recently.

                                                      It also probably matters that I’ve received email from gmail, yahoo, hotmail, and so on before I’ve sent any. If a new domain appears and sends an email to a mail server, that’s suspicious. If a new domain appears and replies to emails, that’s less suspicious.

                                                      1. 2

                                                              Very possible. In my case I’d migrated a domain from a multi-year G-Suite deployment to a self-hosted solution with a clean IP per DNSBLs, SenderScore, Talos, and a handful of others I’ve forgotten about. Heck, I even tried to set up the DNS pieces a month in advance – PTR/MX, add to SPF, etc. – on the off chance some age penalty was happening.

                                                        I’m sure it’s doable, because people absolutely do it. But at the end of the day the people I cared about emailing got their email through a spiteful oracle that told me everything worked properly while shredding my message. It just wasn’t worth the battle.

                                                  2. 3

                                                    That’s not so hard that I’d let someone else read my emails

                                                    Other than your ISP and anyone they peer with?

                                                    1. 2

                                                              I have no idea how bad this is, to be honest, but s2s communications between/with major email providers are encrypted these days, right? Yet, if we can’t trust the channel, we can decide to encrypt our communication too, but that leads to other issues unrelated to self-hosting.

                                                              Self-hosting stories with titles like “NSA proof your emails” are probably a little oversold 😏, but I like to think that [not being a US citizen] I gain some privacy by hosting those things in the EU. At least, I’m not feeding the giant ad machine, and just that feels nice.

                                                      1. 7

                                                        I’m a big ‘self-hosting zealot’ so it pains me to say this…

                                                        But S2S encryption on mail is opportunistic and unverified.

                                                                What I mean by that is: even if you configure your MTA to use TLS and prefer it, it really needs to be able to fall back to plaintext, given the sheer volume of providers who will be unable both to receive and to send encrypted mail, as their MTA is not configured to do encryption.

                                                                It is also true that no MTA I know of will actually verify the TLS CN field or verify a CA chain of a remote server.

                                                        So, the parent is right, it’s trivially easy to MITM email.
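                                                                The “opportunistic” part is visible even in a toy client; with Python’s smtplib it looks roughly like this (host and addresses are placeholders):

                                                                ```python
                                                                import smtplib, ssl

                                                                ctx = ssl.create_default_context()
                                                                ctx.check_hostname = False   # exactly what MTAs (don't) do
                                                                ctx.verify_mode = ssl.CERT_NONE

                                                                with smtplib.SMTP("mx.example.net", 25) as s:
                                                                    s.ehlo()
                                                                    if s.has_extn("starttls"):   # encrypt if offered...
                                                                        s.starttls(context=ctx)
                                                                        s.ehlo()
                                                                    # ...otherwise carry on in cleartext
                                                                    s.sendmail("a@example.org", ["b@example.net"],
                                                                               b"Subject: hi\r\n\r\nhello")
                                                                ```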

                                                        1. 3

                                                          So, the parent is right, it’s trivially easy to MITM email.

                                                                  That is true, but opportunistic and unverified encryption does defeat passive global adversaries and passive MITM. These days you have to become active as an attacker in order to read mail, which is harder to do on a massive scale without leaving traces than staying passive. I think there is some value in this post-Snowden situation.

                                                          1. 1

                                                            What I’ve done in the past is force TLS on all the major providers. That way lots of my email can’t be downgraded, even if the long tail can be. MTA-STS is a thing now though, so hopefully deploying that can help too. (I haven’t actually done that yet so I don’t actually know how hard it is. I know the Postfix author said implementation would be hard though.)

                                                      2. 1

                                                        I get maybe 3-4 important emails a year (ignoring work). The rest is marketing garbage, shipping updates, or other fluff. So while I like the idea of self hosting email, I have exactly zero reason to. Until it’s as simple as signing up for gmail, as cheap as $0, and requires zero server administration time to assure world class deliverability, I will continue to use gmail. And that’s perfectly fine.

                                                        1. 7

                                                          Yeah, I don’t want self-hosted email to be the hill I die on. The stress/time/energy of maintaining a server can be directed towards more important things, IMO

                                                      1. 2

                                                        That’s why I’ve always been slightly bothered when people call the return key “enter”.

                                                        1. 9

                                                          Windows keyboards tend to have an Enter key but no Return key, Mac keyboards tend to have a Return key but no Enter key. They’re in the same place and at first glance do exactly the same thing so it’s easy to understand why people would say that.

                                                          1. 4

                                                            Even portable Macs used to have an enter key, where the right option key is now situated. I really missed it when it disappeared, only to discover fn+enter quite recently.

                                                            1. 3

                                                              Did you mean Fn+Return for Enter?

                                                              1. 3

                                                                Haha, of course.

                                                            2. 2

                                                              Well, non-Apple keyboards tend to label it “enter”, but it sends the keycode that means “return”.

                                                              1. 1

                                                                That sounds like a recent development. Almost all non-Apple keyboards I’ve seen have labeled the key with the carriage return symbol, not the text “Enter”.

                                                                1. 2

                                                                  I think the symbol is more common on ISO layout keyboards. The ANSI Thinkpad and Pixelbook I have both say “Enter”.

                                                            3. 1

                                                              Yes, this is one of my favourite pieces of trivia to bring up when I’m in the mood for teasing someone.

                                                              I’m also consistent in calling it Return, and most people seem to not even notice. (Unless I want to tease them by correcting them when they say Enter.)

                                                            1. 6

                                                              GnuTLS has had 20 high severity CVEs in the last 3 years.
                                                              OpenSSL, by comparison, has had 2 high severity CVEs, 8 medium severity CVEs, and 13 low severity CVEs.

                                                              1. 3

                                                                If security is the concern, then libressl should be mentioned.

                                                                1. 4

                                                                  Where is TLS used when security isn’t a concern? I really wish everyone would just switch from OpenSSL to LibreSSL. Both in downstream projects and in where people choose to send funding.

                                                              1. 11

                                                                I am genuinely surprised that PyPI allows package deletion. I thought after left-pad, everyone just kinda said “yeah, we shouldn’t allow that anymore” and disabled package deletion. npm certainly did. Rust/Cargo’s crates.io has similar policies. This is an accident waiting to happen.

                                                                1. 7

                                                                   I also assumed it wasn’t possible, but after one package I was using disappeared, I had to go verify, and in fact the option to delete the package is there. A big red alert is shown mentioning that other people will be able to use that package name after deletion, but the owner can proceed with the deletion if they really want to.

                                                                  This is an accident waiting to happen.

                                                                  I share the same opinion.

                                                                1. 7

                                                                  I have become increasingly convinced recently that Moxie was right, or at least might have been right.

                                                                   All the supposed benefits of federated systems - censorship resistance, availability, privacy - apply to the network as a whole. But if you focus on individual users, those properties get worse, not better. For example: XMPP. Unlike Signal, XMPP cannot be “shut down”, because you would have to shut down an ever-expanding list of federated servers. Same with surveilling the XMPP network. But if your goal is to compromise an individual user, that’s much easier than it is under Signal. Which security team is going to have a better chance defeating that attack: Signal’s security team? Or the operator of whatever random XMPP node the user is using, or possibly even the targeted user themselves (if they self-host)? (Same goes for availability: if your node goes down, the rest of the network stays up but that doesn’t matter to you because your node is down. Is the availability of the average node really going to be better than that of a hosted service with a whole team and on-call rotation dedicated to keeping it up?)

                                                                  End-to-end encryption (like OMEMO) solves some of these problems, but is unable to make deep improvements in the same way Signal can. For example, under OMEMO all message routing information (i.e. To and From fields) is still sent in the clear, because it has to be in order to be interoperable on the XMPP network. Meanwhile, Signal was able to unilaterally deploy Sealed Sender and completely eliminate vast quantities of this metadata.

                                                                  When we* standardized ActivityPub, we got an issue requesting end-to-end crypto to be part of the spec. I’m too lazy to find the minutes but at least IIRC, our answer was basically “the underlying concerns there are mostly addressed by the properties of federation, and if it isn’t you can spec out a protocol to do crypto over the AP core protocol anyway.” It didn’t help that this request came relatively late in the game, so it didn’t make it into the core of the spec. I thought this was a good decision at the time. Certainly from a purely technical standpoint I stand by our decision - there was no reason E2EE couldn’t be an extension (like OMEMO). From a broader perspective, though, I wonder if we made targeting individual users that much easier.

                                                                  I’ll also say that quite frankly writing the above was quite painful - I like ActivityPub a lot, I still want to believe in a lot of what it enables (for example, from a software freedom perspective it’s an incredible improvement) and the fediverse is a really cool set of communities. I personally know a lot of good people that have and are working on it, people who are friends and whose hard work I don’t want to dismiss or devalue. I want to still believe - someone please tell me I’m wrong.

                                                                  Everything is a tradeoff. But because of network effects, in some cases we have to make tradeoffs for everybody. And that means there will always be some group that the chosen tradeoffs don’t really fit. (Edited to clarify a few sentences.)

                                                                  *: I was in the Working Group that standardized ActivityPub and a number of related specifications, though I joined relatively late. That being said, I haven’t been involved in those communities in any serious way for several years and am not entirely up to date. Please don’t take this as me speaking as an actual authority.

                                                                  1. 3

                                                                    Same goes for availability: if your node goes down, the rest of the network stays up but that doesn’t matter to you because your node is down.

                                                                    This is why I found a lot of the early informational material surrounding Mastodon somewhat dishonest. IIRC the docs basically said the same thing, that mastodon can’t be shut down because it’s distributed. But the average user doesn’t know the difference between the network and their node, and I really don’t care about AP-compliant servers existing somewhere on the internet if I’ve just lost all my friends and posts.

                                                                    1. 2

                                                                       Of course google/gmail’s security team is better than you are. Same for many other services. But the value of hacking a service is (roughly speaking) the sum of the values of all its users, which makes big services much more interesting. Getting into smaller ones might be less (economically) interesting.

                                                                       People who are or can be special targets probably shouldn’t self-host, and one of the reasons is that they’re basically very interesting and easy enough to breach.

                                                                    1. 4

                                                                       I don’t know much about web programming or javascript and was unable to understand enough from the article to answer this question: Wikipedia is pretty much the only site apart from minimalist text only blogs where I can’t tell the difference between the site with noscript blocking everything, and the site with everything unblocked. What is the javascript that is running even for, and how is turning all of it off completely not a simpler way to optimise?

                                                                      And while we are on the subject, how is it that the visual editor works without javascript enabled?

                                                                      Sorry if I am being dense.

                                                                      1. 4

                                                                         Wikipedia is pretty much the only site apart from minimalist text only blogs where I can’t tell the difference between the site with noscript blocking everything, and the site with everything unblocked.

                                                                        i.e. doing progressive enhancement right. If JavaScript fails, you mostly can’t tell.

                                                                        What is the javascript that is running even for

                                                                        Looking at a random Wikipedia page, you get:

                                                                        • Reference tooltips (when hovering Wikipedia links)
                                                                        • The image navigation interface (when you click on the strawberry)
                                                                        • Some things about the visual editor (maybe the search box autocomplete uses some components from there?)
                                                                        • Some tracking / analytics / et cetera code which takes the lion’s share of subsequent JS loaded

                                                                        How is turning all of it off completely not a simpler way to optimise?

                                                                         Because you still want the nice stuff, and presumably the initial JavaScript also loads extra CSS, for which you’d have to devise other ways to load conditionally / asynchronously.

                                                                        how is it that the visual editor works without javascript enabled?

                                                                         The visual editor can’t work without JavaScript; even contenteditable depends on DOM APIs for keyboard shortcuts and things like that. Turning off JavaScript from a browser extension after everything has loaded, without reloading the page, does not preclude the existing JavaScript from running.

                                                                        1. 3

                                                                          Can’t speak to exactly what the JS does, but I know the visual editor is based on the contenteditable HTML attribute. Years ago I attended a talk from some Wikimedia folks on how they built the visual editor, and at least at the time contenteditable apparently had a huge number of gotchas, mostly related to quirks in various browsers and/or underspecified edge cases. My guess is that the visual editor works without JavaScript, but not as well because the JS can’t be used to paper over these inconsistencies. I don’t actually know, though. As I said this was several years ago (2015 or 2016, I think, at Open Source Bridge).

                                                                          1. 4

                                                                            Yes, it’s exceedingly hard to extrapolate from the basic functionality browsers offer when working with contenteditable. That’s why WYSIWYG editors that work well (e.g. ProseMirror, CodeMirror) have to handle everything themselves — off the top of my head: undo/redo history, a gazillion IME (input method editors), weird clipboard contents, OS-level / browser-extension-level things that change the DOM from under you (e.g. the Grammarly extension) — and end up with huge code-bases.

                                                                          2. 2

                                                                            Off the top of my head, here are a few things I can think of:

                                                                            • Highlighting citations when you click the [123] links to jump down

                                                                            • Those annoying summary “cards” which appear when you hover a link to another page

                                                                            • Some things related to editing

                                                                             • On mobile, the expand/collapse sections, which make it so you are no longer in the same place on the page when you use the Back button

                                                                            • Collapse/expand of modules like “timeline of models produced by X car company”

                                                                            So in general, little UI niceties which try to make the site more friendly and convenient, though often just create more annoyance and friction.

                                                                          1. 2

                                                                            I have commented on Purism’s customer support (twice) and build quality before. This all pretty much matches my experience. Don’t buy from Purism.

                                                                              I will say though, this review actually kind of explains why my problems with customer support were so bad. The big problem I had was an inability to get replacement parts for repair (so much for the laptop being easy to service), but if all their stuff is coming from China through a middleman in San Francisco, that’s probably why they can’t get replacement parts.

                                                                            They did actually email me last February (so, two years and a month after the original incident) to say they could replace my screen if I wanted. I didn’t reply - I bought from System76 long ago and haven’t looked back. With a few exceptions (which I’m fine looking past), I’ve been very happy with them.

                                                                            1. 9

                                                                              Upvoted because it’s interesting, and because I think there’s a lot of truth in this piece. However, I found a lot of the assertions extreme. To pick a random example: it seems silly to say that HTTP/3 is “strictly worse” for everybody except megacorps, or that no one cares about TCP head-of-line blocking. HTTP/3 solves actual performance problems on the web.

                                                                              Now, I think it would be completely fair to say that the web isn’t the only consumer of HTTP (although, I mean, HTTP/1.1 is still available right there if you need it), or to argue that the costs of HTTP/3 outweigh the benefits. But that’s not what the author said, and it undermines their point which, again, I mostly agree with.

                                                                              1. 22

                                                                                  The question is: which users actually have those problems? Which users are harmed by complicated specs that favor particular orgs, and which users benefit from shaving a few percent off their server-farm costs?

                                                                                It’s important to remember that a lot of the “problems” that get talked about in tech have important context.

                                                                                1. 10

                                                                                  Again, I mostly agree. I’m not really criticizing the point that the article made, just that I wish it had made the point in a more honest/balanced way. The hardline no-nuance stance the article takes is a disservice to the underlying point that you’re referring to. It’s much better to admit there are upsides and then say “the downsides still outweigh the upsides”.

                                                                                  1. 3

                                                                                    That’s a totally reasonable take, well put!

                                                                                  2. 5

                                                                                      Can’t speak about HTTP/3, but as a side note, QUIC in my experience is really nice to write protocols for. It has lightweight streams that can be reliable or unreliable, and they are multiplexed over a single real connection for you. This removes a large amount of the work of defining framing and messaging, and gives you more flexibility than TCP. Maybe if we’d had QUIC 20 years ago we wouldn’t be shoving everything over HTTP as the easy messaging option.
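
                                                                                      To give a flavour of what that buys you, here is a TypeScript sketch against a hypothetical QUIC binding (the interfaces are invented for illustration; real libraries differ in naming, but the shape is roughly this):

                                                                                        // Hypothetical QUIC binding, made up for illustration; not a real library.
                                                                                        interface QuicConnection {
                                                                                          openStream(opts: { reliable: boolean }): Promise<QuicStream>;
                                                                                        }
                                                                                        interface QuicStream {
                                                                                          write(data: Uint8Array): Promise<void>;
                                                                                          close(): Promise<void>;
                                                                                        }

                                                                                        async function talk(conn: QuicConnection): Promise<void> {
                                                                                          // Each logical channel gets its own stream; the transport multiplexes
                                                                                          // them over one connection and does the framing, so a stalled stream
                                                                                          // doesn't head-of-line-block the others.
                                                                                          const control = await conn.openStream({ reliable: true });
                                                                                          const telemetry = await conn.openStream({ reliable: false });
                                                                                          await control.write(new TextEncoder().encode('HELLO v1'));
                                                                                          await telemetry.write(new TextEncoder().encode('cpu=0.42'));
                                                                                          await control.close();
                                                                                        }

                                                                                      Over bare TCP you would be hand-rolling length prefixes and your own multiplexing to get the same thing.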

                                                                                    1. 2

                                                                                        Realistically speaking, users and developers aren’t harmed by complicated specs at all, since they are protected by libraries. Complicated specs only harm library developers. It’s not like people implement HTTP/1.1 themselves. (Yes, it’s valuable that you can do HTTP/1.1 yourself when needed, but that is not a normal scenario.)

                                                                                        Also, while this does not apply to HTTP/3, HTTP/2 is easier to handle correctly than HTTP/1.1: it’s a binary protocol rather than a text protocol. A text protocol is easier to experiment with, but fixed-size binary framing is overall better for production.
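
                                                                                        As a minimal sketch of what fixed-size binary framing means in practice, this parses the 9-byte HTTP/2 frame header from RFC 7540 (TypeScript; it assumes you already have the bytes in hand). Contrast with HTTP/1.1, where you scan text for CRLFs and juggle Content-Length versus chunked encoding:

                                                                                          // The fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1).
                                                                                          interface FrameHeader {
                                                                                            length: number;   // 24-bit payload length
                                                                                            type: number;     // e.g. 0x0 = DATA, 0x1 = HEADERS
                                                                                            flags: number;
                                                                                            streamId: number; // 31 bits; the high bit is reserved
                                                                                          }

                                                                                          function parseFrameHeader(buf: Uint8Array): FrameHeader {
                                                                                            if (buf.length < 9) throw new Error('need at least 9 bytes');
                                                                                            return {
                                                                                              length: (buf[0] << 16) | (buf[1] << 8) | buf[2],
                                                                                              type: buf[3],
                                                                                              flags: buf[4],
                                                                                              streamId: ((buf[5] & 0x7f) << 24) | (buf[6] << 16) | (buf[7] << 8) | buf[8],
                                                                                            };
                                                                                          }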

                                                                                      1. 7

                                                                                          Complex specs do harm users, because they result in less choice in the ecosystem. Every feature like this could be valuable in itself, but each one increases the barrier to developing another browser, server, etc., which results in less diversity. Not all libraries can be used everywhere, and even when they can be shoe-horned in, the application may suffer in reliability or performance.

                                                                                        On the other hand, relatively simple specs like JSON or HTML give a lot of choice (HTML can at least be partially implemented in a productive way).

                                                                                        1. 7

                                                                                          Complicated specs only harm library developers.

                                                                                            That kind of harm tends to “trickle down” to developers who use said libraries, and then to their users. Complicated specs result in libraries that are slow to write, difficult to test, and very large and opaque. Even when they’re open source, it’s very hard (and usually impossible) for anyone except the people working on them full-time to add fixes or new features. It’s not like library developers have magical immunity to the perils of complicated specs.

                                                                                          1. 3

                                                                                              This is not always the case. TCP is a complicated spec: the original may not be, but TCP-as-currently-used certainly is. Yet TCP users are protected by the socket API, and TCP’s complexity mostly does not trickle down to its users. Good abstractions are possible, and implementation complexity DOES NOT imply interface complexity. I detest the widespread attitude against implementation complexity.
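
                                                                                              To illustrate the point, here is roughly the entire surface a TCP user sees through Node’s socket API (the host and request are just examples):

                                                                                                // The whole interface a TCP user sees: connect, write, read events.
                                                                                                // Retransmission, congestion control and window management all happen
                                                                                                // inside the kernel's TCP implementation.
                                                                                                import * as net from 'node:net';

                                                                                                const socket = net.createConnection({ host: 'example.com', port: 80 }, () => {
                                                                                                  socket.write('GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n');
                                                                                                });

                                                                                                socket.on('data', (chunk: Buffer) => process.stdout.write(chunk));
                                                                                                socket.on('end', () => console.log('connection closed'));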

                                                                                            1. 4

                                                                                                I don’t mean that the complexity itself trickles down to users; I mean that the bugs, slow improvement pace and/or frequent patching – all inherent to anything that’s difficult to implement – trickle down. Libraries don’t just pop into existence, somebody has to write them, and the people who write them are just as susceptible to making mistakes as other programmers. The more complex the specification, the more likely it is that implementations of it will have mistakes.

                                                                                                (Edit:) Having spent quite a few years dealing with… not the best libraries, I think this is a pretty big deal. When something breaks in an application, you can’t just tell users that, uh, it’s not us, it’s a library we’re using that has some bugs and, uh, you know what, here’s a link to their bug tracker, go talk to them. If it’s your firmware that crashes, it’s your bug to fix; no one cares that it’s not in any code that you’ve written. The more complex a library is, the harder it is to fix, no matter how well-written it is (although a poorly-written library definitely adds its own difficulty to that of the spec). But it’s not magical armor. Libraries have bugs, just like anything else.

                                                                                          2. 6

                                                                                            In my career, the devs I rate the most are those that understand the entire stack top to bottom on a deep level.

                                                                                            People that effortlessly fire up Wireshark or strace to get to the bottom of some strange bug.

                                                                                              Even someone whose entire work life revolves around React will at some point need to interact with a backend. If they don’t know what’s actually going on, the most trivial problem can get them stuck. Or worse, they make bad, unscalable architecture decisions because things appear to work fine in their dev setup.

                                                                                            Libraries speed up development, but they mustn’t replace understanding.

                                                                                      1. 5

                                                                                        What still stands out are the touchpad and the speakers of the MacBook.

                                                                                          While the touchpad on the MacBook is indeed amazing, surprisingly, Android (on x86) comes with better gesture support than stock macOS. People always forget that Android is, uh, a Linux distro.

                                                                                          I had an Android installation on a MacBook Pro 2015 and everything worked flawlessly up until the Oreo release, where Wi-Fi crashed the system on connection (kernel panic; seems to be specific to Macs…). I haven’t tried newer releases, but otherwise that was the best out-of-the-box UX I’ve seen on desktop Linux. Shutdown was also instant (systemd has an excessive DefaultTimeoutStopSec=90s, and macOS takes a few seconds too).

                                                                                        1. 12

                                                                                            While the touchpad on the MacBook is indeed amazing, surprisingly, Android (on x86) comes with better gesture support than stock macOS.

                                                                                            For me it’s not the number of gestures that wins. I turn most of them off. It’s the precision in moving the pointer, and how well macOS filters out accidental taps and movements.

                                                                                            This gets me every time I think I’m ready for Linux. The macOS input handling is just in a league of its own. That, and high-DPI support, tearing while scrolling, and jerky scrolling.

                                                                                          1. 5

                                                                                              Libinput (the default touchpad driver in the most recent releases of most distros) is a huge improvement over where things were even a couple of years ago. At least in my case, palm rejection and accidental clicks are things I don’t worry about anymore, and input responsiveness is great.

                                                                                              Jerky scrolling and tearing are mostly a graphics driver issue. Unfortunately, they’re still pretty pervasive on some Intel GPUs.

                                                                                            1. 2

                                                                                              Sounds good! I look forward to trying out Linux soon again since Apple seems dead set on ruining macOS for me.

                                                                                            2. 3

                                                                                              I’d agree with this. For me everything else (my freedom, ability to tune things, trustable code etc.) is well worth ditching macOS, but I still miss the Mac touchpad. Natural scrolling is a big part of the reason why. GNOME has an option to change the scrolling direction so it’s “natural”, but it’s not at all the same because moving my fingers down on the touchpad does nothing for a bit, and then scrolls down a few lines - just like it would on a mouse’s scroll wheel. On the Mac the scrolling matches where your fingers move exactly, like it would on an actual touchscreen device.

                                                                                              1. 2

                                                                                                Ah yes. I’m not much of a tinkerer, but I’m totally with you on sandboxing. It’s like, let me be in charge of my computer, thanks.

                                                                                                  That’s exactly it: scrolling on Linux often feels like a mouse scroll wheel, and it’s not the same.

                                                                                          1. 2

                                                                                              I don’t think it’s a fair comparison. Synology sells its own hardware (which the software is presumably optimised for), is expensive (relative to NC), and the hardware is not upgradable (in most models).

                                                                                              I built my own NAS, run unRAID on it, and run NC. It’s okay. It feels bloated, and everything feels halfway done. But all the apps I use get constant updates, so I’m sure it’ll be better next year. Heck, it’s free and OSS; can’t expect more.

                                                                                              As an aside, the NC Mac app is garbage: constantly crashing, and not updated in months.

                                                                                            1. 1

                                                                                                I think if you compare the core functionality of Synology and NC then you’re right - one is a NAS, the other is a file-syncing application. However, both have apps that overlap in their feature set - contacts & calendar, file syncing, online document editing, email, etc. These are the kinds of tools people tend to use on a home server.

                                                                                                When comparing the overlapping features, Synology comes out way in front. Honestly, I don’t understand how NC is so popular.

                                                                                              1. 1

                                                                                                  I think NC is popular because of its OwnCloud heritage, but I only started using it last year. I looked at Synology and QNAP and others, and chose NC for the upgradability and lack of vendor lock-in.

                                                                                                1. 1

                                                                                                  The scopes are different though, and the underlying technology is different. This limits what can be done with Nextcloud compared with Synology.

                                                                                                  As an example: take a BitTorrent client. For a platform like Synology this is easy because they can take an existing BitTorrent client and integrate it in the operating system offering. For Nextcloud though, which runs within the confines of a web server and does not control the underlying operating system, this is harder. Either they have to require a bunch of manual setup to install the native client that’s required, or they have to implement a BitTorrent client in PHP which wouldn’t work that well anyway because the architecture is oriented around request/response cycles and BitTorrent fits extremely poorly into that model.

                                                                                                  A more apples-to-apples comparison, I think, would be to compare Synology with Nextcloud running on top of FreeNAS.

                                                                                              1. 10

                                                                                                  My feeling is that Nextcloud is compromising the quality of its core features by expanding out to try to do everything else (the shit-quality apps Kev talks about). For the same reason, though, I don’t mind it lacking a backup app: I’d rather use a first-class backup tool outside Nextcloud than rely on them to get that right.

                                                                                                1. 1

                                                                                                    That’s a great point actually - I’d rather use a first-class backup solution than a half-baked one.

                                                                                                  1. 1

                                                                                                      Nextcloud the company only focuses on some of those apps, and the “Official” label in the app store doesn’t necessarily mean that the company is involved in developing that app. It can be confusing to figure this out, though, especially because a lot of people from the company still support and help out with the community-developed apps even though those aren’t necessarily the company’s priority.

                                                                                                  1. 23

                                                                                                    Note that:

                                                                                                        • Browsers are pretty much already “bundled” and exist outside the traditional distribution model. Pretty much all stable distributions have to take upstream changes wholesale (including features, security fixes and bug fixes) and can no longer cherry-pick just security fixes. Packaging browsers as snaps is merely admitting that truth.

                                                                                                        • The chromium-browser deb is a transitional package so that users who are upgrading don’t end up with Chromium removed. It is done this way for this engineering reason - not a political one. The only (partly) political choices here are to ship Chromium as a snap and to no longer spend the effort of maintaining Chromium’s deb packaging. Background on that decision is here: https://discourse.ubuntu.com/t/intent-to-provide-chromium-as-a-snap-only/5987

                                                                                                        • Ubuntu continues to use the traditional apt/deb model for nearly everything in Ubuntu. Snaps are intended to replace the use cases that PPAs and third party apt repositories serve, and anything else that is already shipped “bundled”. For regular packages that don’t pose any special packaging difficulties with the traditional model, I’m not aware of any efforts to move them to snaps. If you want to never use snaps, you can configure apt to never install snapd and it won’t (see the sketch after this list).

                                                                                                    • Free Software that is published to the Snap Store is typically done with a git repository available so it is entirely possible for others to rebuild with modifications if they wish. This isn’t the case for proprietary software in the Snap Store, of course. The two are distinguished by licensing metadata provided (proprietary software is clearly marked as “Proprietary”). This is exactly the same as how third party apt repositories work - source packages might be provided by the third party, or they might not.

                                                                                                    • Anyone can publish anything to the Snap Store, including a fork of an existing package using a different name. There’s no censorship gate, though misleading or illegal content can be expected to be removed, of course. Normally new publications to the Snap Store are fully automated.

                                                                                                        • The generally cited reason for the Snap Store server-end not being open is that it is extensively integrated, in deployment, with Launchpad and other deployed server-end components, and that opening it up would be considerable work. Canonical spent that effort when the same criticism was made of Launchpad, but the effort was wasted: GitHub (proprietary) took over as the Free Software hosting space instead, and nobody stood up a separate Launchpad instance even after it was opened. So Canonical will not waste that effort again.

                                                                                                    • The generally cited reason for the design of snapd supporting only one store is that store fragmentation is bad.
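
                                                                                                        For example, the apt configuration mentioned above can be a one-file pin (the path and file name here are illustrative; Linux Mint ships essentially the same thing as nosnap.pref). A negative Pin-Priority stops apt from ever installing the package:

                                                                                                          # /etc/apt/preferences.d/nosnap.pref  (path and name are illustrative)
                                                                                                          # A Pin-Priority below zero prevents apt from installing the package at all.
                                                                                                          Package: snapd
                                                                                                          Pin: release a=*
                                                                                                          Pin-Priority: -10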

                                                                                                    I hope that sheds some clarity on what is going on. I tried to stick to the facts and avoided loading the above with opinion.

                                                                                                        • Opinion: Ubuntu has always been about making a bunch of default choices. One choice Ubuntu has made in 20.04 is that snaps are better for users than third party apt repositories (because the former run in a sandbox and can be removed cleanly; the latter gives third parties root on your system and typically breaks the system such that future release upgrades fail). Some critics complain that users aren’t being asked before the Chromium snap is installed. But that would be a political choice. Ubuntu is aimed at users who don’t care about packaging implementation details and just want the system to do something reasonable. Ubuntu’s position is that snaps are reasonable. So it follows that Chromium packaging should be adjusted to what Ubuntu considers the best choice, and that’s what it’s doing.

                                                                                                    Disclosure: I work for Canonical, but not in the areas related to Mint’s grievances and my opinions presented here are my own and not of my employer.

                                                                                                    1. 8

                                                                                                          Thanks a lot. While I don’t agree with the opinion at all, the background explanation is much appreciated.

                                                                                                      1. 7

                                                                                                        The chromium-browser deb is a transitional package

                                                                                                        I can’t speak about Mint, but in Ubuntu the chromium-browser deb installs Chromium as a snap behind the scenes.

                                                                                                            The generally cited reason for the Snap Store server-end not being open is that it is extensively integrated, in deployment, with Launchpad and other deployed server-end components, and that opening it up would be considerable work. Canonical spent that effort when the same criticism was made of Launchpad, but the effort was wasted: GitHub (proprietary) took over as the Free Software hosting space instead, and nobody stood up a separate Launchpad instance even after it was opened. So Canonical will not waste that effort again.

                                                                                                            So, unless you’ll own the market with the product, it’s not worth open sourcing? IMO releasing a product open source is never “wasted effort”, because it may prove useful in some capacity, whether you as the original author know it or not. It may spawn other ideas, provide useful components, be used for learning; the list goes on and on.

                                                                                                        1. 9

                                                                                                          IMO releasing a product open source is never “wasted effort”

                                                                                                          It’s very convenient to have this opinion when it’s not you making the effort. People seem to care a lot about “providing choice” but it somehow almost always translates into “someone has to provide choice for me”.

                                                                                                          1. 5

                                                                                                            It’s very convenient to have this opinion when it’s not you making the effort.

                                                                                                                True. I should have worded that better. I was talking about the case of simply making the source available, not all the added effort of building a community, making a “product”, etc. I still don’t believe companies like Canonical have much of a leg to stand on when arguing that certain products shouldn’t be open source, when open source is kinda their entire thing and something they speak pretty heavily on.

                                                                                                            1. 4

                                                                                                              Yep. Just to be clear, open-sourcing code isn’t free. At an absolutely bare minimum, you need to make sure you don’t have anything hardcoded about your infra, but you’ll actually get massive flak if you don’t also have documentation on how to run it, proper installation and operation manuals for major platforms, appropriate configuration knobs for things people might reasonably want to configure, probably want development to happen fully in the open (which in practice usually means GitHub), etc.—even if you yourself don’t need or want any of these things outside your native use case. I’ve twice been at a company that did source dumps and got screamed at because that “wasn’t really open-source.” Not that I really disagree, but if that wasn’t, then releasing things open-source is not trivial and can indeed very much be wasted effort.

                                                                                                              1. 3

                                                                                                                That’s true, but that cost is vastly reduced when you’re building a new product from scratch. Making sure you’re not hardcoding anything, for example, is much easier because you can have that goal in mind as you’re writing the software as opposed to the case where you’re retroactively auditing your codebase. Plus, things like documentation can only help your internal team. (I understand that when you’re trying to get an MVP out the door docs aren’t a priority, but we’re well past the MVP stage at this point.)

                                                                                                                    If the Snap Store were older I would totally understand this reasoning. But Canonical, a company built on free and open source software, really should’ve known that people were going to want the source code from the start, especially given their experience with Launchpad. I think they could have found a middle ground and said: look, here are the installation and operation manuals we use on our own infra. We’d be happy to set up a place in our docs that adds instructions for other providers if community members figure that out, and if there’s a configuration knob missing that you need, we will carry those patches upstream. Then it would have been clear that Canonical is mostly interested in its own needs for the codebase, but is still willing to be reasonable and work with the community where it makes sense.

                                                                                                          2. 4

                                                                                                                Opinion: Ubuntu has always been about making a bunch of default choices. One choice Ubuntu has made in 20.04 is that snaps are better for users than third party apt repositories (because the former run in a sandbox and can be removed cleanly; the latter gives third parties root on your system and typically breaks the system such that future release upgrades fail).

                                                                                                                I think this is a fine opinion, but it seems contradicted by the fact that some packages are offered both by the off-the-shelf repos and as snaps.

                                                                                                            1. 3

                                                                                                              I don’t see a contradiction. Can you elaborate?

                                                                                                              I did say “better than third party apt repositories”. The distribution has no control over those, so what is or isn’t available in them does not affect my opinion. I’m just saying that Ubuntu has taken the position that snaps (when available) are preferable over packages from third party apt repositories (when available). And what is available through the distribution’s own apt repository is out of scope of my opinion statement.

                                                                                                              1. 2

                                                                                                                Ubuntu has always been about making a bunch of default choices.

                                                                                                                What is the default choice when I type jq in bash?

                                                                                                                Command ‘jq’ not found, but can be installed with:
                                                                                                                sudo snap install jq # version 1.5+dfsg-1
                                                                                                                sudo apt install jq # version 1.6-1

                                                                                                                It’s fine, and a well-opinionated choice, that Ubuntu prefers snaps for third-party things. But I feel like the choice for a lot of first-party supported utilities is not well opinionated, and I’m left thinking about trade-offs when I go with one over the other.