1.  

    Maybe Google should show ads only to people using the Chrome browser while logged in to their Google account. It would become easier to stop fraud and would leave the rest of us alone! :-)

    1.  

      If you think about it, Google already has a significant incentive to get people to use Chrome while logged in, and most of us in “the rest of us” group see that as misaligned with the values and ideals we would prefer, and criticise a lot of their decisions. Thinking your suggestion through a few steps, the consequences of pushing that incentive much further in the “wrong” direction are really quite scary… maybe this already is them relatively leaving us alone 😬

    1. 3
                   X509v3 Basic Constraints: critical
                      CA:TRUE, pathlen:0
      

      That doesn’t look good for the leaf certificate…
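
      For reference, a leaf certificate would normally carry CA:FALSE there. A quick way to check your own chain (the file name is just a placeholder):

          # inspect the basic constraints of a certificate (placeholder file name)
          openssl x509 -in leaf.pem -noout -text | grep -A1 "Basic Constraints"
          # expected for a leaf certificate:
          #     X509v3 Basic Constraints:
          #         CA:FALSE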

      1. 2

        Whoops. Thanks for finding that! Just pushed a typo fix.

      1. 3

        The last part of the post seems to talk about reinventing ZeroTier. I am curious about any details on those plans and whether they can do away with (centrally managed) controller nodes.

        Edit: oh it is already there!

        1. 2

          https://tailscale.com/ is the product/service they have built around WireGuard.

        1. 12

          I also recently purchased and received a Pinebook Pro, and I love it!

          I cannot recommend the Pinebook Pro for a newbie (at least, not without local tech support). You need to have some experience with Linux and the command line to make it work. When I updated the keyboard + trackpad firmware as directed, I didn’t read the directions carefully enough; it turns out the keyboard stops working halfway through the update process! An external keyboard is required to finish the update, and I didn’t have one lying around at home, so I had to go into the office to borrow one. That said, the Pinebook Pro isn’t advertised as newbie-friendly, so I don’t currently consider this a problem.

          My one reservation about this computer as a tool (rather than a toy) is that I worry about keeping the software up-to-date and secure. I’m concerned that if e.g. Firefox isn’t updated immediately, it is a security risk, and I worry about logging into anything that matters. For example, Pinebook Pro’s default Debian build https://github.com/mrfixit2001/debian_desktop (which is a volunteer project) just updated to Firefox 71. What’s the difference between that and the latest Firefox 72? Looks like a bunch of security fixes: https://www.mozilla.org/en-US/security/advisories/mfsa2020-01/

          I don’t need that much in the way of specs to have a functional computer. I need a web browser, and a terminal with SSH. There are a number of optional applications I’d like to have, like my preferred editor Atom, Signal Desktop and Slack for messaging, but I can survive without them. But if I’m afraid to use the browser because it might not be entirely secure…

          I don’t expect browser updates to be a long-term problem; I’m sure I’m not the only person who wants an up-to-date browser, and they seem to be working on it. That said, it is a problem I currently have.

          I’d also feel a little better if the default OS was a commercial build rather than a volunteer project, but I suppose I should have shelled out the cash for System76 or something if I really cared about that.

          1. 4

            Fedora is getting better at supporting ARM64 (aarch64). It has supported the Pi 3B+ for a few releases now, which gives you all the same software releases (including the kernel!) as on x86_64. Pinebook Pro support is planned.

            1. 3

              For example, Pinebook Pro’s default Debian build https://github.com/mrfixit2001/debian_desktop (which is a volunteer project) just updated to Firefox 71.

              Do you mean that it doesn’t use the official Debian package repositories? Or just that it’s an unofficial installer? Because if it’s the former, that’s a really strange claim I would want to look into more, and if it’s the latter, then it’s completely unsurprising.

              1. 7

                When I look in sources.list, the apt sources are the standard Debian stretch repositories.

                However, it appears to be using a custom kernel, custom builds for Chrome + Firefox etc., and a bunch of other tweaks. This non-standard software is updated via a “custom updater capable of updating uboot, the Linux kernel and numerous packages” in the form of a shell script with a handy icon in the toolbar.

                So you can install normal software via apt, but the tweaked software optimized to work on this hardware is installed/updated through the custom updater.

                Some details here: https://forum.pine64.org/showthread.php?tid=7830

              2. 2

                I put Manjaro/XFCE on mine and it runs better than the default Debian build. The Pinebook Pro is also officially supported by Manjaro, so that feels good.

                Highly recommend it if you are a fan of Manjaro or Arch.

                1. 1

                  To be clear, when I said “I’d also feel a little better if the default OS was a commercial build rather than a volunteer project”, I don’t mean that I think the volunteers are doing bad work. I just mean that I worry about them being fairly compensated for their work, and I worry that without financial support they may not be able to maintain the software in the long run.

                1. 25

                  To be fair, I find it hilarious that every browser includes the “Mozilla” string in its user agent, dating from the late ’90s. As much as it pains me to say it, Google may be right here: the header is at best vestigial.

                  1. 2

                    I think it is weird that they still do; does anyone bother checking that part when sniffing anymore? I’d be surprised if anyone has for the last fifteen years.

                    1. 2

                      I know there are webmasters that use its presence to distinguish between bots (which typically don’t have it) and browsers (which usually do). It’s a heuristic, but it’s actually really good.
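
                      As a rough illustration of how coarse that heuristic is in practice (the URL is just a placeholder, and plenty of sites behave differently):

                          # a bot-style User-Agent may get refused outright (placeholder URL)
                          curl -A "MyFeedReader/1.0" https://example.com/feed.xml
                          # the same request with "Mozilla" in the User-Agent usually goes through
                          curl -A "Mozilla/5.0 (compatible; MyFeedReader/1.0)" https://example.com/feed.xml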

                      1. 4

                        I had to change my feed reader’s user-agent to lie because of this. It’s nonsensical, of course — RSS and Atom feeds are made for bots!

                        1. 2

                          Looks like a configuration error in the web server or app. Maybe they just told Nginx or their app to deny anything that is not a browser, forgetting to handle special cases like RSS.

                          1. 1

                            Looking at the code, it was actually a request to SquareSpace, and the poison seemed to be any mention of “Twisted”. Best guess: they are trying to ban Scrapy, which uses Twisted internally.

                            I’ve also seen CDNs reject requests when the User-Agent string contains “python” or lacks “Mozilla”. I guess lying is just part of HTTP these days.

                        2. 6

                          between polite bots and (browsers and evil bots) <3

                          1. 3

                            Perfect is the enemy of the good. Anyone might come into my house and rob me, but if someone knocks on my door and tells me they’re going to rob me, I’m still not going to let them in just because they asked permission.

                            1. 1

                              If they say it in a certain way, and act in a certain way, you will be thankful to them for the opportunity to be robbed.

                              Well, not you in particular, but people in general.

                          2. 1

                            indeed, but you don’t check for the Mozilla thing there!

                          3. 1

                            GitHub uses User Agent sniffing. I set my User Agent to “Firefox” (general.useragent.override) and some features on the site no longer work and GitHub complains that it doesn’t support old browsers.

                        1. 13

                          Even experts don’t know everything. This seems to me like relying on Actor X to talk to you about random subject Y, instead of a subject matter expert on subject Y.

                          That said, I don’t blame him for being so cautious about integrating ZFS into the Linux kernel all proper like. Oracle’s history has shown they are very eager to do everything possible to try and make a buck. But ZFS was released publicly before Oracle bought it, so they would have to exploit a lot of magic legal loopholes to get around the license Sun released ZFS under, before Oracle bought Sun.

                          ZFS is awesome, well maintained and supported, and, from my perspective, very much part of the boring technology one should definitely explore for use in your stack(s). Also, apart from FAT32, I think it’s the only filesystem that has implementations across all the major OSes (Windows, macOS, Linux, BSD, UNIX, etc.).

                          1. 22

                            But ZFS was released publicly before Oracle bought it, so they would have to exploit a lot of magic legal loopholes to get around the license Sun released ZFS under, before Oracle bought Sun.

                            so was the jdk, but that has never stopped oracle as far as I understand it

                            1. 4

                              Agreed. I totally get not wanting to merge code under ZFS’s unique license into the GPL’d kernel, given Oracle’s past behaviour…

                              1. 4

                                so was the jdk, but that has never stopped oracle as far as I understand it

                                That’s actually a really great non ZFS example: Oracle got all stupid with the JDK, trying to charge for it, so the community forked OpenJDK totally free of Oracle’s ‘stewardship’ and it’s been going gangbusters ever since.

                                1. 3

                                  That’s an oversimplification: OpenJDK is still developed primarily by Oracle employees and existed before Oracle bought Sun. The only thing that changed is that previously it was a bad idea to use the official JDK over OpenJDK, and now it’s a disastrously bad idea.

                                  1. 2

                                    Quite right.

                                    So the risk now is that Oracle might de-fund OpenJDK development.

                                    Given the VAST amount of critical infrastructure built atop the Java ecosystem that still exists, I’ll bet you dollars for donuts that the community would LEAP to the rescue if that happened.

                                    Everybody likes to slag Java, but most of those same people would be UTTERLY SHOCKED at how much of what they love is built on that ecosystem.

                              2. 6

                                I’m reading this as “do not use ZFS on Linux”, for both technical and legal reasons. I totally agree with Linus on that; I’m using XFS, which happens to be the CentOS default ;-) Using ZFS might be totally fine on FreeBSD though, I don’t know, but I hear good stories about it!

                                1. 12

                                  well, zfs on freebsd now uses the zfs on linux codebase, so i’m not really sure where the “unmaintained” part comes from. i’ve been using zfs on linux for quite a while now as a data dump, and it is really solid and easy to use.

                                  my favourite example: when you have a disk which has problems but isn’t dead yet, you can add a new drive, resilver and then unplug the failing drive. no need to run with a missing drive. this is one command, quoting the manpage:

                                  zpool replace [-f] pool old_device [new_device]

                                  Replaces old_device with new_device. This is equivalent to attaching new_device, waiting for it to resilver, and then detaching old_device.
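
                                  a concrete run might look something like this (pool and device names are made up):

                                      # swap the failing disk for a new one, resilvering onto the new one first (made-up names)
                                      zpool replace tank /dev/sdb /dev/sdc
                                      # check resilver progress
                                      zpool status tank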

                                  regarding “buzzword”: there are no equivalent options for linux imho, btrfs isn’t there and won’t be for a long time, if ever. xfs is ok, but has a completely different scope, being “just” a filesystem. zfs replaces the whole lvm+mdraid stack, and wraps it into usable tools without that many footguns.

                                  the amount of fud spread concerning zfs is really amazing. i’m a bit sad that linus apparently buys into it, i understand the fear about oracle licensing fsckups, though.

                                  1. 3

                                    zfs replaces the whole lvm+mdraid stack

                                    Along with cryptsetup.

                                    1. 1

                                      Yes, forgot that :)

                                  2. 6

                                    I read that as Linus’s perspective, but I (and many others) have a different one. I don’t see any technical reason not to use ZFS on Linux; it’s well maintained, stable, and pretty much boring tech these days.

                                  3. 5

                                    I don’t blame him either - there must have been a (legal) reason for Apple dropping it and coming out with APFS. Nonetheless it’s quite a strong opinion from someone who has never used it.

                                    1. 8

                                      ZFS had a lot of baggage that they didn’t want because they’d never expose it to the users anyway. And it wouldn’t have scaled to work on devices like the Apple Watch because of the ARC.

                                      1. 1

                                        Apple did implement ZFS on Mac OS X. I don’t remember exactly whether it made it into a public beta or whether its existence was only leaked, but apparently at some point Apple decided they wanted ZFS; they just never shipped it to users.

                                        1. 2

                                          Yes, it was a thing when I was there the first time; but the word on the street was that licensing issues basically killed it, and it was never a good fit for Apple’s primary use case for a file system.

                                          1. 1

                                            I think the problem was that NetApp sued Sun over alleged violation of their patents by ZFS:

                                            In the meantime, Sun and NetApp had been locked in a lawsuit over ZFS and other storage technologies since mid-2007. While Jonathan Schwartz had blogged about protecting Apple and its users (as well as Sun customers, of course), this likely led to further uncertainty. On top of that, filesystem transitions are far from simple. When Apple included DTrace in Mac OS X, a point in favor was that DTrace could be yanked out should any sort of legal issue arise. But once user data hit ZFS, it would take years to fully reverse the decision. While the NetApp lawsuit never seemed to have merit (ZFS uses unique and from-scratch mechanisms for snapshots), it indisputably represented risk for Apple.

                                            https://arstechnica.com/gadgets/2016/06/zfs-the-other-new-apple-file-system-that-almost-was-until-it-wasnt/

                                          2. 1

                                            Yes I still have a copy of the beta where they included ZFS support.

                                            You can still have ZFS on macOS, just not for your /.

                                        2. 3

                                          Apple dropping it and coming out with APFS

                                          AFAIK (and I am no expert) ZFS makes little to absolutely no sense on the home devices Apple is offering; ZFS is meant mostly for servers. APFS makes more sense in this case.

                                          1. 2

                                            ZFS would make sense on macOS, but agreed, ZFS makes some (but little) sense on iPadOS and zero sense on watchOS. ZFS is most comfortable with ~1 GB (or more) of RAM to itself, which is hard to come by on the smaller devices.
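
                                            On Linux the ARC can at least be capped if memory is tight; a rough sketch (the 512 MB figure is just an example, not a recommendation):

                                                # cap the ZFS ARC at ~512 MB for the current boot (example value)
                                                echo $((512 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max
                                                # or persistently via a modprobe option
                                                echo "options zfs zfs_arc_max=536870912" | sudo tee /etc/modprobe.d/zfs.conf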

                                        3. 3

                                          “so they would have to exploit a lot of magic legal loopholes to get around the license Sun released ZFS under, before Oracle bought Sun.”

                                          Them making APIs copyrightable was such a magic loophole that I started recommending against depending on anything owned by a company that does patent suits. Those less popular, foundation-run languages, OSes, etc. suddenly look worth the extra effort.

                                          1. 3

                                            Agreed on it being a major magic loophole. That’s not quite resolved yet; the Supreme Court hasn’t ruled. It will be very, very interesting to see which side the Supreme Court comes down on. Hopefully against Oracle, but who knows.

                                            ZFS is an interesting case, because basically everyone who worked on ZFS at Sun now works on the OpenZFS version, which is under the Sun license (CDDL) and can’t flow back into Oracle’s version (since they closed-sourced theirs), so there are two totally different versions of ZFS that are only compatible if you work at it: the ZFS everyone uses, and the ZFS Oracle supports, which nobody uses (again, unless they have to for non-technical reasons). The Oracle version doesn’t offer the good bits that the OpenZFS version has, like encryption, multi-platform support, etc.

                                            I also agree on your recommendations. Suddenly the BSD’s are looking better and better every day :)

                                            OpenJDK is pretty much the default Java these days, isn’t it? I don’t think anyone in server land is actively installing Oracle’s Java anymore, unless they have to (for generally non-technical reasons).

                                            1. 3

                                              Them making APIs copyrightable was such a magic loophole that I started recommending against depending on anything owned by a company that does patent suits.

                                              IANAL, but I think the problem for Oracle would, in this case, be that they themselves (actually Sun) released ZFS under an open source license, plus that the CDDL includes a patent grant. This is different from the Java case, because Google did not use OpenJDK, but based their implementation on Apache Harmony. If APIs are copyrightable, then you’d think that Sun releasing OpenJDK as GPL would at least license use of the API in that implementation (and in this case ZFS).

                                              Of course, they could sue ZoL for infringements of patents in newly-added code. But then they could probably sue developers of other filesystems with those patents as well.

                                            2. 2

                                              Oh I totally get it. I think his stance makes a lot of sense, but it will be interesting to see how or if things change when Ubuntu makes ZFS mainstream.

                                              It’s already available as a checkbox in the 20.04 installer, and I suspect as later releases happen it will become the default.

                                            1. 1

                                              A FRITZ!Box modem/router can print a configuration page with the WiFi info that also contains a QR code. I’m using diceware to generate the WiFi passphrase; it’s still easy for guests to use after you explain that it actually does contain spaces ;-)
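
                                              If you just want something quick in the same spirit, a rough shell approximation (assuming a word list at /usr/share/dict/words; real diceware uses dice and a dedicated word list):

                                                  # pick five random words from the system word list as a passphrase (approximation only)
                                                  shuf -n 5 /usr/share/dict/words | tr '\n' ' '; echo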

                                              1. 3

                                                Adding cgit is also a very simple and nice addition to running your own git server.

                                                1. 2

                                                  Also worth mentioning as part of the git ecosystem is gitweb, which is provided by default.

                                                  1. 1

                                                    gitweb has a pretty unintuitive and heavy UI, but its biggest selling point is that it’s bundled with git.

                                                    cgit is a cleaner design, but still not quite to my taste (I’d prefer a “simpler” UI), and since it’s not bundled, it’s a bit more painful to obtain.

                                                    I may work on some cgit-like daemon or CGI software to have a clean / simple UI to browse git repos, kinda like cgit.

                                                    Note: all the recommendations and ideas that have been shown here and on dev.to have motivated me to start working on a part 2, which will contain concepts such as annexes, hooks, the git daemon / gitweb daemon, and some tips and tricks I found useful.

                                                    1. 3

                                                      You should give fudge a try! :)

                                                      1. 2

                                                        I didn’t know about fudge, and after looking at the code, it looks like it needs a bit of rework (e.g. you are forced to use the YAML format, and the server only listens to localhost:8080 and cannot be configured), but it’s a really good start, thanks!

                                                        1. 2

                                                          I have some time to work on it over the holidays, so feel free to open issues! I picked YAML for the configuration format because I was familiar with it, though I wouldn’t mind adding support for another format.

                                                          1. 1

                                                            I guess I’ll fork it to have the first changes I’d like to have, then submit a PR so we can discuss what could be integrated right into fudge.

                                                1. 6

                                                  It almost seems like something you’d advertise because you want your opponent to use it so it becomes easier to take down…

                                                  Unfortunately, complexity has become a bit of a bragging point. People boast to one another about what’s in their ‘stack’, and share tips about how to manage it. “Stack” is the backend equivalent to the word “polyfill”. Both of them are signs that you are radically overcomplicating your design.

                                                  Source: https://idlewords.com/talks/website_obesity.htm

                                                  1. 18

                                                    I’ve worked on an open source project. Not such a tiny one: it used to be preinstalled with several major distros, and is still quite popular.

                                                    Early 2018 we had a major CVE, with remote code execution. We had a patch ready within 8 hours of discovery, and had it tested and in our official releases within a few days.

                                                    Debian took over a month to patch it (and continued using an old version with major bugs, only patching security issues themselves). And they were the fastest. Alpine 3.7 was the first to ship the fix, and that took an eternity. Previous Alpine versions (at the time still officially supported) never got the patch.

                                                    Now we’re moving towards snap/flatpak for desktop and Docker for server, and building our own packages and bundles, because distro maintainers are basically useless: they always ship ancient broken versions, users come to us to complain about stuff being broken (while distros refuse to ship bugfixes or versions from this decade), the maintainers are never reachable, and even security updates are shipped at glacial speed.

                                                    Honestly, distro maintainers are a massive security risk, and after this experience, I’m kinda mind blown.

                                                    1. 9

                                                      As an Arch packager, I can’t help but feel a little bit offended by what you said there. >:(

                                                      1. 11

                                                        Arch is actually one of the few distros where this issue never existed - but that’s because Arch, being rolling release, actually just uses our upstream sources, and updates frequently and reliably.

                                                      2. 7

                                                        because distro maintainers are basically useless

                                                        That’s quite an offensive statement.

                                                        1. 5

                                                          If major software that’s preinstalled and in the default start menu of Kubuntu is so outdated that it has remotely exploitable bugs, months after developers have released patches for all version branches, including the one used by Debian/Ubuntu/etc, then how can you really trust the packages installed on your system?

                                                          How many programs from the repos do you have installed that are not that common, or are complicated to package? Are you sure they’re actually up to date? Are you sure there are no vulnerabilities in them?

                                                          Ever since this, I can’t trust distros anymore.

                                                          1. 3

                                                            And that makes distro maintainers basically useless?

                                                            1. 7

                                                              Yes. If there’s no practical value add, that statement is true.

                                                              It’s harsh to take, but yes, it’s okay to ask groups that insist on their status - especially in a role prone to gatekeeping - to stand for their value.

                                                              1. 3

                                                                If you can’t trust software distributed by your distro to be up-to-date and safe, what use does it have then? Stability is never more important than safety.

                                                                The whole point people use distributions, and especially reputable ones, is because they want to ensure (a) stuff doesn’t break, and (b) stuff is secure.

                                                                1. 2

                                                                  If you can’t trust software distributed by your distro to be up-to-date and safe, what use does it have then?

                                                                  Of course packagers try to keep stuff up to date and secure, but a) things move fast, and spare time and motivation can be at a premium; and b) there’s too much code to audit for security holes.

                                                                  distro maintainers are basically useless

                                                                  Come on now… I assure you, you’d be pretty upset if you had to build everything from source.

                                                                  1. 4

                                                                    Of course packagers try to keep stuff up to date and secure, but a) things move fast, and spare time and motivation can be at a premium; and b) there’s too much code to audit for security holes.

                                                                    And this is where @arp242’s sentiment comes from. “In a world where there is a serious shortage of volunteers to do all of this, it seems to me that a small army of ‘packagers’ all doing duplicate work is perhaps not necessarily the best way to distribute the available manpower.”

                                                                    1. 1

                                                                      In a world where there is a serious shortage of volunteers

                                                                      This is false. All too often it is difficult to find good software to package. A lot of software out there is either poorly maintained, or insecure, or painful to package due to bundled dependencies, or has hostile upstreams, or it’s just not very useful.

                                                                      It’s also false to imply that all package maintainers are volunteers. There are many paid contributors.

                                                                    2. 1

                                                                      Come on now… I assure you, you’d be pretty upset if you had to build everything from source.

                                                                      I don’t necessarily have to — the distro can provide a clean base with clean APIs, and developers can package their own packages for the distro, as some operating systems already handle it.

                                                            2. 3

                                                              Various distributions, including Debian, backport security fixes to stable versions even when upstream developers don’t. It’s not uncommon for the security fixes to be released faster than upstream’s.

                                                              Your case is an exception. Sometimes this can happen with applications that are difficult to package, difficult to patch, or low on popularity.

                                                              Besides, it’s incorrect to assume that the package maintainer is the only person doing security updates. Most well-known distributions have dedicated security teams that track CVEs and chase the bugs.

                                                              1. 1

                                                                We already provide backported security fixes as .patch files directly usable with git apply, and we provide our own packages for old and recent branches. It’s quite simple to package, too. As for popularity, well, it was one of the preinstalled programs on Kubuntu, and it is in Kubuntu’s start menu (not on recent versions anymore, but on older ones it still is).
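
                                                                For a packager that boils down to something like this (the tag and file names are made up):

                                                                    # start from the exact release the distro ships (made-up tag)
                                                                    git checkout v1.2.3
                                                                    # apply the backported security fix we publish (made-up file name)
                                                                    git apply fix-security-issue.patch
                                                                    # then build the package as usual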

                                                                The fact that many distro maintainers still take an eternity to apply patches, and sometimes don’t apply them at all, makes relying on distro packages quite an issue. I don’t trust distro maintainers anymore, not after this.

                                                              2. 3

                                                                Honestly, distro maintainers are a massive security risk, and after this experience, I’m kinda mind blown.

                                                                I think this is mostly because you have a one-sided experience of this and it’s most likely a bit more nuanced and down to several factors.

                                                                One of them being that the CVE system is broken and hard to follow. How did you disclose and announce the CVE and fix? Did the patches need backports for the given releases, and were those provided? I don’t know the CVE number, so this is hard to follow up on. But the best approach is to announce in a place like oss-security from Openwall, where it should be picked up by all the distribution security teams.

                                                                The other side of this, which is what distribution maintainers see but few upstreams realize, is that patching dependencies is where most of the work is done. Distributing your app as a snap/flatpak works great if you also patch the dependencies and keep track of security issues in those dependencies. This is where most upstreams fail, and this is where distribution maintainers and the distro security teams improve the situation.

                                                                1. 1

                                                                  The other side of this, which is what distribution maintainers see but few upstreams realize, is that patching dependencies is where most of the work is done. Distributing your app as a snap/flatpak works great if you also patch the dependencies and keep track of security issues in those dependencies

                                                                  That’s why, if you ever build such images yourself, you need to automate it, have it as CI, and update those dependencies at least daily, and generate a new image whenever new dependencies are available. Obviously, you need automated tests in your build procedure to ensure everything still works together, as sometimes some dependencies break important stuff even in patch releases.

                                                                  How did you disclose and announce the CVE and fix? Did the patches need backports for the given releases, and were those provided?

                                                                  We provided patches for every version distros used, as nice patch files that could directly be applied with git apply, and in addition to the more common ways, we also directly contacted the package maintainers for our package for the important distros via email or instant messaging.

                                                                  In general, personally, I’m not a fan of the stable model anyway. We’ve done great work to ensure the software has stayed 100% binary compatible for all its protocols since 2009, and we support every supported version of Debian and Ubuntu even with our absolute newest builds, and yet, in the end, it’s the distro maintainers who not only ship outdated versions (apparently some users prefer buggy old versions) but also take ages to apply security fixes.

                                                                  1. 2

                                                                    That’s why, if you ever build such images yourself, you need to automate it, have it as CI, and update those dependencies at least daily, and generate a new image whenever new dependencies are available. Obviously, you need automated tests in your build procedure to ensure everything still works together, as sometimes some dependencies break important stuff even in patch releases.

                                                                    Which, again, few upstreams do, and they surely do not keep an eye on this at all. You sound like a competent upstream, and it’s nice when you encounter one :)

                                                                    We provided patches for every version distros used, as nice patch files that could directly be applied with git apply, and in addition to the more common ways, we also directly contacted the package maintainers for our package for the important distros via email or instant messaging.

                                                                    And this is how you should proceed. I would, however, contact the linux-distros list if it’s a widely used piece of software that multiple distributions package and the CVE is critical enough. https://oss-security.openwall.org/wiki/mailing-lists/distros

                                                                    In general, personally, I’m not a fan of the stable model anyway. We’ve done great work to ensure the software has stayed 100% binary compatible for all its protocols since 2009, and we support every supported version of Debian and Ubuntu even with our absolute newest builds, and yet, in the end, it’s the distro maintainers who not only ship outdated versions (apparently some users prefer buggy old versions) but also take ages to apply security fixes.

                                                                    The work is appreciated, but I’d still urge you not to let one bad experience ruin the whole ordeal. Distribution security teams are probably among the least resourced teams, and sometimes things do fall through the cracks.

                                                                    1. 3

                                                                      The work is appreciated, but I’d still urge you not to let one bad experience ruin the whole ordeal. Distribution security teams are probably among the least resourced teams, and sometimes things do fall through the cracks.

                                                                      But given that the main argument of distros is security, that statement flies directly in the face of their promises.

                                                                      1. 2

                                                                        But given that the main argument of distros is security, that statement flies directly in the face of their promises.

                                                                        I don’t think it’s the main argument, but it’s surely one of them. If you want to be completely covered, you need a well-paid team able to respond. You won’t get this with a community-based distribution; we are unpaid volunteers, just like most upstreams. You’ll have to use something backed by a paid team if you expect premium service and full coverage.

                                                                        Anything else is only on a best-effort basis. The CVE system is sadly hard to navigate, ingest and process. Some things are going to bubble up faster, and some things are going to be missed.

                                                                        1. 2

                                                                          I have absolutely no issue with all your statements, but it is a cornerstone argument.

                                                                          I’m fine with community distributions, if they own it, and agree that paid distros are a good way to go. RHEL licenses are actually worth their money.

                                                                          I disagree with the reading of best-effort, though, because it goes both ways. If your work impacts others, either by causing them more support requests or by slowing down their iteration speed, you need to make sure you don’t add undue labor.

                                                                2. 3

                                                                  With this attitude, which a lot of developers seem to have nowadays, it doesn’t make sense to have your software included in distributions. As a packager I’d call this a hostile upstream… Just distribute it as a flatpak and/or snap and be done with it.

                                                                  Relevant here may be a blog post from an upstream fully embracing the distribution instead of fighting it: https://www.enricozini.org/blog/2014/debian/debops/

                                                                  1. 3

                                                                    It allows me to rely on Debian for security updates, so I don’t have to track upstream activity for each one of the building blocks of the systems I deploy.

                                                                    That’s exactly what I used to believe in, too, but after this experience, the facade has cracked. I can deal with 90% of my packages being years out of date and full of bugs because the distro wants to be stable and refuses to apply bugfixes or update to newer versions, but if security updates aren’t reliably applied even when they have a CVE (and Debian just ignores issues entirely if they have no CVE), then how can one still trust the distro for security updates? Having a remotely exploitable unauthenticated DoS, if not outright RCE, in publicly facing software for 30 days is absolutely not fine.

                                                                    As a packager I’d call this a hostile upstream… Just distribute it as a flatpak and/or snap and be done with it.

                                                                    We actively maintain all version branches, and we even provide backported security patches as nice little .patch files for all the major.minor.patch releases Debian/Ubuntu still use. You can build it nice and simple; you just have to apply one little patch. It’s not like we’ve been actively hostile - what more should we have done, in your opinion?

                                                                    1. 2

                                                                      how can one still trust the distro for security updates?

                                                                      Fair enough, if they are not applied. I personally know at least one Debian package maintainer (not me, I don’t like Debian) who takes excellent care of their packages, including in the stable releases. So it may depend on the maintainer. But maybe that is your point: that there is no universal standard for maintainers…

                                                                      what more should we have done, in your opinion?

                                                                      I don’t know this specific case. There are a number of other ‘historical’ cases where packagers gave up on packaging ‘upstream’ software, e.g. https://www.happyassassin.net/2015/08/29/looking-for-new-maintainer-for-fedora-epel-owncloud-packages/. I also wrote a blog post about it in 2016: https://www.tuxed.net/fkooman/blog/owncloud_distributions.html I guess the best one can do is follow these discussions and if possible make it easier for distributions to package the software. Especially the ownCloud case back then bugged me a lot. But as you can see from some other people in those discussions, we just gave up on ownCloud and used something else instead…

                                                                1. 1

                                                                  That’s great! I’m also following the Pi 4 upstreaming effort here. So hopefully soon we’ll be able to run unmodified aarch64 distributions like Fedora and Debian on it!

                                                                  1. 1

                                                                    Just closed my account for good measure. Hosting my own Git repositories with cgit.

                                                                    1. 8

                                                                      Limit the number of dependencies (and dependencies of dependencies) of your software as much as possible. If you use dependencies, make sure they are well supported, or be willing (and able!) to take over their maintenance when needed…

                                                                      Check what is already packaged, i.e. language(s), frameworks, dependencies, in your favorite (LTS) OS(es) and use those (versions); e.g. check what CentOS and Debian/Ubuntu are doing and target those specifically. It is not always necessary to target the latest and greatest…

                                                                      Make it possible to run your included tests with system libraries instead of your (bundled) copies as well; the packager can then make sure everything works with the already packaged dependencies by running the tests during the build stage, e.g. in %check in RPM spec files…
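
                                                                      For the RPM case that can be as small as this in the spec file (a sketch, assuming the project ships a make-driven test suite):

                                                                          %check
                                                                          # run the bundled tests against the system libraries the package was built with
                                                                          make check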

                                                                      1. 4

                                                                        One good way to check that is to use Repology.

                                                                      1. 1

                                                                        What is the benefit of the footer being completely unverified?
                                                                        Seems to me that it would make the footer both untrustworthy and potentially dangerous (exposes parsing to unverified input).

                                                                        Another aspect of JWT that I always disliked was that reserved keys are mixed in with data keys in the claims section. Why not have claims be a section entirely separate from data, or at the very least have a dedicated data: {} subsection?

                                                                        (I also wish the overall encoding used something like tnetstrings/netstrings instead of JSON, with just the dedicated data section using JSON, but I guess JSON is so ubiquitous these days it is more or less expected.)

                                                                        1. 2

                                                                          The footer is signed, just not encrypted. So it is verified.

                                                                          1. 3

                                                                            Yes, but if you use the footer to store the key ID, e.g. {"kid": "foo"}, you do first need to parse the JSON before you can select the key to verify the signature over the footer…

                                                                            1. 1

                                                                              Ah, that’s good to hear. For some reason the post made me think it wasn’t signed, just base64 encoded.

                                                                          1. 2

                                                                            Is this not simply “librewashing” a bad idea? i.e. giving proponents of centralized DNS a way to claim that it is not just Google and Cloudflare running this? Or am I being too cynical here?

                                                                            1. 1

                                                                              I think that’s too cynical. How is standing up a service to compete with Cloudflare and Google bad for centralization?

                                                                              What would be better is if 50 other organizations did the same thing.

                                                                              1. 2

                                                                                How is standing up a service to compete with Cloudflare and Google bad for centralization?

                                                                                That was not exactly my point. My point is that it is irrelevant to have multiple copies of a bad idea. No matter how many “copies” of a DoH service you have, it remains a force of centralization. Sure, there will be many at first, but in the end only 2 or 3 will get serious adoption.

                                                                                If one were to deploy DoH in a decentralized style, i.e. every ISP running a DoH server for their customers, DoH would bring you exactly nothing compared to DoT or even plain DNS. DoH encourages further centralization, and that is the bad thing here.

                                                                                If you are operating in a hostile network, e.g. when traveling, DoH also doesn’t really solve your problem. It may at the moment, but not when the capabilities of attackers catch up. Back to square one. Only a real VPN to a “trusted” endpoint would help in that scenario.

                                                                                So DoH does not solve any real problem in a satisfactory way, instead it encourages further centralization.

                                                                                1. 3

                                                                                  No matter how many “copies” of a DoH service you have, it remains a force of centralization.

                                                                                  You’ve made this assertion, but I don’t see you doing anything to support it, notwithstanding your point that DoH doesn’t offer anything over DoT (which is moot and anyway this service also provides DoT).

                                                                                  So, again, why is DoH bad for centralization?

                                                                                  1. 1

                                                                                    So, again, why is DoH bad for centralization?

                                                                                    Ah, I think there was some confusion here. I consider centralization a bad thing, not a good thing… DoH is good for centralization indeed!

                                                                                    1. 2

                                                                                      You can swap the question if you want, but it’s still a question you haven’t answered. Why is DoH good/bad for centralization?

                                                                            1. 1

                                                                              Man, people seem to love grabbing their pitchforks without even wanting to hear Google’s explanation.

                                                                              1. 20

                                                                                We did hear Google’s official explanation though? They think uBlock Origin does too many things and should be split up.

                                                                                Sure, it’s complete BS and probably automated, but that was their response even when they were asked about the decision.

                                                                                1. 4

                                                                                  It’s a canned message, and the extension got approved a few days later. Perhaps the employee sneezed and clicked the wrong button, or perhaps they misunderstood something, or perhaps something else. In other words: an individual just made a wrong judgement call. It happens.

                                                                                  1. 2

                                                                                    Yeah, I meant a non-automated one. Like saying it was a false positive.

                                                                                    Maybe we won’t have one like last time with the rules limit, since that change affected all ad blockers. But I think we should be careful when people cry wolf without even waiting to see if it was deliberate on Google’s side.

                                                                                    Last time there was a shitload of “Google is going to ban ad blockers” articles, and it was more nuanced than that. But I don’t know whether Google increased the rules limit or whatever as a PR move or in good faith.

                                                                                    1. 14

                                                                                      The reason why people love grabbing pitchforks about this is that Google never gives a “real” explanation, or any acknowledgement whatsoever that an actual human has even seen the issue. The only exception being that sometimes a tweet or social media post about a specific case gets enough attention for an actual Google employee with some authority to notice it.

                                                                                      1. 6

                                                                                        Yeah, I meant a non-automated one. Like saying it was a false positive.

                                                                                        How/where can you get this non-automated response?

                                                                                        Maybe you can configure your mail server to require “I am not a robot” puzzle solving so you are sure an actual human at Google sent the mail?!

                                                                                        1. 0

                                                                                          Maybe you can configure your mail server to require “I am not a robot” puzzle solving so you are sure an actual human at Google sent the mail?!

                                                                                          Are you mocking me? That’s not how email servers work.

                                                                                          How/where can you get this non-automated response?

                                                                                          A bug report maybe? I have no idea if it’s even possible to contact a human at Google but I would hope there’s a way to appeal when your extension is rejected.

                                                                                          1. 16

                                                                                            That’s the point: There’s no way to contact Google except by raising a huge fuss on social media, or happening to know someone on the inside. That’s the way they’ve set it up.

                                                                                            There’s no such thing as a second chance for a company that won’t talk to you.

                                                                                            ETA: I just rechecked the thread after posting this, and 30 minutes ago, there was an update: Google has approved the extension because someone at Google saw the fuss on social media. See what I mean?

                                                                                            1. 4

                                                                                              Are you mocking me? That’s not how email servers work.

                                                                                              No, not you, but Google. It was my cynical take on this Kafkaesque situation.

                                                                                    1. 8

                                                                                      I don’t like DRM, but I don’t like this extremely dramatic doomsayer tone about DRM either. We’ve had DRM in general purpose computers for what feels like ages now, and nothing truly apocalyptic has happened. General purpose computing still exists. You still run free software. Millions of people still get their movies from The Pirate Bay. YouTube/Twitch/etc have not even considered using DRM to force people to watch ads together with the content. (They’re not even fighting youtube_dl really.) No one has tried to use EME for non-video content.

                                                                                      [Firefox adding EME support] did absolutely nothing to stop them from being steamrolled by Chrome’s growing popularity

                                                                                      uhh, how can we know that? We don’t exactly have an alternate reality where Firefox said no. We don’t know how many users would’ve quit Firefox specifically because it didn’t play Netflix.

                                                                                      1. 9

                                                                                        Adding EME to Firefox made more content publishers choose to enable EME since all (major) browsers supported it. Had Firefox not supported it, that decision would not have been so easy…

                                                                                        1. 8

                                                                                          Netflix would not have backed down. They couldn’t, because they were competing for studio contracts with systems that relied on native apps (like iTunes, and their own offerings on Android and iOS), systems that relied on plug-ins (like their browser-based player used to), and systems that relied on dedicated hardware (Blu-ray, cable TV packages, and PlayStation). They could not possibly negotiate for a DRM-free contract when all of their competitors had DRM.

                                                                                          1. 3

                                                                                            My comment was specifically not about Netflix, but about smaller players… specifically taxpayer-funded public TV in at least one country in Europe where that’s a thing. There was a brief and happy time between Flash (later Silverlight) and EME where you could just point your browser at the site and watch the videos, even live TV! No plugin, no DRM. Of course, as soon as EME became available, it was enabled. Hello, infinite spinner not loading the video! :)

                                                                                          2. 5

                                                                                            And the pressure on Firefox would not have been there, had the W3C not betrayed web users.

                                                                                            1. 5

                                                                                              True. Probably, maybe. The W3C is corrupt. See e.g. https://ar.al/notes/we-didnt-lose-control-it-was-stolen/, or the story about the EFF’s withdrawal from the W3C at https://www.eff.org/deeplinks/2017/09/open-letter-w3c-director-ceo-team-and-membership.

                                                                                        1. 5

                                                                                          This article hits many of my concerns, my biggest being that half-done solutions that become widely used end up becoming permanent. Part of the push to DoH is “Getting people to agree to encrypted DNS is hard!”, but it ignores the very real possibility that an incomplete/partial DoH implementation and strategy can very well end up the same way (more so because it eschews consensus in favor of trying to force it down our throats).

                                                                                          I run my own DNS server on-site for caching and local, in-house dynamic DNS. If Mozilla proceeds with their experiment, I’ll probably only figure it out after wondering why resolution got slower for external sites and stopped working for internal names.

                                                                                          1. 2

                                                                                            Following the write-up from Cambridge, I’ve added a zone file, served only to internal clients, that points to 127.0.0.1 for A and ::1 for AAAA and effectively blocks use-application-dns.net.

                                                                                            If you use BIND, you can use this tutorial. Be mindful of the UTF-16 garbage that comes along when you copy/paste the type master; line.

                                                                                            I’d like to support DNS over TLS and DNS over HTTPS using Cambridge’s doh101 server. But I don’t have the time atm and Firefox’s chicanery doesn’t help.

                                                                                            1. 2

                                                                                              Actually according to the documentation[0] I don’t think routing use-application-dns.net to localhost will work as intended.

                                                                                              The way I read it, you need to define use-application-dns.net but return no A/AAAA records.

                                                                                              0: https://support.mozilla.org/en-US/kb/canary-domain-use-application-dnsnet
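
                                                                                              If that reading is right, a minimal BIND sketch is to serve the canary domain from an empty zone, so that A/AAAA queries come back as NOERROR with no answer. This assumes Debian’s bind9 package, which ships an empty zone file at /etc/bind/db.empty; any zone file containing only SOA and NS records should behave the same way:

                                                                                                  // named.conf snippet (sketch): answer the DoH canary domain
                                                                                                  // authoritatively, from a zone that contains no A/AAAA records at all.
                                                                                                  zone "use-application-dns.net" {
                                                                                                      type master;
                                                                                                      file "/etc/bind/db.empty";  // SOA + NS only
                                                                                                  };

                                                                                              After an rndc reload, lookups for use-application-dns.net should then return an empty answer instead of an address. I haven’t verified this against Firefox myself, so treat it as a sketch rather than a tested recipe.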

                                                                                              1. 3

                                                                                                For Unbound:

                                                                                                # disable DoH
                                                                                                # See: https://use-application-dns.net/
                                                                                                # See: https://support.mozilla.org/en-US/kb/configuring-networks-disable-dns-over-https
                                                                                                local-zone: use-application-dns.net always_nxdomain
                                                                                                
                                                                                                1. 2

                                                                                                  The procedure I outlined above results in a SERVFAIL from both gateway and internal clients:

                                                                                                  (cpython37) InvincibleReason:~$ nslookup use-application-dns.net
                                                                                                  Server:		192.168.1.1
                                                                                                  Address:	192.168.1.1#53
                                                                                                  
                                                                                                  ** server can't find use-application-dns.net: SERVFAIL
                                                                                                  
                                                                                                  (cpython37) InvincibleReason:~$
                                                                                                  

                                                                                                  Perhaps it’s only working by accident. I’m not going to stake my reputation on my knowledge of BIND, only on the effective result: it preserves my split-horizon DNS.
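
                                                                                                  For anyone poking at their own setup, it’s easy to see exactly what the resolver hands back for the canary domain; the resolver address below is just the one from the nslookup output above. Mozilla’s documentation describes NXDOMAIN or an empty answer as the signal, so checking the status line in the header tells you which case you’re actually in:

                                                                                                      # Query the local resolver directly and read the status field in the header
                                                                                                      # (NXDOMAIN, NOERROR with no answer section, or SERVFAIL).
                                                                                                      dig @192.168.1.1 use-application-dns.net A
                                                                                                      dig @192.168.1.1 use-application-dns.net AAAA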

                                                                                                  1. 2

                                                                                                    YAY! glad it works!

                                                                                            1. 11

                                                                                              I really don’t want to spend my free time tracking down how the latest kernel pulls in additional functionality from systemd that promptly breaks stuff that hasn’t changed in a decade, or how needing an openssl update ends up in a cascading infinitely expanding vortex of doom that desperately wants to be the first software-defined black hole and demonstrates this by requiring all the packages on my system to be upgraded to versions that haven’t been tested with my application.

                                                                                              I find it impossible to continue reading after this. Nobody is forced to run Gentoo or Arch Linux on a production server, or whatever the hipster distribution of the day is. There are CentOS and Debian when some years of stability are required, which is more than any of the BSDs offer.

                                                                                              1. 3

                                                                                                Well, the rest also mentions apt-hell with Debian and package upgrading.

                                                                                                Can you elaborate on the last sentence?

                                                                                                1. 10

                                                                                                  Well, the rest also mentions apt-hell with Debian and package upgrading.

                                                                                                  I read that section now… it seems to imply you are forced to update Debian every year to the latest version otherwise you don’t get security updates. Does the author even know Debian? apt-hell? Details are missing. I’m sure you can get into all kinds of trouble when you fiddle with (non-official) repositories and/or try to mix and match packages from different releases. To attempt this in production is kinda silly. Nobody does that, I hope :-P

                                                                                                  Can you elaborate on the last sentence?

                                                                                                  I’m not aware of any BSD offering 10-year (security) support for a released version; I’m sure OpenBSD does not, for good reason, mind you. It is not fair to claim, as the poster implies, that updates need to be installed “all the time” or that they will result in destroying your system or ending up in “apt-hell”. Also, I’m sure BSD updates can go wrong occasionally as well!

                                                                                                  I’m happy the author is not maintaining my servers on whatever OS…

                                                                                                  1. 18

                                                                                                    I read that section now… it seems to imply you are forced to update Debian every year to the latest version otherwise you don’t get security updates.

                                                                                                    We have many thousands of Debian hosts, and the cadence of reimaging older ones as they EOL is painful but, IMO, necessary. We’ve just about wrapped up getting rid of Squeeze, and some Wheezy hosts still run some critical shit. Jessie’s EOL is coming soon, and that one is going to hurt and require all hands on deck.

                                                                                                    Maybe CVEs still get patched on Wheezy, but I think the pain of upgrading will come sooner or later (if not for security updates, then for performance, stability, features, etc.).

                                                                                                    As an ops team it’s better to tackle upgrades head-on than to one day realize how fucked you are, when you’re forced to upgrade but you’ve never had practice at it, and then you’re supremely fucked.

                                                                                                    And, yes, every time I discover that systemd is doing a new weird thing, like overwriting pam/limits.d with its own notion of limits, I get a bit of acid reflux, but it’s par for the course now, apparently.
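
                                                                                                    For what it’s worth, as I understand it the limits in /etc/security/limits.d only apply to PAM sessions, so services started by systemd take their limits from the unit (or the system-wide defaults) instead. A hedged sketch of the drop-in that usually ends up being needed, with a made-up service name:

                                                                                                        # /etc/systemd/system/someservice.service.d/limits.conf  (hypothetical path)
                                                                                                        [Service]
                                                                                                        LimitNOFILE=65536

                                                                                                    followed by a systemctl daemon-reload and a restart of the service.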

                                                                                                    1. 3

                                                                                                      This is a great comment! Thanks for a real-world story about Debian ops!

                                                                                                      1. 5

                                                                                                        I have more stories if you’re interested.

                                                                                                        1. 3

                                                                                                          yes please. I think it’s extremely interesting to compare with other folks’ experiences.

                                                                                                          1. 7

                                                                                                            So, here’s one that I’m primarily guilty for.

                                                                                                            I wasn’t used to working at a Debian shop, and the existing tooling when I joined was written as Debian packages. That means that to deploy anything (a Go binary, e.g. Prometheus, or a Python Flask REST server), you’d need to write a Debian package for it, with all the goodness of pbuilder, debhelper, etc.

                                                                                                            Now, I didn’t like that - and, while I won’t pretend I was instrumental in getting rid of it, I preferred to deploy things quicker, without needing to learn the ins and outs of Debian packaging. In fact, the worst manifestation of my hubris is in an open source project where I actually prefer to create an RPM and then use alien to convert it to a deb, rather than natively package a .deb file (https://github.com/sevagh/goat/blob/master/Dockerfile.build#L27) - that’s how much I’ve maneuvered to avoid learning Debian packaging.

                                                                                                            After writing lots of Ansible deployment scripts for code, binaries, Python Flask apps with virtualenvs, etc., I’ve come to understand the doomsday warnings of the Debian packaging diehards.

                                                                                                            1. dpkg -S lets you find out what files belong to a package. Without that, there’s a lot of “hey, who does /etc/stupidshit.yml belong to?” all the time. The “fix” of putting {% managed by ansible %} on top is a start, I guess.
                                                                                                            2. Debian packages clean up after themselves. You can’t undo an Ansible playbook; you need to write an inverse playbook. Doing apt-get remove horrendous-diarrhea-thing will remove all of the diarrhea.
                                                                                                            3. Doing upgrades is much easier. I’ve needed to write lots of duplicated Ansible code to do things like stat: /path/to/binary, command: /path/to/binary --version, register: binary_version, get_url: url/to/new/binary when: {{ binary_version }} < {{ desired_version }}. With a Debian package, you just fucking install it and it does the right thing.

                                                                                                            The best of both worlds is to write most packages as Debian packages, and then use Ansible with the apt: module to do upgrades, etc. I think I did more harm than good by going too far down the Ansible path.
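
                                                                                                            A rough sketch of that split, with a made-up package name and version: the .deb owns the files, the maintainer scripts, and the removal logic, while Ansible only decides which version should be installed:

                                                                                                                # Install or upgrade an internally packaged service via the apt module.
                                                                                                                # Package name and version are hypothetical.
                                                                                                                - name: Roll out internal Flask service
                                                                                                                  apt:
                                                                                                                    name: example-flask-app=1.4.2-1
                                                                                                                    state: present
                                                                                                                    update_cache: yes

                                                                                                            Downgrades, file ownership queries (dpkg -S), and clean removal then come from dpkg for free instead of hand-written playbook steps.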

                                                                                                            1. 1

                                                                                                              Yeah, this is exactly my experience. Creating Debian packages correctly is very complicated. Making RPM packages is quite easy, as there’s extensive documentation on packaging software written in various languages, from PHP to Go. On Debian there is basically no documentation, except for packaging software written in C that is no more complicated than hello_world.c. And there are 20 ways of doing everything; I still don’t know what the “right” way is to build packages in a manner similar to e.g. mock on CentOS/Fedora. Aptly seems to work somewhat, but I haven’t managed to get it working on Buster yet… and of course it still doesn’t do “scratch” builds in a clean “mock”-style environment. All the “solutions” for Debian I’ve found so far are extremely complicated; I have no idea where to start…

                                                                                                              1. 1

                                                                                                                FreeBSD’s ports system creates packages via pkg(8), which has a really simple format. I have lost many months of my life maintaining Debian packages, and pkg is in most ways superior to .deb. My path to becoming a FreeBSD committer was submitting new and updated packages; the acceptance rate, and the help in sorting out my contributions, made it so much more pleasurable than the torturous process I underwent for Debian packages. Obviously everybody’s experience is different, and I’m sure there are those who have been burned by *BSD ports zealots too.

                                                                                                                Anyway, it’s great to see other people who also feel that 50% of sysadmin work could be alleviated by better use of packages & containers. If you’re interested in pkg, https://hackmd.io/@dch/HkwIhv6x7 has the notes from a talk I gave a while back.
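
                                                                                                                As a small illustration (file and package names are made up), the pkg(8) equivalents of the dpkg/apt tasks discussed further up the thread look roughly like this:

                                                                                                                    pkg which /usr/local/etc/example.conf   # which package owns this file? (cf. dpkg -S)
                                                                                                                    pkg info -l example-package             # list the files a package installed
                                                                                                                    pkg delete example-package              # remove the package and its files
                                                                                                                    pkg upgrade example-package             # upgrade just this one package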

                                                                                                2. 1

                                                                                                  I’ve been using the same apps on Ubuntu for years. They occasionally do dumb things with the interface, package manager, etc. Not much to manage, though. Mostly seamless, just using icons, search, and the package manager.

                                                                                                1. 9

                                                                                                  Would love to see some more info about build quality, battery life, touchpad performance, how many nits the display can deliver, and so on.

                                                                                                  A friend bought a 2015 model (I believe) and he was not happy with the overall build quality. But I had the chance to have the newer InfinityBook model in my hands for a short moment and I have to say that it felt much better (build quality-wise).

                                                                                                  Glad to see more Linux-first devices. Tuxedo seems to be a smaller German manufacturer. Is this CLEVO hardware? Do they support fwupd?

                                                                                                  1. 3
                                                                                                    1. 2

                                                                                                      Thanks for the feedback. Since I got quite a few hardware-detail-related questions, I will write a follow-up blog post covering those. I’ve also approached the vendor to see whether there are more details that can be covered.

                                                                                                      1. 1

                                                                                                        Definitely interested in a follow up on this.

                                                                                                      2. 2

                                                                                                        A colleague of mine had a Tuxedo notebook, but the thing looked more Chinese than German. (I don’t know which model it was, though.)