Threads for jornane

  1. 2

    Has anyone played with ODoH? Relying on Cloudflare for the recursive resolving but being able to be anonymous to it sounds like a pretty great compromise between performance and anonymity. I see there’s a server up at https://github.com/cloudflare/odoh-server-go

    It’s on my todo list to investigate but I’m wondering if any of you nerds beat me to it.

    1. 2

      From what I understand, ODoH is basically DoH behind a TCP proxy.

      The problem with ODoH is that, from the user’s perspective, it is indistinguishable from DoH. So I don’t really see the difference between Cloudflare promising “we don’t log your data, honest!” and “we’re running a TCP proxy in front, honest!”; in both cases you trust them with your data and can’t verify that they’re being honest.

      Running that TCP proxy yourself proves that there is one, but it only provides privacy if many people use it. If you only run it on localhost, you’re the only user, and it’s no different from contacting the Cloudflare DoH service directly.

      1. 1

        It’s definitely more than a proxy in that the layered encryption means both the proxy and Cloudflare would have to collude to expose my data. Neither party alone has all the information. That’s a neat way of handling this problem.

        And yeah running my own proxy would have that problem, that’s worth pointing out. I’d probably run it in the cloud though, and not locally. And I’d probably get others to use it.
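        A toy sketch of that split-knowledge property (the real protocol seals the query with HPKE to the resolver’s public key; the XOR “cipher” below is only a stand-in to show who can read what):

```python
import hashlib

def toy_seal(key: bytes, msg: bytes) -> bytes:
    # Toy XOR "encryption" standing in for HPKE -- NOT secure.
    stream = hashlib.sha256(key).digest() * (len(msg) // 32 + 1)
    return bytes(a ^ b for a, b in zip(msg, stream))

toy_open = toy_seal  # XOR is its own inverse

target_key = b"resolver-public-key-material"
query = b"example.com. IN A?"

# Client: seal the query so only the target resolver can read it.
sealed = toy_seal(target_key, query)

# Proxy: sees the client's IP plus an opaque blob, and forwards it.
forwarded = sealed

# Resolver: sees the query but only the proxy's IP, not the client's.
opened = toy_open(target_key, forwarded)
```

        Only both parties together can link the client’s IP to the query, which is the collusion requirement described above.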

    1. 44

      Tabs have the problem that you can’t determine the width of a line, which makes auto-formatted code look weird when viewed with a different width. And configuring them everywhere (editor, terminal emulator, various web services) to display as a reasonable number of spaces is tedious and often impossible.

      1. 24

        I agree with you, tabs introduce issues up and down the pipeline. On GitHub alone:

        • diffing
        • whether settings are per person or per repo
        • yaml or python, where whitespace is significant
        • what if you want things to line up, like comments, or a series of statements?
        • combinations of the above interacting

        If you’re turning this into, say, epub or pdf, would you expect readers and viewer apps to be able to adjust this?

        I fixed up some old code this week, in a book; tabs were mostly 8 spaces, but, well, varied chapter by chapter. Instead of leaving ambiguity, mystery, puzzling, and headaches for future editors and readers to trip over, I made them spaces instead.

        1. 8

          I don’t get the point about yaml and python. You indent with tabs, one tab per level, that’s it. What problems do you see?

          1. 4

            In the Python REPL, tabs look ugly. The first one renders as 4 columns (because the “>>> ” prompt already takes up 4 of the 8-column tab stop), the rest render as 8 columns. So you end up with this:

            >>> for n in range(20):
            ...     if n % 2 == 1:
            ...             print(n*n)
            
            1. 9

              When I’m in the Python REPL, I only ever use a single space. It saves so much mashing of the spacebar, and maintaining readability is never an issue, as I’m likely just doing some debugging:

              >>> for n in range(20):
              ...  if n % 2 == 1:
              ...   print(n*n)
              
              1. 3

                True, but this shows that tabs don’t work well everywhere. Spaces do.

                1. 1

                  Unless you use a proportional font.

                  1. 2

                    Even with a proportional font, all spaces have the same width.

            2. 3
              def a():
              	x
                      y
              

              The two lines look the same, but they’re not the same to the Python interpreter, even though you could have used just spaces or just tabs.

              1. 17

                Don’t mix tabs and spaces for indentation, especially not for a language where indentation matters. Your code snippet does not work in Python 3:

                TabError: inconsistent use of tabs and spaces in indentation
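
                The failure is easy to reproduce with `compile()`; the string below recreates the snippet above (a tab-indented line followed by a space-indented line at the “same” visual depth):

```python
# The function body mixes a tab-indented line with a space-indented
# line, which Python 3 rejects at compile time.
src = "def a():\n\tx = 1\n        y = 2\n"
try:
    compile(src, "<snippet>", "exec")
    error = None
except TabError as exc:
    error = exc
print(error)  # inconsistent use of tabs and spaces in indentation
```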

                1. 1

                  That was my point.

                  1. 3

                    Your point is don’t mix tabs and spaces? Nobody proposed that. The comment you responded to literally states:

                    You indent with tabs, one tab per level, that’s it.

                    Or is your point don’t use tabs because if you mix in spaces it doesn’t work?
                    Then my answer is don’t use spaces, because if you mix in tabs it doesn’t work.

            3. 8

              what if you want things to line up, like comments, or a series of statements?

              https://nickgravgaard.com/elastic-tabstops/

              1. 2

                I appreciate that this is still surfaced, and absolutely adore it. I’d have been swayed by “tabs for indenting, spaces for alignment, for the sake of accessibility” if not for Lisp, which typically includes indents of a regular tab-width off of an existing (arbitrary) alignment, such that indentation levels don’t consistently align with multiples of any regular tab stops (e.g. the spaces preceding indentation level 3 might vary from block to block depending on the context, and could even be at an odd offset). Elastic tab stops seem like the only approach that could cater to this quirk, though I haven’t tried the demo with that context in mind.

                I also lament the lack of traction in implementations for Emacs, though it’s heartwarming to see the implementations that are featured. Widespread editor support may be the least of the hurdles to adoption, which feels like a third-party candidate in a two-party system. Does .editorconfig include elastics as an option? I’m not sure exactly how much work adding that option would entail, but it might be a great way to contribute to the preservation of this idea without the skills necessary to actually implement support in an editor.

              2. 9

                what if you want things to line up

                Easy. Don’t.

                If you want to talk about diffing issues, then look at the diffs in half the Haskell community: a new value being longer requires a whole block to shift, which means either a bunch of manual manipulation or running a tool to re-parse and reformat your code, just because you felt like things had to line up.

                1. 3

                  what if you want things to line up, like comments, or a series of statements?

                  Then you put spaces after your tabs. https://intellindent.info/seriously/

                2. 2

                  I use tabs and autoformatters. I don’t think my code looks weird with any width between 2 and 8. What kind of weirdness do you refer to? As for configuring: most developers have a dotfiles repo and manicure their setup there; why would setting a tab width there be more tedious than what most people do already anyway?

                  1. 5

                    Let’s say that you have the maximum line length set to 24 columns (just to make the example clear). You write code like this:

                    if True:
                        print("abcdefg")
                        if False:
                            print("xyz")
                    

                    With the tab width set to 4 columns, your autoformatter will leave all lines alone. However, if someone has the tab width set to 8, the fourth line will overreach the limit. If they’re also using the same formatter, it will break up the fourth line. Then you’ll wonder why it’s broken up, even though it’s the same length as the second line, which wasn’t broken up. And your formatter might join it up again, which will create endless conflicts.
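
                    The arithmetic is easy to check with Python’s `str.expandtabs`, assuming the fourth line above is indented with two tabs and the limit is 24 columns:

```python
# The fourth line from the example: two tabs, then a 12-character call.
line = '\t\tprint("xyz")'

print(len(line.expandtabs(4)))  # 20: fits the 24-column limit
print(len(line.expandtabs(8)))  # 28: exceeds it, so the formatter wraps
```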

                    1. 4

                      Optimal line reading length is around 60 chars per line, not 60 characters including all leading whitespace. Setting bounds based on character from column 0 is arbitrary; the only goal should be limiting the number of characters per line counted from the first non-whitespace character (and even that within reason, because let’s be real, long strings like URLs never fit).

                      1. 3

                        Setting bounds based on character from column 0 is arbitrary

                        Not if you print the code in mediums of limited width. A4 paper, PDF, and web pages viewed from a phone come to mind. For many of those, a hard limit of 80 columns from column 0 is a pretty good choice.

                        1. 1

                          That is a fairer point; I was referring to looking at code in an editor, since we’ve been discussing mediums where users can easily adjust the tab width, which is more on topic than static mediums. Web pages are the weird one: it should technically be just as easy to configure the width there, but browsers have made it obnoxious or impossible to set our preferred width instead of 8 (I commented about this in the Prettier thread, as people seem so riled up about it looking bad on GitHub instead of seeing the bigger picture that GitHub isn’t where all source code lives).

                          1. 5

                            Note that my favourite editor is the left half of my 13-inch laptop screen…

                      2. 1

                        I never really understood the need for a maximum length when writing software. Sure it makes sense to consider maximum line length when writing for a book or a PDF, but then it’s not about programming but about typesetting; you also don’t care about the programming font unless you’re writing to publish.

                        If you really want to set a maximum line length, I’d recommend to have a maximum line length excluding the indentation, so that when you have to indent a block deeper or shallower, you don’t need to recalculate where the code breaks.

                        But really don’t use a formatter to force both upper and lower limits to line lengths; sometimes it makes sense to use long lines and sometimes it makes sense to use short lines.

                        1. 5

                          Maximum line length makes sense because code is read more often than it’s written. In terms of readability, you’re probably right about maximum line length excluding indentation. But on the other hand, one of the benefits of maximum line length is being able to put multiple text buffers side-by-side on a normal monitor. Perhaps the very smart thing would be a maximum of 60 chars, excluding indentation, with a max of 110 chars including indentation. Of course, you have to treat tabs as a fixed, known width to do that.
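
                          A sketch of that combined rule as a line checker, using the numbers proposed above (and treating tabs as the fixed, known width this requires):

```python
def line_ok(line: str, tab_width: int = 4,
            max_content: int = 60, max_total: int = 110) -> bool:
    # Check both limits: content length excluding indentation,
    # and total rendered length including it.
    expanded = line.expandtabs(tab_width)
    content = len(expanded.lstrip(" "))
    return content <= max_content and len(expanded) <= max_total
```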

                          1. 3

                            I never really understood the need for a maximum length when writing software.

                            There are a bunch of editing tasks for which I want to view 2 or 3 different pieces of code side by side. I can only fit so many editors side by side at a font size that’s still readable.

                            • looking at caller and callee
                            • 3 way diff views
                            • old and new versions of the same code
                            1. 3

                              Personally, I hate manually breaking up lines when they get too long to read, so that’s what an autoformatter is for. Obviously the maximum readable length differs, but to do it automatically, one has to pick some arbitrary limit.

                              1. 1

                                Sure, but there’s a difference between breaking lines when they get too long, and putting them together again when they are too short.

                                When I use black to format Python code, it always annoys me that I cannot make lines shorter than the hard limit. I don’t really care that I can’t make them longer than some arbitrary limit. Sure, the limit is configurable, but it’s per-file, not per-line.

                                If the problem you have is “where should I split this 120-character one-liner that’s indented with 10 tabs”, then tabs aren’t your problem.

                      1. 0

                        How do you make SVG graphics like that?

                        1. 2

                          Not certain what they used, but I think mermaid and plantuml can both render sequence diagrams to svg. Looks like they used aasvg, which seems like more work unless you’re really a fan of doing ascii art: https://github.com/ietf-wg-ohai/oblivious-http/blob/69b433f73ece9306066b751ef4a445a56fd2517f/draft-ietf-ohai-ohttp.md?plain=1#L153

                        1. 5

                          The author frames this as a problem with WordPress (the article concludes with “WordPress is still vulnerable with the latest version, 6.0.”).

                          However, I think that this is a problem with Certificate Transparency:

                          A very common situation is that a user will first configure a web host with a certificate, often using automated tools like Certbot, and then start the installation of a web application like WordPress. The certificate will show up shortly after in one of the public Certificate Transparency logs; this process takes only a few minutes.

                          That makes sense; if you don’t configure the web host with a certificate, you can’t connect to it. Of course you could use HTTP for the installation, but then you’d open a whole different can of security problems. Personally I set up hosts with a self-signed certificate first, and then opt for a publicly trusted certificate when I’m done, but I can see how this could be a bit too advanced for most hobbyists.

                          Let’s Encrypt gives you free certificates, but if you use it, it will announce your hostname to the world. Hey, here is a new web server, maybe you can compromise it before the admin finishes setting it up. The solution would be to make Certificate Transparency more limited; it could require a token in DNS before you can query it. Putting the responsibility for this security problem on WordPress is the world upside down.

                          In the meantime, the only way for an admin to be safe is to use wildcard certificates. But those require DNS challenges, which are harder to set up than HTTP challenges, so we will have to live with this problem for a while.

                          1. 39

                            I don’t buy this at all. For one thing, it’s really very easy to solve this within the parameters I think you’re implying: just put WP on an unpredictable path, or gate it behind HTTP basic auth.

                            But more generally I think any security model based on hosting a public server that people can’t find is just broken. Domain names can be discovered in many ways, of which certificate transparency logs are just one newer and particularly easy example. It’s a race against time at best, and an unwinnable race against time against a determined attacker.

                            1. 2

                              I don’t buy this at all. For one thing, it’s really very easy to solve this within the parameters I think you’re implying: just put WP on an unpredictable path, or gate it behind HTTP basic auth.

                              I don’t see how any of this works without WordPress making installs more complicated. Some setup either happens before you upload files, or you get cloud WordPress involved.

                              1. 11

                                This isn’t complicated. If you install it from source, there should be no admin password until you set it in the config (along with the DB connection info and a bunch of other stuff you already have to set). If a hosting company installs it for you, they should assign a random password and give it to you out of band.

                                1. 3

                                  IIRC, the traditional WordPress install process involves you configuring the DB connection info in the installer, and the installer writing the config for you; so in theory, you don’t currently need to make any manual config edits.

                                  1. 1

                                    Presumably they don’t want you to put the admin password in a file … But the point about connection information for the database is completely valid.

                                    1. 1

                                      Right, I was thinking it’s just an initial password you’d have to change during the web setup process when you set up the real admin account. (Another improvement I’d make is forcing an admin user to be created rather than having an “admin” account exist at all.)

                                  2. 4

                                    making installs more complicated

                                    Even with a key, opening a locked door is harder than opening an unlocked one; that’s not a great argument for leaving your doors unlocked.

                                    Wordpress could require you to set an “initial setup password” at install time, or require that initial setup be done on the loopback address (you can ssh-forward a port to connect), or any number of other methods.

                                    1. 4

                                      Wordpress could require you to set an “initial setup password” at install time, or require that initial setup be done on the loopback address (you can ssh-forward a port to connect), or any number of other methods.

                                      And yet, all of these things are significant barriers to install, something WordPress has optimized over its 20-ish-year history. Should they do something? Yes. Are they likely to do something that complicates the install process much more than they already have? Pretty unlikely.

                                      So, let’s rank the mitigations I can think of off the top of my head, with difficulty between 1 (easy) and 5 (hard) for J. Basic User:

                                      1. Have the user set up HTTP basic auth before install: 5
                                      2. Require the user to edit a file, before uploading it to the website, that contains an install password: 3
                                      3. Move the install to a random directory for setup, and then move it back / symlink it / whatever: 4
                                      4. Require registering the domain with WordPress.com and somehow validating it when you login via WordPress.com: 2
                                      5. Recommend people use web hosts that have 1-click WordPress setups, and make them deal with it: 1
                                      6. Do nothing: 1

                                      Not sure of the install statistics, but I’d guess that 1-click setups are fairly popular, so the “experts” offering them should be able to take on most of the burden. Then maybe the edit-a-file-before-uploading option is good enough for the bulk of the remaining users who are going so far as to set it up themselves completely… Seems reasonable to me.

                                2. 17

                                  The actual problem is:

                                  These installers usually come without any authentication. Whoever accesses a web host with a WordPress installer can finish the installation.

                                  There was a discussion here last week that went into “secure by default.” Security is improved by not having to turn security features on in the first place. Usually that’s because users may forget or put off turning them on. In this case it’s because of a race condition.

                                  A program should not ship with fixed, well-known credentials to administer it. This has caused so many problems in the past, such as millions of easily-hackable MongoDB installations. Raspberry Pi just updated their installers to stop creating a default “pi” admin account.

                                  1. 2

                                    I think the problem here is that you unzip the downloaded release onto your shared hosting account, but then you need “somehow” to supply it with unique credentials just for you. Since it’s all about ease of use, editing a file remotely might be too much to ask (especially if it requires you to get the syntax right), so this is not really an option.

                                    It doesn’t really have fixed credentials BTW; it just presents you with an installation wizard which sets up the admin user for you with a password of your own choosing.

                                    1. 2

                                      If you can unzip the files, remote editing seems like a pretty low bar. Maybe they can make a “config generator” that produces a second zip that sets a password (which is also shown on the website). If you want to be really fancy, just inject a random password on the fly into the main zip. (Computationally easy, but it removes the ability to use a basic HTTP CDN.)

                                      But in my mind just unzipping an archive into your webserver shouldn’t allow remote code execution. That seems like the wrong tradeoff to me. The unzipped files should be “unarmed” by default.
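
                                      A sketch of the inject-on-the-fly idea with Python’s `zipfile` (the archive contents and file names here are made up for illustration):

```python
import io
import secrets
import zipfile

# Generate a per-download credential.
password = secrets.token_urlsafe(16)

# Stand-in for the release archive a user would download.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("wordpress/index.php", "<?php /* ... */ ?>")

# Append the credential without rebuilding the whole archive.
with zipfile.ZipFile(buf, "a") as z:
    z.writestr("wordpress/install-password.txt", password)
```

                                      As noted, this costs a little compute per download and means the archive can no longer be served byte-identical from a plain HTTP CDN.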

                                      1. 1

                                        I mean, you could have a setting “mode = uninitialized” in the zip, then on startup, WP sees that, and dumps a password into startup-password.txt on the machine in a private location accessible to SFTP but not WWW, and then WP deletes that file the first time someone logs in. There are plenty of ways to do it. You just have to care about being the largest source of exploits on the web today and not shrug it off.

                                    2. 5

                                      How is it ever acceptable for it to start a publicly connectable instance without a password? It should just create a password on set-up and print it to the console or to a file; the hosting providers can then take that and, for example, email it to the customer or display it in their management console or whatever.
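
                                      A sketch of that setup step (the password length and the file name are arbitrary choices here, not anything WordPress actually does):

```python
import secrets
import tempfile
from pathlib import Path

# Generate a setup password at first start and write it to a file
# only the operator can read; the path and filename are made up.
password = secrets.token_urlsafe(16)
out = Path(tempfile.mkdtemp()) / "startup-password.txt"
out.write_text(password + "\n")
out.chmod(0o600)
print(f"initial admin password written to {out}")
```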

                                      1. 3

                                        Oh, it’s definitely a problem with WordPress, or more specifically:

                                        […] that an installation will usually be performed quickly and thus it should pose little risk […]

                                        Here is the mistake. Don’t put an open system on the public internet and expect it to be safe.

                                        Now how do you make it secure without making the setup complicated is the question, as stated many times below.

                                        1. 2

                                          The solution would be to make Certificate Transparency more limited; it could require a token in DNS before you can query it.

                                          Offering a way to turn off CT is like offering a backdoor for encryption. No matter how much you claim that only the “good guys” will be able to use it, sooner or later it is inevitable that the “bad guys” will be able to use it too.

                                          Meanwhile this is Wordpress’ problem. The security of the installer is entirely dependent on nobody else knowing there’s a Wordpress installer running on a given URL; that’s simply not workable as a security model, and honestly hasn’t been for years and years. CT is just the thing that’s making you aware of it, but if you think the “bad guys” don’t or won’t ever have other ways to scan newly-registered or newly-provisioned stuff, well, you’re just flat wrong. People are going to figure out ways to do this, and every time a vector opens up for it Wordpress will be broken.

                                          So this absolutely is Wordpress’ problem (and any other software that has the same “security” model). There are tons of ways to fix this — injecting a randomly-generated password into the download and making the user copy it from the page wouldn’t even be that hard! — and it’s time for WP to adopt one of them.

                                          1. 1

                                            The solution would be to make Certificate Transparency more limited; it could require a token in DNS before you can query it.

                                            So basically I’m translating this as:

                                            “it would require the DNS admin to perform operations before the insecure WordPress can be exploited”.

                                            How would this solve the real problem of WordPress being insecure by default?

                                            Putting the responsibility for this security problem on WordPress is the world upside down.

                                            It’s just 2 separate things:

                                            • discoverability of an insecure system (cert transparency is just one way to find them)
                                            • leaving a system vulnerable

                                            Fixing one won’t fix the other.

                                            1. 2

                                              discoverability of an insecure system (cert transparency is just one way to find them)

                                              It’s a very effective one, I would argue it’s even the most effective one by far. Other methods (continuous scanning) take a lot more effort and are a lot harder to do covertly. Especially if you want to hit the time window between uploading and setting the admin password, which will typically be well under an hour.

                                              So basically I’m translating this as:

                                              “it would require the DNS admin to perform operations before the insecure WordPress can be exploited”.

                                              DNS admins typically do not allow zone transfers to anyone (try running dig AXFR lobste.rs @ns1.dnsimple.com), yet crt.sh shows me that there was an l.lobste.rs at some point. Why should I be allowed to query this, and why is it not possible to opt out of it? Why do we still block zone transfers as a security measure (don’t show more data than you have to), yet we’re fine with bad actors subscribing to CT feeds that tell them which hostnames are new and possibly an easier target?

                                              Setting a token in your TXT record, such as the sha512 hash of a secret string, and requiring that secret string to be provided when querying CT logs for your domain, would improve the security of CT a lot.
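
                                              A sketch of that proposed scheme (the record name and secret are invented for illustration): the owner publishes only the hash, and the log checks presented secrets against it:

```python
import hashlib

secret = "my-ct-access-secret"  # known only to the domain owner
token = hashlib.sha512(secret.encode()).hexdigest()

# The hash is what would be published, e.g. in a TXT record such as
# _ct-token.example.com (record name is made up for illustration).

def authorized(presented: str, published: str) -> bool:
    # The log sees only the hash, never the secret itself.
    return hashlib.sha512(presented.encode()).hexdigest() == published
```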

                                              But making CT opt-in might be even better, as CT does not really offer much unless you’re one of the big ones. Most reports I’ve seen from people monitoring CT for their domain involve figuring out that their cloud provider switched CAs, such as recently when Cloudflare started signing using two different CAs (one wonders why people who use CT don’t use CAA records). But I have yet to see a small site use CT to find out about a rogue CA (not just bad practices, but actual exploitable security problems) and be able to do something about it.

                                              1. 6

                                                Making CT opt-in… doesn’t really work. The point, for better or worse, of CT as a security model is that all CAs are observable and their mistakes can be picked up in third-party audits. If there’s any mechanism for secret certificates to be issued, CT instantly loses most of its value.

                                                Also, CT is designed to provide an efficient means for third parties to check that the CT log services themselves are serving the same log to everyone. But that requires CT log services to serve the same log to everyone. If we accept the existence of entries that the server is not prepared to reveal to everyone, we’re also accepting a reduction in the level of confidence we can have in CT itself.

                                                Incidentally, most domains are discovered by passive DNS monitors at some point, and this data is generally available for sale (to security companies, but you can count on bad actors finding a way). How quickly that happens will depend on various factors, but it can be comparable to CT—so while it’s obvious that CT makes this kind of attack a lot easier, I don’t think it was ever safe. CT’s contribution might even be a positive one, if it ultimately leads to more effort invested in making the secure way easy.

                                                In any case, having lost the obscurity in a security-through-obscurity setup, I think it’s probably wrong to focus on trying to get the obscurity back.

                                          1. 16

                                            You can configure your web browser to not send the User-Agent HTTP header at all – it is not mandatory.

                                            Removing my user agent also removed my ability to visit Cloudflare-encumbered websites, so I’d say it IS mandatory on large parts of the internet.

                                            1. 10

                                              It also causes lobste.rs to return a 500 error, and Netflix to throw you to a help article about outdated browsers without letting you even try to use the player.

                                            1. 33

                                              A title describing the same problem from a different angle would be “The mess we’ve gotten ourselves into with single-page applications”.

                                              1. 6

                                                How about “The proliferation of JavaScript and our failure to prevent servers from acquiring vast stockpiles of such code”?

                                                1. 4

                                                  Can you elaborate? Classic SPAs don’t have this problem because all their functions are “client colored” (to borrow the terminology of the post).

                                                  1. 7

                                                    I guess the answer is that classic SPAs are good until you need some SEO, which is probably very common. Hence SSR. Although technically speaking, SPAs per se don’t need SSR (maybe for performance, but that shouldn’t be an issue if things were developed correctly by default, I’d say).

                                                    1. 15

                                                      I was thinking the same thing. The title could easily be “The mess spawned by organizing the web economy around a search monopoly”.

                                                      1. 9

                                                        IMO, this is the wrong intuition. Categorically, pages that need to be SEO-opitimized are those that are informational. You don’t need SEO for a desktop app, nor would you need that a web app because a web app is the same thing but distributed through the browser (but sandboxed, not making users require a download executables, and on a highly portable platform available on almost every OS and architecture). These two concepts are not the same thing despite both being delivered through the browser; you shouldn’t use a SPAs tech stack for a basic page because information category pages don’t require the same shared state management and usage of device feature APIs that an application might. I can use Wikipedia from a TUI browser because it’s 95% information. It was the exact same issue in the Flash days of not using the right tech and society has permanently lost some content from its internet archive.

                                                        So it’s not “when you need SEO”; SEO should be a requirement from the get-go, helping you choose between a static site and a dynamic, multi-page application where the server always did the rendering.

                                                        The problem is the tooling. Instead of having an intuition about the right tool for the job and stating “do not use this tool for your static content”, the NPM community builds tools that try to solve everything and hide the mountains of complexity that should have scared devs away from the complex solution and toward the simple one. It should be hard to build a complex pipeline like server-side rendering for a SPA. And that easy tooling is riddled with bugs and megabytes of node_modules, and may invite you to pile on more complexity with tech such as “cloud workers”, but people don’t find out until they are way too deep in the Kool-Aid. Many don’t seem to see this issue because influencers are pushing this stuff to get GitHub stars and have, ironically, taken all of the top ranks when you search for a solution (or people were asking the wrong questions without knowing it).

                                                      2. 3

                                                        Not the poster you’re responding to but it might be because SSR is a fairly natural leap from SPA-style apps. They might also be implying that it’s my fault, which would be nice, but unfortunately isn’t the case.

                                                    1. 10

                                                      How many connections does it make to Google while compiling/booting?

                                                      1. 11

                                                        It’s a shame Google can’t run open-source projects. Fuchsia looks like one of the more interesting operating systems but as long as Google has complete control over what goes in and no open governance it’s not something I’d be interested in contributing to.

                                                        1. 11

                                                          To be fair to Google - they’re doing work in the open that other companies would do privately. While they say they welcome contributions they’re not (AFAIK) pretending that the governance is anything it’s not. On their governance page, “Google steers the direction of Fuchsia and makes platform decisions related to Fuchsia” – honest if not the Platonic ideal of FOSS governance.

                                                          To put it another way - they’re not aiming for something like the Linux kernel. They know how to run that kind of project, I’m sure, but the trade-off would be to (potentially) sacrifice their product roadmap for a more egalitarian governance.

                                                          Given that they seem to have some product goals in mind, it’s not surprising or wrong for them to take the approach they’re taking so long as they’re honest about that. At a later date they may decide the goals for the project require a more inclusive model.

                                                          If the road to Hell is paved with good intentions, the road to disappointment is likely paved with the expectation that single-vendor initiatives like this will be structured altruistically.

                                                          1. 6

                                                            The governance model is pretty similar to Rust’s in terms of transparency: https://fuchsia.dev/fuchsia-src/contribute/governance/rfcs

                                                            Imperfect in that currently almost all development is done by Google employees, but that’s a known bug. But (to evolve the animal metaphors) there’s a chicken-and-egg issue here: without significant external contributions, it’s hard for external contributors to have a significant impact on major technical decisions.

                                                            This same issue exists for other OSes like Debian, FreeBSD, etc - it’s the major contributors that have the biggest decision making impact. Fuchsia has the disadvantage that it’s been bootstrapped by a company so most of the contributors, initially, work for a single company.

                                                            I’m optimistic that over time the diversity of contributors will improve to match that of other projects.

                                                            1. 4

                                                              A real shame indeed. Its design decisions seem very interesting.

                                                              1. 1

                                                                yeah, I’d bet the moment they have what they want it’ll be closed down, because this is ultimately the everything-owned-without-GPL OS for Google

                                                              2. 7

                                                                Probably zero. Or if you’re using 8.8.8.8 for your DNS probably less than Windows or macOS.

                                                                1. 5

                                                                  They all start like this, but at the end it will be another chrome.

                                                                  1. 5

                                                                    Co-developed with companies as diverse as Opera, Brave, Microsoft and Igalia, as well as many independent individuals? As a Fuchsia developer that’s a future I aspire to.

                                                                    1. 13

                                                                      Chrome, which refused to accept FreeBSD patches with a community willing to support them because of the maintenance burden relative to market share, yet, accepted Fuchsia patches passing the same maintenance burden on to the rest of the contributors, in spite of an even smaller market share? If I were an antitrust regulator looking at Google, their management of the Chromium project is one of the first places that I’d look. Good luck building an Android competitor if you’re not Google: you need Google to accept your patches upstream to be able to support the dominant web browser. Not, in my mind, a great example of Google running an inclusive open source project.

                                                                      1. 6

                                                                        It’s not just about whose labor goes into the project, but about who decides the project’s roadmap. That said, maybe it’s about time to get the capability-security community interested in forking Fuchsia for our own needs.

                                                                        1. 3

                                                                          You should be more worried about the “goma is required to build Chrome in under 5 hours” future, in my opinion.

                                                                          1. 0

                                                                            Keep aspiring on a Google salary. It would be good to disclose the conflict of interest, btw.

                                                                            1. 11

                                                                              I mentioned that I’m a Fuchsia developer. I’m not sure what my conflict of interest here is. I’m interested in promoting user freedom by working on open source software across the stack and have managed to find people to pay me to do that some of the time, though generally less than I would have made had I focused on monetary reward rather than the impact of my work.

                                                                      2. 5

                                                                        The website doesn’t have working CSS without allowing gstatic.com, so I’d guess at least one?

                                                                        1. 1

                                                                          /me clutches pearls

                                                                      1. 2

                                                                        This seemed very verbose - it could be a quarter of its current size with no loss of information.

                                                                        The big thing I took from this is that OCSP stapling is a de-facto reduction in certificate lifetime. The TLS server periodically (within hours or days) checks with a CA whether its certificate is still valid, and conveys that in signed form to the client. In this model, the existence of a multi-year certificate is not much more than legacy compatibility for clients that don’t understand the stapled response. Once those clients are gone, it even opens the door to much longer certificate lifetimes, so private keys can be retained longer because there’s already an affirmative process to handle any breach.
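
                                                                        A quick way to see that affirmative process in action (a sketch assuming the openssl CLI; example.com is a placeholder for a server that staples):

```shell
# Ask the server to staple its OCSP response during the TLS handshake,
# then print it; the thisUpdate/nextUpdate timestamps bound the short
# effective validity window (typically hours to days).
openssl s_client -connect example.com:443 -servername example.com -status \
  </dev/null 2>/dev/null | sed -n '/OCSP/,/^---/p'
```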

                                                                        What’s strange is that this is the outcome the author appears to want and laments that we don’t have, but his own description strongly suggests it’s a path we’re already on.

                                                                        1. 1

                                                                          The private key can remain the same when renewing a certificate; I use acme.sh, which seems to do this by default (it has the --always-force-new-domain-key flag to force a new key).

                                                                          What I think is interesting, though, is that by using TLSA records (the author calls this DANE), you can pin the public key. Revocation of the key is then done simply through DNS, so it has the same expiry mechanism as anything else, and at the same time prevents other keys from being used for your domain. This would allow us to greatly simplify how the CA ecosystem works (which today is an opaque blob consisting of public CAs, the CA/Browser Forum, CT, and the pinky swear from CAs to honor CAA records).
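
                                                                          To make that concrete, here’s a sketch of generating the payload for a DANE-EE (“3 1 1”) TLSA record with openssl; cert.pem and the domain are placeholders:

```shell
# SHA-256 over the certificate's public key (SubjectPublicKeyInfo),
# which is the matching data for a "3 1 1" TLSA record.
openssl x509 -in cert.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -hex

# Published in DNS it would look roughly like:
#   _443._tcp.example.com. IN TLSA 3 1 1 <the 64-hex-digit digest>
```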

                                                                          1. 2

                                                                            It’s a massive footgun though. Many pages went down because they restricted their keys incorrectly. Some had to be rescued by browser vendors ignoring their certificate pins. (https://www.smashingmagazine.com/be-afraid-of-public-key-pinning/ - I know it’s not exactly the same, but still a footgun)

                                                                            1. 2

                                                                              HPKP (HTTP Public Key Pinning) is indeed a footgun, because the mechanism that delivers the pins (HTTPS) is the same mechanism they are supposed to protect. This means that as soon as a key is pinned, you MUST be able to reproduce that key or the client will refuse to connect, and therefore refuse to update the pins; it’s a footgun because you can get into a deadlock.
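
                                                                              For concreteness, the pins arrived as an HTTP response header over the very connection they constrain; something like this (pin values are made-up placeholders):

```
Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
                 pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
                 max-age=5184000; includeSubDomains
```

                                                                              The second pin is for a backup key; lose both keys and every returning client is locked out until max-age expires.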

                                                                              Using TLSA is very different; the keys are in DNS, so replacing them is easier, and clients will not end up in a deadlock situation when you roll keys too quickly. As such, it’s not possible to shoot yourself in the foot the same way. You might still manage to pinch yourself in the foot by forgetting to update your TLSA records when you rotate your key, but this will only bite you for the duration of your own TTL after you remedy the situation.

                                                                        1. 22

                                                                          That guy splicing fiber in a completely demolished area is a friggin CHAMPION.

                                                                          1. 16
                                                                          1. 3

                                                                            I do this so often that I have a hotkey that pastes the following header:

                                                                            #!/usr/bin/env zsh
                                                                            set -euo pipefail
                                                                            HERE="$(dirname "$(realpath -s $0)")"
                                                                            cd "$HERE"
                                                                            
                                                                            1. 4

                                                                              FWIW

                                                                              1. When running bin/oil, set -euo pipefail is on by default. And when running bin/osh, you can also do shopt -s oil:basic which includes that plus some more error handling.
                                                                              2. It also has $_this_dir because I have the HERE thing in almost every shell script as well.
                                                                              • So you can do $_this_dir/mytool or cd $_this_dir.
                                                                              • The variables prefixed with _ are “special” globals that are automatically set by the interpreter

                                                                              Testing/feedback is appreciated! https://www.oilshell.org/

                                                                              1. 1

                                                                                I think you should quote $0, since the path may contain a space. I’d be surprised if shellcheck didn’t complain about this, although I haven’t tested.

                                                                                1. 1

                                                                                  No: it’s zsh, so unless the ~/.zshrc file sets the SH_WORD_SPLIT option, it won’t split on whitespace.

                                                                                  (If only deploying to systems with a modern env, then #!/usr/bin/env -S zsh -f would avoid sourcing startup files and the risk of custom options affecting this.)
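
                                                                                  A quick illustration of the difference, assuming both shells are available:

```shell
# zsh does not word-split unquoted parameter expansions by default,
# so a value with spaces stays one word; POSIX-style shells split it.
zsh  -fc 'p="a b c"; set -- $p; echo $#'   # prints 1 under zsh
bash -c  'p="a b c"; set -- $p; echo $#'   # prints 3 under bash
```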

                                                                              1. 1

                                                                                I literally don’t see a problem with this proposal, the certificates they’re talking about seem to be at least as secure as the certificates we use now. And I would love it if browsers could improve security somewhat by warning me when I, for example, enter a credit card number on a site without such a QWAC certificate, the same way http:// sites are marked when you try to enter a password.

                                                                                The only criticism I have is the name: QWAC sounds like quack, as in charlatan. Expect puns from the security community in 3.. 2.. 1..

                                                                                1. 16

                                                                                  Not really; some of these CAs have failed basic sanity checks when applying for inclusion in Mozilla’s root store in the past.

                                                                                  There should have been a discussion with the browser vendors and they should have been much more involved.

                                                                                  Frankly, EU should just fund Firefox development directly for the geopolitical reasons and in turn, Mozilla should get involved in EU security.

                                                                                  1. 9

                                                                                    Strong agree that the EU should fund Firefox development; I’d take it one step further and say that Mozilla should be able to function on EU money alone. Their stated goals seem very compatible, it would allow Mozilla to get out from under Google’s grasp, and it would give the EU less dependence on American tech.

                                                                                    But I think this is also why we should be supportive of an initiative like this; instead of saying “the EU is coming with stupid plans to make our computers insecure”, we should say “Fantastic that you want to look into this; did you know there’s an American nonprofit that already does this? They have a lot of experience with it, and they could really use some funding. You should talk!”

                                                                                  2. 16

                                                                                    I literally don’t see a problem with this proposal, the certificates they’re talking about seem to be at least as secure as the certificates we use now.

                                                                                    Erh… no. That’s literally half of the article.

                                                                                    Browser vendors have minimum security requirements for CAs. Issuers for QWACs would not have to follow those, but much weaker requirements. There’s even a specific example where exactly that happened.

                                                                                    1. 4

                                                                                      QWAC is the European equivalent of EV certs and has all of the same problems. EV demonstrated very clearly that such notices do not work, and actively undermine security.

                                                                                      They do however make certificates more expensive (no chance you’re getting a free one), and break automation (can’t issue without real authentication of identity), both running counter to our established understanding of how to ensure a secure PKI.

                                                                                      You’re also assuming the CAs don’t mis-issue certificates, but browsers would be required to accept certificates from CAs that fail even the most basic sanity checks, let alone the full security and correctness requirements of the existing root stores.

                                                                                      1. 4

                                                                                        I think the idea behind QWAC, that every cert is tied to a legal entity, is not necessarily a terrible one. The implementation, however: well, obviously they screwed that up by allowing companies like Camerfirma, who can’t figure out basic TLS certs. They are literally solving the symptom and not the actual problem.

                                                                                        Perhaps the current government business-licensing process could be amended to also provide a way for current CAs to link a business license to a cert. Obviously that’s a big complicated mess, since business licenses are distributed across cities and counties in the USA at least, though I imagine the EU and others operate similarly.

                                                                                        But that is the only way to do this properly. If we really want a strong link between entities and TLS certs, the existing business-licensing process needs to be amended to include domains and TLS certs. Perhaps the laziest implementation would just be amending the business license application form(s) to include domain names along with phone numbers and addresses.

                                                                                        1. 7

                                                                                          It is a terrible idea. It has literally already existed. It was called Extended Validation.

                                                                                          Things it did: made certificates much more expensive (>$100, initially >$1000), making people want longer-lived certs; and the identity authentication breaks automation, making it harder to roll certificates automatically and further complicating renewal. So EV certs cause problems for actually maintaining a site’s PKI.

                                                                                          For the benefits: none. I mean that seriously; browsers did multiple studies and, just like with the old padlock, they found that a positive indicator carries negligible information, as it is generally ignored. It gets even worse though: a company’s public name does not necessarily match its legal name, which confuses users so they don’t trust legitimate sites. Then, for a double whammy: legal names are not unique, so you can make a fraudulent site whose EV (or QWAC, this time round) cert carries a “legitimate” name on any URL you like.

                                                                                          That’s why browsers dropped EV certs: they make it harder to run secure sites well, and at best they don’t do anything to make users safer, and at worst they confuse and mislead them.

                                                                                          1. 1

                                                                                            QWAC and EV are both stupid, I’m 100% with you here. Nobody thought about the problem very hard when coming up with these solutions; they both suck.

                                                                                            Having some way to know that example.com is tied to business entity Example, Inc in this jurisdiction is not a bad idea. Of course there can be 500 Example Incs, but there can be only one Example Inc in Olliejville, NC, USA; i.e., local jurisdictions already know how to distinguish different businesses within their control.

                                                                                            Most jurisdictions require a license to do business in Olliejville. If every city, county, state, etc. just added a domains field to their applications and forms, we could then create larger indexes easily enough and have this information. That’s enough. Governments already know how to handle business licenses, and having them add one more field is not a big deal. Of course it’s not perfect and it would take considerably longer, but it’s arguably a much better solution than QWAC or EV. Centralizing this is mostly idiotic. Once this is deployed to some reasonable number of areas and indexes get made, perhaps it makes sense for CAs to create TLS certs with this information, basically filling out the city, town and name fields for the domain in question for you. No verification needed; they just need to trust local governments X and Y to get the information correct.

                                                                                            Note, this “solution” I just created off the top of my head, I’m sure better ones exist if someone thought about it harder than I did.

                                                                                            The idea of having a legal entity <-> domain mapping is not a bad one. Obviously our past and proposed implementations, where some crappy company is supposed to verify all this, are idiotic; they have ZERO incentive to get any of this correct and will just do the bare minimum until it’s useless information, just like EV certs were. I agree with you there. Local governments are not in the same boat; they have incentives to get the data right.

                                                                                            1. 3

                                                                                              Your legal entity <-> domain mapping cannot be done in a way that helps users.

                                                                                              The researcher who got an EV cert for their local Stripe, Inc could have turned around and bought strípe.com, and a local government’s records would then say Stripe, Inc <-> strípe.com. A user will see strípe.com, and the big “you can trust this” UI the browser is forced to show will even say “the government agrees, this site definitely belongs to Stripe, Inc”.

                                                                                              Similarly, if a user is shown a company name “Foo” on bar.com, what are they meant to think? It is exceedingly common for the legal name of a company to be completely different from the brand name. So users have to decide which one to trust.

                                                                                              The only field that actually demonstrate the identity of a website is the url. Anything else is irrelevant.

                                                                                              There is nothing you can add to a certificate, and no field you can require the UI to display, that is more trustworthy than the domain.

                                                                                              It isn’t even a matter of local governments having an interest in keeping those records accurate. A local shop selling painting supplies called Stripe can register its domain as str1pe.com, and should be able to get the magic certificate flag. I’m going to go out on a limb and say that its security doesn’t match the security of the American Stripe, Inc.

                                                                                              You’re also making an assumption about what the “best interests” of such an organization are: plenty of counties, states, or even countries make significant income from business registrations for companies that do not in any meaningful sense exist in those locations. Their interest is in having companies register, and in making that as pain-free for the companies as possible; that means accepting the URLs they use.

                                                                                              You’re assuming that these CAs really do any serious validation, which, even when EV was restricted to the high-end CAs, they did not: they were happily issuing EV certs with obviously incorrect information (https://scotthelme.co.uk/extended-validation-not-so-extended/).

                                                                                              The final question is what happens when a CA encounters an EV cert for a company whose name matches another, more “famous”/important company. Because past research says they’ll blindly revoke the 100% compliant, correct, and accurate certificate and keep the money.

                                                                                              There is no certificate identity <-> domain mapping that adds security, as the only thing that matters is the url.

                                                                                              1. 1

                                                                                                In every single comment I have said EV is bad, yet you don’t seem to have grasped that. Let’s try again: EV CERTS ARE STUPID. Can we move on from all of that now? You seem not to have listened to what I said at all. Stop thinking about browser UI or web security; that’s not remotely my point.

                                                                                                It should be relatively easy to track down, out in the physical real world, the person or persons responsible for example.com. This really only matters when an entity is doing business via example.com, so it really only needs to apply to business entities. Hence adding a domain field to existing business licenses solves the problem. It’s the same as a telephone number or an address.

                                                                                                1. 1

                                                                                                  Ok, I’ve re-read, so I want to clarify. Are you saying it is reasonable for a local government’s company listings to include a url? e.g. identity -> url (I honestly assumed most would have that now in contact-info sections, but governments are slow). In that case I agree it seems useful.

                                                                                                  You use <-> which I assumed meant the cert would also have a legal name style entry that was somehow “special” and get UI treatment that would make it seem trustworthy (vs the already present subject organization name, which is intentionally not distinguished from any other field in most cert viewers)

                                                                                                  1. 1

                                                                                                    Are you saying it is reasonable for a local government’s company listings to include a url?

                                                                                                    Yes. I can’t say with any certainty about most business license forms, but all the ones I’ve ever filled out have never asked for this information. I’ve occasionally seen an email address field though. :)

                                                                                                    You use <-> which I assumed meant the cert would also have a legal name style entry that was somehow “special” and get UI treatment that would make it seem trustworthy (vs the already present subject organization name, which is intentionally not distinguished from any other field in most cert viewers)

                                                                                                    No, we already know this is stupid and would be a terrible way to do it.

                                                                                                    If one wanted to do something like this, arguably a better way would be to have the local city/etc. government cross-sign an existing TLS cert (say, from Let’s Encrypt) with their own, saying: we attest (sign) that this cert belongs to this company. This can all be done with ACME (I’m pretty sure; it’s been a while since I’ve read the spec, but I think it’s fine) in an automated fashion, so it’s not a big deal to add to the existing workflow. This doesn’t change the security at all, and doesn’t require any UI changes.

                                                                                                    1. 1

                                                                                                      Ok, so we do agree - I just misinterpreted <-> as meaning you wanted a bidirectional relationship :)

                                                                                                      1. 1

                                                                                                        Well, I do, but it can require some work. Again, the point isn’t that it be all up in your face; the point is that the mapping exists, so if one needs it for some purpose (law enforcement, research, or whatever), it can be used reasonably. If for some reason there exists a valid use case to make it easy and up-front, like a web UI change, it could be added eventually, but that use case is far from certain or clear at this point in time. We know from the EV debacle that it’s probably a disaster to just assume it’s useful from day one. I for one am not advocating for any UI changes.

                                                                                      1. 4

“This wouldn’t have happened with ZFS” is a strange conclusion to come to after a user error. Also: I’d recommend a mundane backup strategy; having to package something smells of novelty. Although I’ve not heard of the system they mention, it might be fine.

                                                                                        1. 6

                                                                                          ZFS would have told you why the drive wasn’t in the array anymore, with a counter showing how many checksums failed (the last column in zpool status, it should be 0). The author would thus have known there was something wrong with the SSD, and think twice before mindlessly adding it to the array.

I’m not entirely sure what would happen if you add the SSD back to the array anyway; at the very least you must give it a clean bill of health with zpool clear. I would also expect that ZFS urges, or maybe even forces, you to do a resilver of the affected device, which would show the corruption again. The main problem with mdadm in this case was that when re-adding the device, it found it was already part of the array before and decided to trust it blindly, not remembering that it was thrown out earlier, or why.
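For illustration, the relevant column in zpool status looks roughly like this (layout abbreviated, the nonzero count is hypothetical):

```
$ zpool status tank
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0    42   <- failed checksums on this device
```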

                                                                                          1. 3

ZFS should resilver when you add the drive back to the array, and verify/update the data on the failed drive.

                                                                                          2. 5

The readme in the repo for that project says in bold text that it is experimental, which is exactly what I would avoid if I was looking for a reliable backup system… but to each their own.

                                                                                            1. 5

How was this user error? This RAID array silently corrupted itself. Possibly because of the RAM?

                                                                                              the filesystem in the treefort environment being backed by the local SSD storage for speed reasons, began to silently corrupt itself.

ZFS checksums the content of each block, so it would have been able to tell you that what you wrote is not what is there anymore. It could also choose the copy from the disk that was NOT corrupted, by matching the checksum. It would also have stopped changing things the moment it hit inconsistencies.
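As a toy illustration of the principle (plain shell tools, nothing ZFS-specific): once a checksum is stored alongside the data, silent corruption turns into a detectable read error.

```shell
# write a "block" and record its checksum
printf 'important data' > block
sha256sum block > block.sum

# simulate silent bit rot: same length, one byte flipped
printf 'importXnt data' > block

# verification now fails loudly instead of returning bad data
sha256sum -c block.sum || echo "corruption detected"
```

ZFS does this per block and, with redundancy, can additionally pick the copy whose checksum matches.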

                                                                                              1. 2

                                                                                                The drive failed out of the array and they added it back in.

                                                                                                1. 4

                                                                                                  Yeah, but why did the array think it was fine when it had previously failed out?

                                                                                                  1. 2

                                                                                                    I don’t know, it’s a reasonable question but doesn’t change that fundamentally it was a user mistake. ZFS may have fewer sharp edges but it’s perfectly possible to do the wrong thing with ZFS too.

                                                                                            1. 31

                                                                                              LE is clearly a NOBUS project.

                                                                                              This seems like a very baseless accusation.

                                                                                              1. 21

                                                                                                Also a wrong one. LE could impersonate your site, but it will be visible via Certificate Transparency. If you have opted out of getting a cert entirely, then any CA will be able to do this.

                                                                                                1. 4

How could LE impersonate your site without control over DNS? Or are you assuming a bad actor that circumvents the challenges?

                                                                                                  1. 1

                                                                                                    I used “impersonate” as a shorthand for “issue a valid X.509 certificate for your site”, but yeah - it wouldn’t be useful unless LE could get on-path somehow, e.g. MITM or DNS

                                                                                                    1. 1

Is it MITM or DNS, or MITM AND DNS?

They would need to MITM Let’s Encrypt’s proof mechanisms (the machines that do the HTTP or DNS challenges).

Then of course they’d need to MITM the person they’re attacking too… it just seems infeasible. On the other hand, if you lose control of DNS, all bets are off.

                                                                                                2. 12

                                                                                                  Especially since there are at least 3 or 4 free/gratis certificate providers using the ACME protocols.

                                                                                                  1. 2

                                                                                                    Could you point to one or two? Are they broadly supported by major browser vendors?

                                                                                                    1. 2

                                                                                                      As of now, I know of:

                                                                                                      • Let’s Encrypt (USA)
                                                                                                      • Buypass (Norway, horrible pun)
                                                                                                      • ZeroSSL (USA, EU & UK)

Meaning that if you want to avoid USA-based CAs, your only option is Buypass. If you want to obtain certificates through other means than ACME, I think ZeroSSL is the only option (I know the other two don’t allow it), but they have encumbered this feature with a reCAPTCHA tracker, so I’ve opted not to try it.

                                                                                                      1. 1

                                                                                                        Thanks!

                                                                                                1. 7

                                                                                                  I thought iframe already exists?

                                                                                                  1. 11

                                                                                                    Apples and oranges. An iframe is a view object with no content associated with it. It also loads a different page, which is unnecessary and heavyweight.

                                                                                                    1. 4

                                                                                                      And yet iframes are proposed as a sandboxing mechanism for potential use for blocks here.

                                                                                                      1. 2

Heavyweight being a bad thing depends™; it’s a perf/security tradeoff. If the block is heavy, that’s a good thing, because it would take it off the main thread for both web workers and iframes. So depending on the block, it could be a security + perf win.

                                                                                                    1. 18

                                                                                                      I use FreeBSD (if I’m going to use Unix, I might as well use one with good taste), but:

• UFS2 is absolutely not a good filesystem. It’s very fragile relative to ext4, which itself isn’t great. ZFS is excellent, but for small systems (e.g. VMs) it can be quite heavyweight. It’d be nice to have a better filesystem for the smaller-scale stuff, or to have ZFS fit under a gig of RAM.
                                                                                                      • I think still advertising jails as if they’re a contender in 2022 is misleading. They completely missed the boat with tooling, let alone containerization trends.
                                                                                                      • My problem with Bhyve is guest support, but that’s why I run ESXi.
                                                                                                      1. 6

I am similarly biased towards FreeBSD (if I’m going to use an implementation of bad ideas from the ‘70s, at least I’d like a clean and consistent implementation of those bad ideas) and wanted to amplify this point:

                                                                                                        I think still advertising jails as if they’re a contender in 2022 is misleading. They completely missed the boat with tooling, let alone containerization trends.

Jails are a superior mechanism for doing shared-kernel virtualisation to the mixture of seccomp-bpf, cgroups, and namespaces that can be assembled on Linux to look like jails. Lots of FreeBSD-related articles like to make that point, but they completely miss the value of the OCI ecosystem. Containers are a mix of three things:

• A reproducible build system with managed dependencies and caching of intermediate steps. FreeBSD has some of this in the form of poudriere, but it’s very specialised.
                                                                                                        • A distribution and deployment format for self-contained units.
                                                                                                        • An isolation mechanism.

                                                                                                        Of these, the isolation mechanism is the least important. Even on Linux, there’s a trend to just using KVM to run a separate kernel for the container and using FUSE-over-VirtIO to mount filesystems from the outside. The overhead of an extra cut-down Linux kernel is pretty small in comparison to the size of a large application.

                                                                                                        The value in OCI containers is almost entirely in the distribution and deployment model. FreeBSD doesn’t yet have anything here. containerd works on FreeBSD (and with the ZFS snapshotter, works well) but runj is still very immature.

                                                                                                        My problem with Bhyve is guest support, but that’s why I run ESXi.

                                                                                                        I’m not sure what this means. Bhyve exposes the same VirtIO devices as KVM.

Bhyve may or may not be better than KVM, but the separation of concerns is weaker. There’s a lot of exciting stuff (e.g. Kata Containers) being built on top of KVM. Windows now provides a set of APIs to Hyper-V that are direct equivalents to the KVM ioctls, which means it’s easy to build systems that are portable between KVM and Hyper-V. There’s no equivalent for bhyve.

                                                                                                        UFS2 is absolutely not a good filesystem. It’s very fragile relative to ext4, which itself isn’t great. ZFS is excellent, but the problem is for small systems (i.e. VMs), it can be quite heavyweight. It’d be nice to have a better filesystem for the smaller scale stuff, or have ZFS fit under a gig of RAM.

                                                                                                        I haven’t used UFS2 for over a decade but I’ve run ZFS on systems with 1GiB of RAM with no problem. The rule of thumb is 1GiB of RAM per 1TiB of disk. Most of my VMs have a lot less than 1 TiB of disk. You need to clamp the ARC down a bit, but the ARC is less important if the disks are fast (and they often are in VMs).

                                                                                                        1. 2

                                                                                                          I’m not sure what this means. Bhyve exposes the same VirtIO devices as KVM.

VMware has drivers for weirder guest OSes (including older versions of mainstream stuff… you know, NT); KVM doesn’t. That, and I’ve had a very bad experience with KVM virtio, but that doesn’t reflect on Bhyve.

                                                                                                          I haven’t used UFS2 for over a decade but I’ve run ZFS on systems with 1GiB of RAM with no problem. The rule of thumb is 1GiB of RAM per 1TiB of disk. Most of my VMs have a lot less than 1 TiB of disk. You need to clamp the ARC down a bit, but the ARC is less important if the disks are fast (and they often are in VMs).

                                                                                                          This is probably my own paranoia fed by misinfo (or just plain outdated info) about ZFS resource usage.

                                                                                                          1. 4

                                                                                                            ZFS doesn’t really need that much RAM. The ARC is supposed to yield to your programs’ demand. But if you’re not comfortable with how much it occupies you can just set arc_max to a small size.
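On FreeBSD, for instance, that clamp is a loader tunable; the value below is just an illustration for a small VM:

```
# /boot/loader.conf
vfs.zfs.arc_max="512M"
```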

                                                                                                            1. 5

                                                                                                              The ZFS recommendations seem to be based around the idea that you want the best performance out of ZFS, running large NFS/iSCSI/SMB hosts with many users. I think FreeNAS also set the bar high just so that users trying to use minimal hardware would not have a reason to complain if it didn’t work very well for them.

                                                                                                              However, in practice, I rarely need top performance out of ZFS, so even with 512MB of RAM I can use it comfortably for a small VM with just a few services. Granted this was a few years ago, so maybe 1GB is needed nowadays.

                                                                                                            2. 2

                                                                                                              small systems (i.e. VMs)

                                                                                                              NFS from host?

                                                                                                              1. 1

                                                                                                                I use ESXi as my host, so probably not.

                                                                                                                1. 2

                                                                                                                  Shared, from another guest, then?

                                                                                                              2. 1

I wouldn’t say UFS is great, but I really think ext4 is worse. I think at least part of the bad reputation also comes from the fact that UFS isn’t as much into version numbers; UFS implementations change.

But to not make this FreeBSD vs Linux, see XFS vs ext4, where there’s a similar situation: every time ext4 gets an edge, like metadata speed, XFS ends up being improved, surpassing ext4 again.

Similar things can be said about UFS, at least regarding the remark of it being “fragile”.

But I’d like to hear it if you have anything to back that claim.

That said, I would have agreed with you there about a decade ago.

                                                                                                              1. 2

                                                                                                                I have a perfectly functioning iPad 3rd generation, and an iPhone 5 here. None of the components need replacing. However, the operating systems are so old that in the case of the iPhone 5, it can’t run some of the apps I’d like to run on it, and in the case of the iPad 3rd gen, it can’t even visit a lot of websites anymore.

I do agree with the article that there might be some trade-offs that can justify making it harder to physically open up the device and replace defective or outdated components. This might be a bit beside the point the author is trying to make, but I cannot imagine a single technical reason why the device should refuse to be updated to anything but the newest supported operating system from its manufacturer, without an override setting for me.

                                                                                                                1. 5

                                                                                                                  I would propose we take right to repair even further and demand openness after a device is no longer supported. When a manufacturer decides to roll a device off of support, they need to provide a mechanism to unlock it and a base set of specifications. Then others could take those specs and provide a longer lifespan using open source (or even not open!).

                                                                                                                  1. 4

Supporting older hardware is actually very hard; just because you don’t see a difference doesn’t mean the internals aren’t different. For example, there simply is no 32-bit version of macOS or iOS, and there hasn’t been for years. The Apple CPUs literally do not have 32-bit hardware.

Supporting older hardware can mean holding back OS security models, because old hardware has components that are no longer supported by the original vendors, or hardware that is simply incompatible with more modern system architectures. Supporting such hardware can mean holding back the security of new systems, etc.

                                                                                                                    Again, supporting old hardware is not free, and often can be as much work as bringing up new hardware.

                                                                                                                    1. 2

                                                                                                                      I have a 10 year old iPad mini. Works absolutely fine. After the last update the Disney+ app doesn’t work on it and I can’t get Disney+ on the browser. Made me furious at Disney+ but not enough to walk away, so in the end I was not the change I hoped to be.

                                                                                                                    1. 2

                                                                                                                      This is very well done. Explaining all the checks step-by-step is definitely a good way to help people understand this tedious and complex process that is validating email senders.

There seems to be a bug with DKIM key retrieval though, because it states that my email doesn’t pass DKIM verification. However, it does pass it successfully on https://mail-tester.com. This could be a problem in the DNS record parsing, as I formatted mine with multiple chunks enclosed in “” (for the multi-line public key).
Now I’m genuinely curious to know if that’s a bug in the tester (which I hope!), or if my emails would eventually be dropped by some other mailers because of that formatting. Would anyone have an insight on this?
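For reference, the formatting in question looks something like this (hypothetical selector and key): TXT character-strings are limited to 255 bytes each, so long DKIM keys are split into multiple quoted chunks that resolvers are supposed to concatenate.

```
; hypothetical zone-file entry, key shortened
selector._domainkey.example.com. IN TXT ( "v=DKIM1; k=rsa; "
        "p=MIIBIjANBgkqhkiG9w0BAQEFAAOC"
        "AQ8AMIIBCgKCAQEA..." )
```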

                                                                                                                      1. 2

                                                                                                                        I got the same error - it claimed that no DKIM was present, even though it is (and parties like Google seem to accept it).

                                                                                                                        1. 0

                                                                                                                          I got a bug where I’ve used a:mail.example.com in my SPF policy, and the IPv6 address I sent from doesn’t match according to the tester. The mail is still accepted due to DKIM (I don’t have the problem you mentioned).

                                                                                                                          I didn’t try if it works better with IPv4, but it seems there are some disturbances in the force.

So apparently there was something wrong with my SPF policy after all; it had a syntax error. But learndmarc just told me that my mail matched -all, so it skipped over tokens it didn’t understand instead of telling me my SPF policy was syntactically wrong. When you press the magnifying glass next to the domain on the right side, you get sent to a page that checks your SPF policy and shows the error.

                                                                                                                        1. 3

                                                                                                                          This is wrong on so many levels…

                                                                                                                          • ./good-cat a b | …
                                                                                                                          • ./good-cat -n a > numbered-a.txt
• So-called UUoC is mostly a myth and unfounded hate.

                                                                                                                          OK, it is satire, but still too bad.

                                                                                                                          1. 4

This could easily be fixed with an extra test: only check for pipes if [ $# -eq 1 ].
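A sketch of what that fix might look like (hypothetical — assuming good-cat’s check simply refuses to run when stdout is a pipe; the /dev/stdout detection is Linux-flavoured):

```shell
#!/bin/sh
# Only flag "useless use of cat" when exactly one file operand is
# given AND stdout is a pipe; concatenating several files into a
# pipeline is a legitimate use.
if [ $# -eq 1 ] && [ -p /dev/stdout ]; then
    echo "good-cat: useless use of cat detected" >&2
    exit 1
fi
exec cat "$@"
```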

                                                                                                                          1. 62

I don’t get all of the “useless use of cat” hate. Shell is for ad-hoc stream-of-consciousness scripting. It’s not supposed to be perfect or optimised. If you think about the pipeline starting with the file, so that

                                                                                                                            cat file | do something | do something else
                                                                                                                            

                                                                                                                            makes more sense than the counterintuitive ordering of

                                                                                                                            do something < file | do something else
                                                                                                                            

then more power to you; do what makes sense to you. A human is not going to notice the performance penalty of copying the bytes a few more times in an interactive shell session.

                                                                                                                            The “useless use of cat” meme is just a “gotcha” superiority complex with no bearing on reality besides somebody getting to feel like they “corrected” you. How many other random tiny insignificant performance hits fly by your smugness every day that don’t enter the memeosphere so you don’t get to feel smart about repeating them?

                                                                                                                            If you think it’s a real problem, then detect this case in the shell and change the execution strategy there. You don’t get to feel smug, but you will solve the actual “problem”.

                                                                                                                            1. 16

                                                                                                                              than the counterintuitive ordering of

                                                                                                                              It doesn’t have to be counterintuitive. This works too:

                                                                                                                              < file do something | do something else
                                                                                                                              

There’s a slight benefit to redirecting this way, in that the file remains seekable. (It doesn’t with cat.)
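The seekability difference is easy to demonstrate (assuming GNU head, which seeks the shared file descriptor back to just past the last line it printed when the input is a regular file):

```shell
printf 'a\nb\nc\n' > file

# redirection: head and cat share one seekable file descriptor,
# so cat resumes exactly where head stopped
{ head -n1; cat; } < file
# prints a, then b and c

# with `cat file | { head -n1; cat; }` the pipe is not seekable,
# so whatever head read ahead is lost to the second command
```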

                                                                                                                              1. 18

                                                                                                                                How do you produce shell pipelines? For me, it’s always stepwise: I run something, visually inspect the output, then up-arrow and | to another command that transforms the output somehow. Repeat until I get what I want. < file fails this test, I think?

                                                                                                                                1. 3

                                                                                                                                  I don’t follow. Like you, I build shell pipelines “stepwise. I run something, visually inspect the output, then up-arrow and | to another command that transforms the output somehow. Repeat until I get what I want.”

                                                                                                                                  How is cat file | better (or worse) than < file? In other words, if my original pipeline begins < file, can’t I keep using up-arrow and tweaking what follows, just as I can with cat file |?

                                                                                                                                  Maybe you’re thinking of < file at the end of the pipeline? If so, I think that viraptor’s whole point was that you can move < file to the front of the pipeline—exactly where cat file | sits.

                                                                                                                                  1. 5

                                                                                                                                    If my original pipeline begins < file . . .

                                                                                                                                    That doesn’t pass my test, because < file doesn’t produce any output by itself. If you start off with < file | something else then sure, but I’ve never done that! I find it nonintuitive. But if it works for you, groovy.

                                                                                                                                    1. 8

                                                                                                                                      Hmm, it displays the file in zsh, but apparently not in bash.

                                                                                                                                      But now I know how to annoy both groups!

                                                                                                                                      < myfile cat | wc -l
                                                                                                                                      
                                                                                                                                      1. 1

                                                                                                                                        That absolutely makes sense. Thanks for clarifying. (I imagined you were starting with cat file | something. If I first wanted to check the contents of file, I sometimes do the same as you describe: cat file and then cat file | whatever. Other times I do less file first because then I can bounce around in the file more easily.)

                                                                                                                                    2. 2

                                                                                                                                    I’m not sure that’s something I ever gave any attention. I mean, it’s slightly different and I don’t mind ¯\_(ツ)_/¯

                                                                                                                                    3. 1

                                                                                                                                      This doesn’t work in /bin/sh on my latest macOS.

                                                                                                                                      1. 5

                                                                                                                                        Maybe something else is going on?

                                                                                                                                        macOS 12.1, bash 3.2 (at /bin/sh), and it works fine here.

                                                                                                                                        sh-3.2$ < wtf.c grep assert | sed 's/assert/wtf/'
                                                                                                                                        #include <wtf.h>
                                                                                                                                        	wtf(sodium_init() != -1);
                                                                                                                                        
                                                                                                                                        1. 3

                                                                                                                                          True, I should have clarified what I was trying to do. Your example works for me, but there are other cases where < doesn’t work while cat does:

                                                                                                                                          This doesn’t work (but it works in some other shells):

                                                                                                                                          data="$(<file)"
                                                                                                                                          

                                                                                                                                          This works:

                                                                                                                                          data="$(<file cat)"
                                                                                                                                          
                                                                                                                                          1. 1

                                                                                                                                            I’m sorry, but I still think that there may be something else going on. I can use "$(<file)" to assign the contents of a file to a variable in bash 3.2 on macOS.

                                                                                                                                            sh-3.2$ data="$(< wtf.c)"
                                                                                                                                            sh-3.2$ printf "${data}\n"
                                                                                                                                            #include <assert.h>
                                                                                                                                            #include <sodium.h>
                                                                                                                                            int main()
                                                                                                                                            {
                                                                                                                                            	assert(sodium_init() != -1);
                                                                                                                                            	return 0;
                                                                                                                                            }
                                                                                                                                            

                                                                                                                                            What are you trying to do next?

                                                                                                                                            Re your larger point, you say “there are other cases where < doesn’t work, while cat does” and “This doesn’t work (but it works in other shells).” I think I (sort of?) agree. cat and < are different, and there are an enormous number of differences between different shells and even between different versions of the same shell. (For example, /bin/sh on macOS is currently bash 3.2, but I usually run bash 5.1 from MacPorts. Those two have a lot of differences.)

                                                                                                                                            Nevertheless, I think that the OP’s point stands: < file do something generally works as well as cat file | do something. I am not at all a purist about UUoC—like ketralnis, I think that people who say UUoC are generally just being jerks. But, all of that said, < file is also important to learn. It often comes in handy, and you can often substitute it for cat file |—though, again, I agree that they are not 1 to 1.
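For completeness, here’s a sketch of the command-substitution case discussed above (file name is hypothetical). In bash, zsh, and ksh, `$(< file)` is a special form that expands to the file’s contents without running any command; a strict POSIX sh may instead treat it as a bare redirection and expand to the empty string, which is why the cat spellings are the portable ones:

```shell
printf 'hello\n' > /tmp/subst-demo.txt

# bash/zsh/ksh shortcut: no cat process is forked
data="$(< /tmp/subst-demo.txt)"
echo "$data"    # in bash prints: hello

# Portable spellings that work in any POSIX shell:
data="$(cat /tmp/subst-demo.txt)"
data="$(< /tmp/subst-demo.txt cat)"
```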

                                                                                                                                    4. 15

                                                                                                                                      I agree! This project is a joke. Perhaps that should be stated at the end of the README. If cat wanted to not be pipeable, then it wouldn’t be pipeable. If someone actually measures a performance problem because they’re working with a stream of bytes instead of a random-access file descriptor, then they should change it. The left-to-right reading of cat-ing first is nice!

                                                                                                                                      1. 3

                                                                                                                                        I agree it’s rare that this is an actual problem in a program; to me it’s more that it’s an indicator that the writer might have some things to learn, possibly including:

                                                                                                                                        • the various ways you can pipe stuff into a program in addition to |.
                                                                                                                                        • that cat can also be used for concatenating files; it’s not just for printing a single file’s contents.

                                                                                                                                        Once you’ve picked up that signal, dealing with it in a way that isn’t about stroking your own ego is of course a good idea.
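The concatenation use that gives cat its name, as a quick sketch (file names are made up):

```shell
printf 'first\n'  > /tmp/part1.txt
printf 'second\n' > /tmp/part2.txt

# cat's real job: conCATenate several files into one stream
cat /tmp/part1.txt /tmp/part2.txt
# prints:
# first
# second

# e.g. joining parts into one file -- which is why `cat one-file |`
# reads as using only half the tool
cat /tmp/part1.txt /tmp/part2.txt > /tmp/combined.txt
```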