1. 5

    I have been a huge fan of the OSX project called homebrew. […] [FreeBSD ports] is based on standard BSD makefiles, which while not as nice as homebrew’s ruby-based DSL, are very powerful.

    I find it hard to swallow an article telling me that my infrastructure should be boring, and then praising a Makefile remake in Ruby as “nice”. It’s too bad, because I do agree with the gist of the article.

    1. 2

      Are you well versed in homebrew or is “makefile remake in ruby” your take from the outside?

      1. 1

        I noticed the author’s comparison between BSD Makefiles and “homebrew’s ruby-based DSL”, which I assume means that Homebrew uses its own Ruby-based DSL instead of Makefiles. I’d be interested to know if that is not correct, and what the author actually meant.

        1.  

          DSLs are subjective. Instead of assuming what the author meant, you could take a look at the Homebrew DSL and make your own judgement. For example, here’s the Erlang one:

          I know which one I prefer working with, but I also know which one I would rely on for reproducible builds.

    1. 2

      I think that’s a reasonable rationale, and I wonder if DoH is going to end up being OS-supported at some point.

      1. 4

        Absolutely not! Why the hell would you want to centralise something that was decentralised since before Al Gore invented the internet?

        1. 5

          What? How would providing DoH at the OS level centralize anything more than providing DNS over TCP?

          Edit: It occurs to me that perhaps you thought I meant DNS over HTTPS (DoH) as implemented by Firefox, i.e. with Cloudflare being the de facto resolver. What I meant was that I wonder if DoH might come to be provided as an alternative to, or superset of, normal OS DNS support, with some sort of resolver discovery.
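
          For the curious, DoH is just DNS answers carried over ordinary HTTPS. A minimal sketch in Python against Google’s public JSON endpoint (the requests library and the dns.google resolver are illustrative choices here, not anything OS-level):

          # pip install requests -- DoH in its JSON flavour: an ordinary
          # HTTPS GET that returns DNS answers as JSON.
          import requests
          resp = requests.get("https://dns.google/resolve",
                              params={"name": "example.com", "type": "A"})
          for answer in resp.json().get("Answer", []):
              print(answer["name"], answer["data"])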

          1. 2

            Maybe cnst is talking about CAs.

            1. 1

              DoH/DoT don’t inherently require CAs. The OS could offer an interface like “set IP address and expected certificate in resolv.conf”, for example. (but, IMO, concerns about CAs are silly. Everything in userspace WILL use CAs, why would an OS take a hard stance against CAs?)

        2. 2

          I’m still not convinced that we need DoH in the OS. What does DoH give us that DoT doesn’t?

          1. -1

            What does DoH give us that DoT doesn’t?

            Transport encryption.

            1. 3

              What does the T in DoT stand for?

              1. 1

                TCP

                1. 6

                  No, it’s TLS.

                  1. 1

                    Is it? My bad.

                      1. 2

                        Conventional DNS is a UDP protocol ;)

                        1. 5

                          Primarily UDP, but TCP if the response is too large and EDNS is not supported; also for zone transfers.

          1. 7

            The actual changelog. Maybe we could link that directly instead of a really brief blurb?

            1. 6

              The changelog is almost 10,000 lines, I’d prefer the blurb no matter how brief :D

              1. 31

                My position has essentially boiled down to “YAML is the worst config file format, except for all the other ones.”

                It gets pretty bad if your documents are large or if you need to collaborate (it’s possible to have a pretty good understanding of parts of YAML but that’s not always going to line up with what your collaborators understand).

                I keep wanting to say something along the lines of “oh, YAML is fine as long as you stick to a reasonable subset of it and avoid confusing constructs,” but I strongly believe that memory-unsafe languages like C/C++ should be abandoned for the same reason.

                JSON is unusable (no comments, easy to make mistakes) as a config file format. XML is incredibly annoying to read or write. TOML is much more complex than it appears… I wonder if the situation will improve at any point.
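
                To make the “confusing constructs” point concrete, here’s a minimal sketch of the classic YAML 1.1 coercion surprise (assuming PyYAML, which implements YAML 1.1):

                # pip install pyyaml -- PyYAML implements YAML 1.1, where unquoted
                # scalars are silently coerced to other types (the "Norway problem").
                import yaml
                print(yaml.safe_load("countries: [NO, SE]"))  # {'countries': [False, 'SE']}
                print(yaml.safe_load("version: 3.10"))        # {'version': 3.1}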

                1. 21

                  I think TOML is better than YAML. Sure, it has the complex date stuff, but that has never caused big surprises for me (just small annoyances). The article seems to focus mostly on how TOML is not Python, which it indeed is not.
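
                  For what it’s worth, the date behaviour is easy to see from Python (a sketch using the stdlib tomllib from Python 3.11+):

                  # Python 3.11+: tomllib is in the standard library.
                  import tomllib
                  # Unquoted TOML dates parse into datetime objects, not strings.
                  cfg = tomllib.loads("release = 1979-05-27T07:32:00")
                  print(type(cfg["release"]))  # <class 'datetime.datetime'>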

                  1. 14

                    It’s syntactically noisy.

                    Human language is also syntactically noisy. It evolved that way for a reason: you can still recover the meaning even if some of the message was lost to inattention.

                    I have mixed feelings about TOML’s table syntax. I would rather have explicit delimiters like curly braces. But, if the goal is to keep INI-like syntax, then it’s probably the best thing to do. The thing I find really annoying is inline tables.

                    As for user-typed values, I came to the conclusion that everything that isn’t an array or a hash should just be treated as a string. If you take user input, you cannot just assume that the type is correct and need to check or convert it anyway, so why even bother having different types at the format level?

                    Regardless, my experience with TOML has been better than with alternatives, despite its flaws.
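
                    To show what I mean about inline tables, here’s a sketch (using Python’s stdlib tomllib) of the two syntaxes producing identical data:

                    import tomllib  # Python 3.11+ stdlib
                    # A standard table and an inline table parse to the same structure;
                    # only the surface syntax differs.
                    standard = '[server]\nhost = "example.org"\nport = 8080'
                    inline = 'server = { host = "example.org", port = 8080 }'
                    assert tomllib.loads(standard) == tomllib.loads(inline)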

                    1. 6

                      Human language is also syntactically noisy. It evolved that way for a reason: you can still recover the meaning even if some of the message was lost to inattention.

                      I have mixed feelings about TOML’s table syntax. I would rather have explicit delimiters like curly braces. But, if the goal is to keep INI-like syntax, then it’s probably the best thing to do. The thing I find really annoying is inline tables.

                      It’s funny how the exact same ideas made me make the opposite decision. I came to the conclusion that “the pain has to be felt somewhere” and that the config files are not the worst place to feel it.

                      I have mostly given up on different config formats and just default to one of the following three options:

                      1. Write .ini or Java properties-file style config files when I don’t need more (a minimal sketch of this option follows the list).
                      2. Write a DTD and XML when I need tree- or dependency-like structures.
                      3. Store the configuration in a few tables inside an RDBMS, and drop an .ini-style config file with just connection settings and the names of the config tables, when things get complex.
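
                      For option 1, the standard library is usually all I need; a minimal sketch with Python’s configparser (section and key names made up):

                      import configparser
                      # Option 1 in practice: flat INI-style config, stdlib only. Values
                      # come back as strings and are converted at the point of use.
                      parser = configparser.ConfigParser()
                      parser.read_string("[database]\nhost = db.example.org\nport = 5432")
                      port = parser.getint("database", "port")  # 5432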

                      As for user-typed values, I came to the conclusion that everything that isn’t an array or a hash should just be treated as a string. If you take user input, you cannot just assume that the type is correct and need to check or convert it anyway, so why even bother having different types at the format level?

                      I fully agree with this as well.

                    2. 23

                      Dhall is looking really good! Some highlights from the website:

                      • Dhall is a programmable configuration language that you can think of as: JSON + functions + types + imports
                      • You can also automatically remove all indirection in any Dhall code, converting the file to a logic-free normal form for non-programmers to understand.
                      • We take language security seriously so that your Dhall programs never fail, hang, crash, leak secrets, or compromise your system.
                      • The language aims to support safely importing and evaluating untrusted Dhall code, even code authored by malicious users.
                      • You can convert both ways between Dhall and JSON/YAML or read Dhall configuration files directly into a language that supports a native language binding.
                      1. 8

                        I don’t think the tooling should be underestimated, either. The dhall executable includes low-level plumbing tools (individual type checking, importing, normalization), a REPL, a code formatter, a code linter to help with language upgrades, and there’s full-blown LSP integration. I enjoy writing Dhall so much that for new projects I’m taking a more traditional split between a core “engine” and the logic, pushing the latter out into Dhall - then compiling it at load time into something the engine can work with. The last piece of the puzzle to me is probably bidirectional type inference.

                        1. 2

                          That looks beautiful! Can’t wait to give it a go on some future projects.

                          1. 2

                            Although the feature set is extensive, is it really necessary to have such complex functionality in a configuration language?

                            1. 4

                              It’s worth understanding what the complexity is. The abbreviated feature set is:

                              • Static types
                              • First class importing
                              • Function abstraction

                              Once I view it in this light, I find it easier to convince myself that these are necessary features.

                              • Static types enforce a schema on configuration files. There is almost always a schema on configuration, as something is ultimately trying to pull information out of it. Having this schema reified into types means that other tooling can make use of the schema - e.g., the VS Code LSP can give me feedback as I edit configuration files to make sure they are valid. I can also do validation in my CI to make sure my config is actually going to be accepted at runtime. This is all a win.

                              • Importing means that I’m not restricted to a single file. This gives me the advantage of being able to separate a configuration file into smaller files, which can help decompose a problem. It also means I can re-use bits of configuration without duplication - for example, maybe staging and production share a common configuration stanza - I can now factor that out into a separate file.

                              • Function abstraction gives me a way to keep my configuration DRY. For example, if I’m configuring nginx and multiple virtual hosts all need the same proxy settings, I can write that once, and abstract out my intention with a function that builds a virtual host. This avoids configuration drift, where one part is left stale and the rest of the configuration drifts away.

                              1. 1

                                That’s very interesting, I hadn’t thought of it like that. Do you mostly use Dhall itself as configuration file or do you use it to generate json/yaml configuration files?

                            2. 1

                              I finally need to implement a Dhall evaluator in Erlang for my projects. I <3 the ideas behind Dhall.

                            3. 5

                              I am not sure that there aren’t better options. I am probably biased as I work at Google, but I find Protocol Buffer syntax to be perfectly good, and the enforced schema is very handy. I work with Kubernetes as part of my job, and I regularly screw up the YAML, or don’t really know what the YAML means, so I copy-paste from tutorials without actually understanding it.

                              1. 4

                                Using protobuf for config files sounds like a really strange idea, but I can’t find any arguments against it.
                                If it’s considered normal to use a serialisation format as human-readable config (XML, JSON, S-expressions etc), surely protobuf is fair game. (The idea of “compiled vs interpreted config file” is amusing though.)

                                1. 3

                                  I have experience using protobuf to communicate configuration-like information between processes, and the schema that specifies the configurations, including (nested) structs/hashes and arrays, ended up really hacky. I forget the details, but protobuf lacks one or more essential ingredients to nicely specify what we wanted it to specify. As soon as you give up and allow more dynamic messages, you’re of course back to having to check everything using custom code on both sides. If you do that, you may as well just go back to YAML. The enforced schema and multi-language support make it very convenient, but it’s no picnic.

                                  1. 2

                                    One issue here is that knowing how to interpret the config file’s bytes depends on having the protobuf definition it corresponds to available. (One could argue the same is true of any config file and what interprets it, but with human-readable formats it’s generally easier to glean the intention than with a packed binary structure.)

                                    1. 2

                                      At Google, at least 10 years ago, the protobuf text format was widely used as a config format. The binary format less so (but still done in some circumstances when the config file wouldn’t be modified by a person).
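
                                      A sketch of that text-format flow in Python (the config_pb2 module, the AppConfig message, and the field names are all made up for illustration):

                                      from google.protobuf import text_format
                                      import config_pb2  # hypothetical module generated by protoc
                                      # Unknown or misspelled fields raise ParseError, so the schema
                                      # is enforced at load time.
                                      with open("app.textproto") as f:
                                          cfg = text_format.Parse(f.read(), config_pb2.AppConfig())
                                      print(cfg.listen_port)  # made-up field, defined in the .proto schema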

                                      1. 3

                                        TIL protobuf even has a text format. It sounds like it’s not interoperable between implementations/isn’t “fully portable”, and that proto3 has a JSON format that’s preferable… but then we’re back to JSON.

                                2. 2

                                  JSON can be validated with a schema (lots of tools support it, including VSCode), and it’s possible to insert comments in unused fields of the object, e.g. comment or $comment.
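
                                  A minimal sketch of both points together, assuming the third-party jsonschema package (the schema and field names are made up):

                                  # pip install jsonschema
                                  import json
                                  from jsonschema import validate
                                  schema = {
                                      "type": "object",
                                      "properties": {"port": {"type": "integer"}},
                                      "required": ["port"],
                                  }
                                  # "$comment" rides along as an ordinary, unvalidated extra field.
                                  config = json.loads('{"$comment": "dev settings", "port": 8080}')
                                  validate(config, schema)  # raises ValidationError on a mismatch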

                                  1. 17

                                    and it’s possible to insert comments in unused fields of the object, e.g. comment or $comment.

                                    I don’t like how this is essentially a hack, and not something designed into the spec.

                                    1. 2

                                      Those same tools (and often the system on the other end ingesting the configuration) often reject unknown fields, so this comment hack doesn’t really work.

                                      1. 8

                                        And not without good reason: if you don’t reject unknown fields it can be pretty difficult to catch misspellings of optional field names.

                                        1. 2

                                          Not rejecting unknown fields can also make it harder to add new fields later: you don’t know who’s already using that field name for their own purposes and sending it to you (intentionally or otherwise).

                                      2. 1

                                        Yes, JSON can be validated by schema. But in my experience, JSON Schema implementations diverge widely, and it’s easy to write schemas that only work in your particular parser.

                                      3. 1

                                        JSON is unusable (no comments, easy to make mistakes) as a config file format.

                                        JSON5 fixes this problem without falling prey to the issues in the article: https://json5.org/
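
                                        A quick sketch of what JSON5 buys you, assuming the third-party json5 package for Python:

                                        # pip install json5
                                        import json5
                                        config = json5.loads("""{
                                          // comments are allowed,
                                          port: 8080,          // as are unquoted keys
                                          hosts: ["a", "b",],  // and trailing commas
                                        }""")
                                        print(config["port"])  # 8080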

                                        1. 2

                                          Yeah, and then you lose the main advantage of json, which is how ubiquitous it is.

                                          1. 1

                                            In the context of a config format, this isn’t really an advantage, because only one piece of code will ever be parsing it. But this could be true in other contexts.

                                            I typically find that in the places where YAML has been chosen over JSON, it’s usually for config formats where the ability to comment is crucial.

                                      1. 4

                                        I’m happy to see FTP die. But aren’t some websites still providing download links over FTP? I think it was just a year ago when I noticed I was downloading an ISO file from an FTP server..

                                        1. 9

                                          There’s nothing wrong with downloading an ISO from an FTP server. You can verify the integrity of a download (as you should) independently of the mechanism (as many package managers do).
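
                                          For example, a minimal sketch of that verification step in Python (the file name and expected digest are placeholders):

                                          import hashlib
                                          # The digest and file name here are made up; compare against the
                                          # checksum the project publishes alongside the download.
                                          EXPECTED = "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c"
                                          h = hashlib.sha256()
                                          with open("image.iso", "rb") as f:
                                              for chunk in iter(lambda: f.read(1 << 20), b""):
                                                  h.update(chunk)
                                          assert h.hexdigest() == EXPECTED, "download corrupted or tampered with"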

                                          1. 4

                                            I agree! The same goes for downloading files over plain HTTP: as long as you verify the download, you know the file is okay.

                                            The reason I don’t like FTP has to do with its mode of operation: port 21 as a control channel and then a high port for the actual data transfer. Also the fact that there is no standard for directory listings (I think DOS-style listings are the most common?).
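
                                            The listing problem is easy to demonstrate with Python’s stdlib ftplib (the hostname is hypothetical):

                                            from ftplib import FTP
                                            # The LIST reply is free-form text (DOS-style, unix-ls-style, ...),
                                            # which is why every client needs its own parsing heuristics.
                                            ftp = FTP("ftp.example.org")  # hypothetical host
                                            ftp.login()                   # anonymous login
                                            ftp.retrlines("LIST")         # prints whatever format the server chooses
                                            ftp.quit()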

                                            1. 2

                                              The reason there’s no standard for directory listings probably has more to do with the lack of convention on filesystem representation as FTP took off. Not everything uses the same delimiter, and not everything with a filesystem has files behind it (e.g. Z-Series).

                                              I absolutely think that in the modern world we should use modern tools, but FTP’s a lot like ed(1): it’s on everything and works pretty much anywhere as a fallback.

                                              1. 1

                                                If you compare FTP to ed(1), I’d compare HTTP and SSH to vi(1). Those are also available virtually anywhere.

                                                1. 1

                                                  According to a tweet by Steven D. Brewer, it seems that at least modern Ubuntu rescue disks only ship nano, but not ed(1) or vi(1)/vim(1).

                                                  1. 1

                                                    Rescue disks are a special case. Space is at a premium.

                                                    My VPS running some Ubuntu version does return output from man ed. (I’m not foolish enough to try to run ed itself; I quite like having a usable terminal.)

                                              2. 1

                                                Yes, FTP is a vestige of a time when there was no NAT. It was good until the 90s and has been terrible ever since.

                                              3. 1

                                                Most people downloading files over FTP using Chrome don’t even know what a hash is, let alone how to verify one.

                                                1. 1

                                                  That’s not really an argument for disabling FTP support. That’s more of an argument for implementing some form of file hash verification standard tbh.

                                                2. 1

                                                  There is everything wrong with downloading an ISO over FTP.

                                                  Yeah, you can verify the integrity independently. But it goes against all security best practice to expect that users will do something extra to get security.

                                                  Security should happen automatically whenever possible. Not saying that HTTPS is the perfect way to guarantee secure downloads. But at the very least a) it works without requiring the user to do anything special and b) it protects against trivial man in the middle attacks.

                                                  1. 1

                                                    But it goes against all security best practice to expect that users will do something extra to get security.

                                                    Please don’t use the term best practice, it’s a weasel term that makes me feel ill. I can get behind the idea that an expectation that users will independently verify integrity is downright terrible UX. It’s not an unrealistic expectation that the user is aware of an integrity failure. It’s also not unrealistic that it requires the user to act specifically to gain some demonstrable level of security (in this case integrity)

                                                    To go further, examples that expect users to do something extra to get security (for some values of security) include:

                                                    1. PGP
                                                    2. SSH
                                                    3. 2FA

                                                    Security should happen automatically whenever possible.

                                                    And indeed, it does. Even over FTP

                                                    Not saying that HTTPS is the perfect way to guarantee secure downloads

                                                    That’s good because HTTPS doesn’t guarantee secure downloads at all. That’s not what HTTPS is designed for.

                                                    You’ve confused TLS (a transport security mechanism) with an application protocol built on top of TLS (HTTPS), and what it does with the act of verifying a download (which it doesn’t). The integrity check in TLS exists for the connection, not the file. It’s a subtle but important difference. If the file is compromised when transferred (e.g. through web of trust, through just being a malicious file) then TLS won’t help you. When integrity is important, that integrity check needs to occur on the thing requiring integrity.

                                                3. 7

                                                  You got it backwards.

                                                  Yeah, some sites still offer FTP downloads, even for software, aka code that you’re gonna execute. So it’s a good thing to create some pressure so they change to a more secure download method.

                                                  1. 9

                                                    Secure against what? Let’s consider the possibilities.

                                                    Compromised server. Transport protocol security is irrelevant in that case. Most (all?) known compromised download incidents are of this type.

                                                    Domain hijacking. In that case nothing prevents attacker from also generating a cert that matches the domain, the user would have to verify the cert visually and know what the correct cert is supposed to be—in practice that attack is undetectable.

                                                    MitM attack that directs you to a wrong server. If it’s possible in your network or you are using a malicious ISP, you are already in trouble.

                                                    I would rather see Chrome stop sending your requests to Google if it thinks it’s not a real hostname. The immense effort required to support FTP drains all their resources and keeps them from making this simple improvement, I guess.

                                                    1. 1

                                                      MitM attack that directs you to a wrong server. If it’s possible in your network or you are using a malicious ISP, you are already in trouble.

                                                      How so? (Assuming you mostly use services that have basic security, aka HTTPS.)

                                                      What you call “malicious ISP” can also be called “open wifi” and it’s a very common way for people to get online.

                                                      1. 1

                                                        The ISP must be sufficiently malicious to know exactly what you are going to download and set up a fake server with modified but plausible-looking versions of the files you want. An attacker with a laptop on an open wifi network doesn’t have the resources to do that.

                                                        Package managers already have signature verification built in, so the attack is limited to manual downloads. Even with the resources to set up fake servers for a wide range of projects, one could wait a long time for the attack to succeed.

                                                1. 2

                                                  Related bug reports:

                                                  1. 14

                                                    Google can’t track you on FTP, and also AMP is not needed there =)

                                                    1. 0

                                                      Pretty much anyone can track you on FTP; it’s an unencrypted protocol.

                                                      1. 5

                                                        Using cookies? I don’t think so. Unencrypted means ISP can track you, yes.

                                                        1. 3

                                                          I don’t think I can, since I (1) don’t work at your or the server’s ISP and (2) I’m not in the neighbourhood. Feel free to prove me wrong. ;)

                                                      1. 2
                                                        # modern configuration
                                                        ssl_protocols TLSv1.3;
                                                        

                                                        Am I the only one that thinks that these tools are really toxic? Folks will just copy-paste all of these things without realising that they’re precluding their users from being able to access the sites. There’s a good reason most real companies (Google included) are still happy to serve you over TLSv1.0. Mozilla markets such configuration as “Old”, with a note that it “should be used only as a last resort”. I guess Google is using a last resort. ¯\_(ツ)_/¯

                                                        1. 6

                                                          But it defaults to “Intermediate” and there are short explanations of each on the radio box. “Modern” does say “[…] and don’t need backward compatibility”.

                                                          1. 3

                                                            Which up-to-date browsers do not support TLS v1.3? Sure, you could run IE7 or FF 3.0, etc, but I’d want to do everything in my power to discourage folks who are running outdated browsers from using them to browse the web, including denying them access to any website(s) I am running.

                                                            Google has different motives: show ads to and collect info from everyone.

                                                            1. 3

                                                              It seems to be a common misconception that the internet’s sole reason for existence is now to deliver content to Firefox and Chrome. While this is perhaps true for some people - and may be true for you - it’s certainly not a base assumption you should operate on. There are still TLS libraries out there that don’t support TLSv1.3 (such as LibreSSL), and thus there are tools which can’t yet use TLSv1.3. There is - as far as I’m aware - little need from a security POV to prefer TLSv1.3 over v1.2 if the server provides a secure configuration. If you want to discourage people from using old browsers, display some dialogue box on your website based on their user agent string or whatever.

                                                              Removing support for TLS versions prior to 1.2 is most certainly a good idea, but removing support for TLSv1.2 is just jumping the gun, especially if you look at the postfix configuration. If you want to enforce TLSv1.3 for your users, fine. But to enforce it when other mailservers try to deliver email is just asking for them to fall back to unencrypted traffic, effectively making the situation even worse.

                                                              On a completely unrelated note: it’s funny that server-side cipher ordering is now seemingly discouraged in intermediate/modern configurations. I guess that’s probably because every supported cipher is deemed “sufficiently secure”, but it’s still a funny detail considering all the tools that will berate you for not forcing server cipher order.

                                                              1. 1

                                                                Thanks for the reminder that some libraries (e.g. libressl) still do not support TLS v1.3. Since practically every browser I use (which extends beyond the chrome/FF combo) supports it, I hadn’t considered libraries like that.

                                                            2. 1

                                                              I was also surprised when I noticed this. I’d used this site before, but back then “Modern” meant only supporting TLS 1.2+, which I think is fitting.

                                                            1. 3

                                                              I’m thinking of redirecting https://cipherli.st/ to the Mozilla generator. Did it once before to the wiki, but that was disliked.

                                                              1. 3

                                                                Ah, I love cipherli.st! Thanks so much for providing it, it’s been a handy reference on several occasions.

                                                                The Mozilla generator is good but cipherli.st is more comprehensive.

                                                                1. 2

                                                                    Quick question for an upcoming project: is there a rather canonical, maintained list of ciphers considered “state of the art” around somewhere that doesn’t come in the form of a webserver config?

                                                                  1. 2

                                                                      No! cipherli.st is more comprehensive than Mozilla’s ssl-config; it includes more services (e.g. dovecot, etc.)

                                                                    1. 2

                                                                      Disliked by whom? Want me to put you in touch with the author of the config generator?

                                                                      1. 1

                                                                        I like cipherli.st! I’d be sad if it just would be a redirect to the Mozilla generator.

                                                                      1. 6

                                                                        I don’t see the value of complaints against SPAs anymore. That ship sailed about a decade ago! The reasons are about as complex as the reasons for the popularity of the web platform itself. It’s not the best solution, but nothing on the web ever is. Instead, it’s providing a particular set of tradeoffs such that many developers prefer it – even while taking user experience into consideration. For example, can anyone suggest how to make an old school web application that works offline?

                                                                        (Although I’ll admit, I’m prone to criticising the web platform myself on occasion. The whole thing, including SPAs, is really a terrible kludge. Nonetheless, I only make SPAs and not old school web applications.)

                                                                        (Oh, and it’s also very useful to distinguish between web sites and web applications, but for some reason these complaints rarely do.)

                                                                        1. 21

                                                                          I don’t see the value of complaints against SPAs anymore.

                                                                          I don’t see the value of this statement specifically. Even though SPAs are a step backward in a multitude of ways, it’s just how the world works now and we should all just accept that the ship has sailed?

                                                                          can anyone suggest how to make an old school web application that works offline?

                                                                          Look, I’m just trying to be able to still use the web on mobile when my connection is spotty. I think it’s a cool party trick that webapps can be cached entirely in service workers (or whatever) in order to make them work offline, but I’m just trying to read one article and then go about my day. But now, I’m sitting here on the metro waiting 20 seconds for a newspaper site to load.

                                                                          Does a content site (like a newspaper or blog) need to work offline? If not, why do people build them with stacks that cause drawbacks in other areas (such as, from TFA, being slower overall).

                                                                          (Oh, and it’s also very useful to distinguish between web sites and web applications, but for some reason these complaints rarely do.)

                                                                          It’s because everything is an SPA now. Newspapers, shopping sites, blogs. I think it’s great that rich applications can be built now – I use a fair share of web applications day to day! Mail client, Trello, couple of others. I’ve been in the “SPA haters’ club” for a few years now, consuming plenty of content to confirm my bias, and I’ve never heard anybody say “I sure wish Trello worked with JS turned off.” I’ve experienced a lot of “why do I have a running service worker for this shopping site?” or “why does this page show up blank with JS disabled and then it turns out to just be a blog post with no images once I turn JS on and reload?”

                                                                          1. 4

                                                                            Well, so you see the value of SPAs, right? Your issue is that content sites use this architecture when they don’t need to, and do it at the expense of UX in some cases. OK, fine, but that’s not the same as saying “SPAs are useless, and we could do anything and everything by loading individual pages from the server”. Well, no, we can’t.

                                                                            So my problem is that the complaints like the OP are usually a vague handwavy appeal to how wonderfully simple and fast it is to load pages from the server, usually combined with pointing fingers at a content site like MDN, and without taking the trouble to address any of the use cases for which that approach doesn’t work.

                                                                            We shouldn’t just accept things as they are, but I think the specific ship that sailed is for complaints about SPAs vs plain old web pages in the context of web applications. There was a point when things could have gone a different way, but that was in 2004-2008, if memory serves. Now it would be far more productive to frame complaints in the modern context, where we have complex applications all over the web, which can’t be turned into simple web pages.

                                                                            I hope this clarifies things.

                                                                            1. 16

                                                                              Your comment appears to be replying to a point I didn’t make, and I’m frustrated by it, so I will reiterate.

                                                                              We’re not talking about Trello and Google Docs. We’re talking about Medium, or any Squarespace or Wix site (which don’t work without JS).

                                                                              There’s somebody like you in the comments of every article like this. “Don’t forget about web applications! You know, the ones that absolutely require client-side dynamism!”

                                                                              Nobody’s forgotten about them. It’s impossible to. They’re everywhere. Your use-case is handled already. But now, everybody wants to build SPAs by default, even when doing so doesn’t provide any benefit. That’s the problem.

                                                                              1. 2

                                                                                You’ve conveyed your frustration perfectly. I understand your point, I agree to a degree, and I think it’s fine to criticise this application of SPAs.

                                                                                I could still suggest some benefits of SPAs even for content sites on the development side, but I don’t want to exasperate you any further. Thanks for the discussion.

                                                                                1. 7

                                                                                  For many many years I’ve blocked cookies by default, and only opened exceptions for sites I wanted to use in ways that required a cookie.

                                                                                  In more recent years I’ve also been blocking local storage by default and opening the occasional exception for a site I want to use that has a justifiable reason for needing it. But a staggering percentage of content-oriented things just flat-out break if you deny local storage, because A) they’re actually massive JS applications and B) nobody involved in their development ever considered this case (while people do seem to at least recognize when you disallow cookies).

                                                                                  For example, the browser I’m in right now does not have an exception for GitLab, so even trying to view a README of a repository in GitLab’s UI will fail with an eternally-spinning “loading” indicator and shows TypeError: localStorage is null in the browser console.

                                                                                  1. 1

                                                                                      I guess you’ve considered it, but for this reason self-destructing cookies (aka Cookie AutoDelete for Firefox) are much better at preventing breakage, with most of the benefits of complete blocking.

                                                                                    Rather than broken sites, as a European I only have to contend with endlessly confirming GDPR notices for sites I’ve already visited (although thankfully there is also a plugin for that!)

                                                                            2. 2

                                                                              Does a content site (like a newspaper or blog) need to work offline?

                                                                              Would e-readers be better if they required an always-on internet connection? I think there’s a lot of value in not needing an internet connection just to read… (Although many offline SPAs are poorly-written, or loaded with slow trackers, ads, fullscreen modals asking for your email address, etc.)

                                                                              1. 3

                                                                                It’d be nice if I got to make the decision myself as to whether I want to read an article from the same source again before they cache their entire backlog for me. (slight hyperbole)

                                                                                Personally I believe that Atom is still the best-designed way of accessing articles offline, and advancements in that system would be much more beneficial than pushing SPAs. Things like encouraging sites to actually put the full article in, rather than just a link to the website I’m trying to avoid.

                                                                            3. 7

                                                                              Every old-school website I’ve ever used works just fine offline. “File” -> “Save As”.

                                                                              1. 2

                                                                                Web application, not site.

                                                                                1. 2

                                                                                  See my sentence above about the distinction between websites and web applications.

                                                                                  1. 5

                                                                                    For example, can anyone suggest how to make an old school web application that works offline

                                                                                    Maybe I’m showing my age here, but an ‘old school web application’ means flash, to me - and those overwhelmingly worked perfectly when you saved the page.

                                                                                    1. 2

                                                                                      So do you think we should go back to the good old days of Flash and Java applets? You’ll probably recall that Flash had a bad track record for security and energy efficiency. This critique by Steve Jobs is still pretty good (even if hypocritical).

                                                                                      I don’t recall that saved .swf files were able to maintain any state either. Were they?

                                                                                      1. 2

                                                                                        So do you think we should go back to the good old days of Flash and Java applets?

                                                                                        I know as well as you do how much of a dumpster fire flash security was. Java applets were… less disastrous, but I never saw one that wasn’t deeply unpleasant to use.

                                                                                        I don’t recall that saved .swf files were able to maintain any state either. Were they?

                                                                                        Very few applications maintain state after you close & reopen them. You could, though - flash applets could work with files on your computer (they had to use the flash builtin filepicker and only got access to files chosen through it).

                                                                                        1. 1

                                                                                          So in comparison to how things were with Flash and Java applets back then, don’t you think SPAs are an improvement? Sure, they might be overused from the user’s point of view, but that’s not the same as saying they can easily be dispensed with by reimplementing with server-side page rendering.

                                                                                          Re state: that also doesn’t seem like a great user experience in comparison to SPAs.

                                                                                          1. 2

                                                                                            You’ve put words in my mouth two comments in a row.

                                                                                            The way you have chosen to communicate does not reflect well upon you, and I’m not interested in engaging further.

                                                                                            1. 1

                                                                                              I’m sorry if it came across that way, that’s not how I meant it. I was just asking questions to understand what you think. The comment re “they can easily be dispensed with” was more an interpretation of the OP, which is what I thought we were talking about.

                                                                                      2. -1

                                                                                        It’s kinda sad that you read “old-school website” and your brain instantly auto-corrects it to “old-school web application”.

                                                                                        EDIT: I missed a message from the thread there.

                                                                                        1. 2

                                                                                          Did… did you read the thread, or just reply to one comment in isolation?

                                                                                          1. 0

                                                                                            Yep, I missed a message in the middle there. Sorry!

                                                                                            1. 1

                                                                                            No worries then :)

                                                                                1. 3

                                                                                  I wonder if PGP is salvageable.

                                                                                  SSL and CA PKI used to be pretty bad. Eventually the spec was cleaned up, CT logs and CAA were added, CA/B forum has shown teeth, some CAs were booted, Let’s Encrypt happened, and clients were forced to upgrade. It’s still not perfect, but it’s not ’90s crypto any more.

                                                                                  1. 3

                                                                                    I don’t really agree that the problems have been solved for SSL and CA PKI. The only thing that has happened for the CA infrastructure is that everything has been centralized now, while PGP stays decentralized.

                                                                                    TLS requires X.509 certificates, which still use ASN.1, which is very much 80s technology. The handling of certificates or the actual encrypted stream is typically done by a library (god forbid you write your own) and those have to be backwards compatible because so many applications use them. OpenSSL is a popular library, and it’s really difficult to work with. Alternatives (such as GnuTLS) seem not to get so much traction. OpenSSL really feels like 90s technology to me.

                                                                                    CT logs and CAA are simply layers on existing infra, adding small amounts of security, at the cost of adding complexity to an already complex system. If I check CT logs, I’m telling Google about the websites I visit (leaking metadata). If I trust CAA, I’m trusting various governments and companies in less-than-ideal political climates to abide by CAA.

                                                                                    CA/B forum and the Mozilla CA Certificate Store are a centralized Web of Trust. Mozilla simply gives you a list of CAs that you should trust. If you were to implement something like this for PGP, simply provide a list of key IDs that your users should set to “Ultimate” trust.

                                                                                    Let’s Encrypt is just one CA, which now functions as the central gate-keeper to publishing your website to the internet. What would happen if lobste.rs suddenly couldn’t get a new certificate from Let’s Encrypt?

                                                                                    1. 3

                                                                                      What would happen if lobste.rs suddenly couldn’t get a new certificate from Let’s Encrypt?

                                                                                      They can always use a different CA; there are a bunch of them. Maybe not as convenient, but other solutions are possible. The problem with PGP is that it is really hard to upgrade to newer versions and deprecate old ones, pretty much in contrast to TLS, which is being updated and removes parts that didn’t work out.

                                                                                      TLS requires X.509 certificates, which still use ASN.1, which is very much 80s technology.

                                                                                      Pretty good technology. Yes, it has gained bloat over the years, but it is still a pretty good idea, and its concept is repeated by other technologies like Protocol Buffers or Cap’n Proto. TBH I think there should be an initiative to provide an ASN.2 which would remove the bloat while keeping the good parts of ASN.1.

                                                                                      1. 1

                                                                                        They can always use a different CA; there are a bunch of them. Maybe not as convenient, but other solutions are possible

                                                                                        I find most CAs I’ve worked with convenient enough; however, only a few of them are free as in beer. Paid certificates typically don’t come cheap. If Let’s Encrypt suddenly were to reject your service, you’d be forced to choose between dropping HTTPS in favor of good ol’ HTTP, or taking out your wallet.

                                                                                        I think there should be an initiative to provide an ASN.2 which would remove the bloat while keeping the good parts of ASN.1.

                                                                                        I think that the same thing can be said about PGP. Both are good enough for their job, but both of them are from the 80s-90s and have become bulky over the years, with different methods of accomplishing the same thing. In both cases, it would be an improvement to make a new version that basically does the same things as the old version, but does away with obscure and largely unused features.

                                                                                      2. 2

                                                                                        That’s an expansion of what I meant by “not perfect”. ASN is horrible, but doesn’t really impact day-to-day usage.

                                                                                        But TLS went from a buggy soup of MD5, RC4 and padding oracles to modern ciphers with forward secrecy. It went from tech almost no sites used (do you remember Gmail was over HTTP?) to 80% world-wide deployment.

                                                                                        So in the same vein, if PGP could switch people from GnuPG to Sequoia, drop the cruftiest hacks and force people to re-generate their keys and re-encrypt their data with this millennium’s crypto, then maybe PGP wouldn’t be so terrible?

                                                                                        1. 1

                                                                                          Sequoia

                                                                                          Do you have a link for that? All I found was the car and the music program

                                                                                          1. 1

                                                                                            https://sequoia-pgp.org

                                                                                            An implementation of PGP in Rust, which also drops some of the oldest cruft.

                                                                                    1. 2

                                                                                      This project seems not very much alive. https://github.com/ariya/phantomjs/issues/15344

                                                                                      1. 1

                                                                                        True. Am basically using it to grab a page’s post-JS-execution HTML source to then pipe into other utilities in bash, which seems to work fine. Will see if it catches fire with more complicated scripts.

                                                                                        There’s also this:

                                                                                      1. 6

                                                                                        The whole article seems to focus on what to do as a business owner. The interesting question is what to do when you have to deal with an asshole, without having any authority. Or worse, when the asshole seems to be favoured by those who do (since assholes appear to outperform the non-assholes).

                                                                                        1. 3

                                                                                            Just report them, and encourage others to do the same. I would always get side-channel requests from people because the sysadmin was an insufferable asshole and I had equal access to the same stuff. I’d tell them the same thing I did: bug management and HR until they do something about it. Worst case scenario, you quit.

                                                                                          1. 3

                                                                                            That’s a very good point about having a similar job with the asshole. All the little favors and questions get asked of you since nobody wants to deal with the asshole, contributing towards the asshole’s perceived performance in the measured metrics.

                                                                                        1. 3

                                                                                          I hope this will give new life to Docker on FreeBSD, but we’d need bhyve support for that to work.

                                                                                          1. 6

                                                                                            Just use look(1).

                                                                                            % time look $(echo -n bla | sha1 | tr \[:lower:\] \[:upper:\]) pwned-passwords-1.0.txt
                                                                                            FFA6706FF2127A749973072756F83C532E43ED02
                                                                                            look $(echo -n bla | sha1 | tr \[:lower:\] \[:upper:\])   0.00s user 0.00s system 85% cpu 0.001 total
                                                                                            

                                                                                            It uses binary search, so it’s fast (just make sure you have the file sorted by hash).

                                                                                            It’s a bit slower the first time around, but that might just be because of my storage.

                                                                                            % time look 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8 pwned-passwords-1.0.txt
                                                                                            5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8
                                                                                            look 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8 pwned-passwords-1.0.txt  0.00s user 0.00s system 0% cpu 0.574 total
                                                                                            
                                                                                            % time look 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8 pwned-passwords-1.0.txt
                                                                                            5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8
                                                                                            look 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8 pwned-passwords-1.0.txt  0.00s user 0.00s system 75% cpu 0.002 total
                                                                                            
                                                                                            1. 3

                                                                                              Nice! The joys of Unix: you are always bound to find a neat new utility hiding away that solves your problem! I was not aware of look! Much simpler than my solution, and mine isn’t very complicated either. Thanks for sharing!