Threads for nogweii

    1. 12

      This is really cool, and a great way to actually justify those crazy one-liners. I have a bunch of them in my own dotfiles I should break apart and explain, if for nothing else, to remember what I have!


      One big thing I want to point out: this only works for GitHub, and nothing else. What about GitLab? Or aliases for modules on different domains (like all of the Kubernetes libraries being hosted on GitHub while the official module path is k8s.io/kubernetes)? The blog post / title suggests that it goes through all of the dependencies, but then it specifically filters for github.com only.
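
      For what it’s worth, here’s a minimal sketch (not the post’s actual script) of how one could enumerate dependency hosts without hard-coding github.com. It assumes the post is walking a Go module graph and that a Go toolchain is available; the grouping logic is my own illustration:

      ```ts
      // Sketch: group a Go project's module dependencies by hosting domain,
      // instead of assuming everything lives on github.com.
      // Assumes `go` is on PATH and the script runs inside a Go module.
      import { execSync } from "node:child_process";

      const output = execSync("go list -m all", { encoding: "utf8" });

      const hosts = new Map<string, number>();
      for (const line of output.trim().split("\n").slice(1)) { // first line is the main module
        const modulePath = line.split(" ")[0]; // remaining lines look like "module version"
        const host = modulePath.split("/")[0]; // first path element is the hosting domain
        hosts.set(host, (hosts.get(host) ?? 0) + 1);
      }

      for (const [host, count] of [...hosts].sort((a, b) => b[1] - a[1])) {
        console.log(`${count}\t${host}`); // e.g. github.com, gitlab.com, k8s.io, golang.org, …
      }
      ```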

      This particular concern of mine isn’t super important for this blog post, but it does implicitly reinforce a sentiment I’ve seen elsewhere on the internet: GitHub is the only place that code lives. Again, I don’t want to distract too much from the original post, but I feel like I need to push back on that sentiment so that we don’t forget that code lives elsewhere too.

      1. 3

        Maybe GitHub is the only place where unmaintained code lives.

      2. 2

        Instead the pressure comes from new ISPs, small hosting companies, new companies trying to live in datacenters, aka people nobody important cares about.

        Also home servers are slightly cheaper and easier to set up. IPv6 would reduce the barriers of entry to self-hosting, but there’s no big central figure that specifically cares about that.

      3. 1

        I think you’re right, but these are secondary, emergent, “network” (sorry) effects, consequences of marketing and commercial details, and not directly related to why IPv6 failed.

        They are true but they are effects and not causes.

        The reason it failed, this blog essay is arguing, is that IPv6 needed a translation/mapping layer and it didn’t have one. It was bigger and more complicated, and while it simplified some minor details, it did not provide a necessary abstraction layer and so was not able to do the big important simplification that was required.

        1. 2

          I think you’re right, but these are secondary, emergent, “network” (sorry) effects, consequences of marketing and commercial details, and not directly related to why IPv6 failed.

          How can they possibly be secondary? Apenwarr is the only person in the world that I’ve seen who seems to think there is no value in IPv6 if it doesn’t solve the issues he has. Everyone else cares about having a much bigger address space. Everyone who wants IPv6 wants more address space. Those who are slow at adopting it are the ones who either have no need for more addresses, or who straight up profit from address scarcity. Those tangible financial incentives seem more real than what Apenwarr talked about.

          1. 1

            How can they possibly be secondary?

            Did I not lay out my reasoning clearly enough? Genuine question.

            Apenwarr is the only person in the world that I’ve seen who seems to think there is no value in IPv6 if it doesn’t solve the issues he has

            If that were the case, then I think that IPv6 penetration would be above 45%. And yet, here we are, 24 years after it was introduced, and it still is not the dominant form of IP.

            Everyone else cares about […]

            Not enough for the majority of IP users to deploy it, apparently. So perhaps that is not enough.

        2. 1

          Correct me if I’m wrong, but aren’t computer networks where the term “network effect” comes from?

          1. 2

            The concept was used by Bell for phone networks first, technically, although I don’t know whether they specifically used the term “network effect” for it.

          2. 1

            At least according to Wikipedia, it comes from Bell at the turn of the 20th century, in an attempt to explain why they had a monopoly even though their patents on the telephone expired.

            https://en.wikipedia.org/wiki/Network_effect

    2. 1

      Don’t mean to be contrarian, but isn’t node kinda the wrong tool for the shell command job?

      Seems like Go, or anything that compiles to a static binary without dependencies, would be more suited for the task.

      1. 2

        For composing multiple other commands, I disagree on the use of Go or other compiled languages. Languages without a compilation step are best for orchestrating and combining other programs, though the experience does suffer compared to classic shell scripts.

        For implementing the underlying commands you’re combining in a script, sure, Go and other compiled languages are excellent options.

        1. 2

          If the compilation step is fast enough, I don’t necessarily see this as a problem. The ability to deploy a static binary that will execute the whole process, without needing to install anything else (hello curl, jq, etc…), is attractive.

          But that is a completely different use case from what I had in mind.

      2. 2

        Well, shell would be the right tool for the shell command job :)

        As I said in another comment, it is a building block for another project: a CI/CD solution where you code your pipeline in TypeScript, and not in a custom YAML DSL monstrosity. Like Jenkins and Groovy, but without Jenkins and Groovy.

        As is, this library targets people who are already using Node in their stack. If you don’t have Node, then you’re probably right: Go, Rust, Zig, … might be a better solution.

        To each their own taste.

        1. 2

          I think that none of the current languages solve this set of “glue multiple shell commands together” problems well enough.

          I agree that you probably want to be able to run in “interpreted script” mode, but being able to generate a quick reusable binary is nice too. (And to be fair, with enough work on tooling, as mentioned above, the two can be made to look the same.)

          The problem is more that shell languages are, uh… not really user friendly. And most higher-level scripting languages like Python, Node, or Ruby don’t really handle dependencies and safety the way you would want at this level.

          Anyway: great library, I am happy to see these ideas applied in more settings. They’re a great inspiration and reference for me when I explore the problem space. So thank you so much.

        2. 1

          not in a custom YAML DSL monstrosity

          Yes, please!

          To each their own taste.

          Agreed! My initial impression was that Node was quite a heavy dependency for a shell command tool, but thank you for the extra context.

    3. 3

      This is really handy. I also like zx for writing glue scripts, but it does pull in a ton of dependencies. (Common ones that I’d want when making a CLI thing, but still.)
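
      For anyone who hasn’t used it, a typical zx glue script looks roughly like this (zx’s $ template tag runs a command, escapes interpolated values, and returns its output; the git commands are just an illustration):

      ```ts
      #!/usr/bin/env zx
      // Rough zx example: top-level await works because zx scripts are ESM.
      import { $ } from "zx";

      const branch = (await $`git branch --show-current`).stdout.trim();
      const dirty = (await $`git status --porcelain`).stdout.trim() !== "";

      if (dirty) {
        console.log(`refusing to push: ${branch} has uncommitted changes`);
        process.exit(1);
      }

      await $`git push origin ${branch}`;
      ```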

      1. 1

        Thanks for the feedback. zx might be more powerful, I haven’t checked all the edge cases for my library.

        For the why of this library (I might write an article about this on my paywalled blog on Medium): it’s in fact the first building block of a larger project, which I’m not yet sure whether I’ll start or not.

        I’ve always hated modern CI/CD relying on YAML with programming-language constructs (loops, conditions, modules, …). It is a terrible experience. I feel like we can have something better for writing the code of our pipelines, similar to Groovy for Jenkins but without the bloat (a very opinionated and subjective statement).

        But the most important element of a CI/CD pipeline is the ability to run shell commands, so before even starting this big and ambitious project, let’s start small with a shell-out library :)
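
        Purely to illustrate the idea (the library’s actual API isn’t shown in this thread, so the sh helper below is hypothetical and just wraps Node’s child_process), a pipeline written in TypeScript instead of YAML might look something like this:

        ```ts
        import { execSync } from "node:child_process";

        // Hypothetical stand-in for whatever shell-out primitive the library provides;
        // execSync throws on a non-zero exit, so a failing step fails the pipeline.
        const sh = (cmd: string) => execSync(cmd, { stdio: "inherit" });

        // Ordinary control flow replaces YAML-level `if:` blocks and templating.
        sh("npm ci");
        sh("npm test");

        if (process.env.GIT_BRANCH === "main") {
          for (const target of ["linux", "darwin"]) {
            sh(`npm run build -- --target=${target}`);
          }
          sh("npm run deploy");
        }
        ```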

    4. 4

      I’ve just done a test on macOS and Safari came in the lead, then Chrome, then Firefox.

      • Safari: 317
      • Chrome: 307
      • Firefox: 266

      Done on Speedometer. Am I missing something here?

      1. 10

        Two big differences: this is running on Windows, and I think it’s running the latest commit from mozilla-central. If you’re testing release versions, you won’t see these performance improvements for months.

        1. 7

          Two months, to be precise.

        2. 1

          Gotcha, makes sense. Thanks :)

    5. 1

      I mean… if not that model, then what?

      1. 1

        As the author mentioned, if you’re writing about a tool, the model works. But that’s not the only kind of prose in the world, of course.

        I think part of the answer will always be “do it yourself”, as I’m not aware of any truly universal documentation scheme. This might be a good founding principle, like the college essay structure comparison, but it will almost certainly need to be tweaked or substituted on a case-by-case basis.

    6. 18

      The unfortunate thing with these sorts of malware detectors is that they operate with an expected false-positive rate. The industry generally thinks that’s fine, because it’s better to hit more malware than to miss any.

      That only works if the malware detector authors are receptive to feedback, though.

      1. 7

        Basically every new release of Rust on Windows is detected as malware by one vendor or another.

        I guess it probably doesn’t help that the binaries aren’t yet signed.

        1. 9

          CrowdStrike Falcon on macOS and SentinelOne on Windows have cost me so much wasted time as an employee of companies that use them. Falcon routinely kills make, autoconf, etc. SentinelOne does the same thing when using msys2 or cygwin on Windows.

          At least SentinelOne tells me; Falcon tries its best to leave zero useful information on the host. When processes start randomly being terminated, it takes a bit of effort to find out what the hell is actually happening. Often I only realized it was Falcon after some poor desktop security tech got assigned a ticket and reached out to me, confused by the crazy complex command lines being sent through the various exec calls.

          Because of the high frequency with which I encounter the issue, if something randomly fails in a way I don’t expect, I immediately suspect Falcon.

        2. 4

          My understanding is that signed binaries don’t help - if a binary is rarely run (because it’s just been released, or it’s just not a mainstream tool) there’s a good chance it’ll be detected as malicious no matter what.

          1. 5

            I believe it depends on the kind of signature. Last time I checked, companies can buy more expensive certificates which are privileged insofar as the binaries don’t need to be run on a lot of machines to be considered safe.

      2. 5

        better to hit more malware than to miss any.

        And still miss a lot of malware. ;)

        I wonder what the best way would be to find out the real false-positive rate.

      3. 3

        I’ve worked at a company where the IT was so terrible that their actions bordered on sabotage. They caused more damage and outages than actual hackers would. The anti-virus would delete our toolchains and fill the disk with ominous files until there was no space left. Luckily, they left our Linux machines alone, so we put everything we cared about on Linux (without GUI) and hoped that their lack of know-how would prevent them from messing with those machines. It worked.

    7. 1

      The link doesn’t work for me, with Firefox on Android. I keep getting redirected to the main page of the repo. Did they disable the wiki?

    8. 2

      I wonder if using Terraform’s CDK for these submodules would help. As I understand it, you can still use a CDK module from an HCL parent. I’ve been trying to find time to explore this, as using count for optional resources has always been a terribly annoying hack.
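
      If I understand CDKTF correctly, the appeal for optional resources is that an ordinary conditional replaces the count = var.enabled ? 1 : 0 trick. Here is a rough TypeScript sketch; the AWS provider imports and resource class are illustrative and depend on the generated or prebuilt provider package you actually use:

      ```ts
      import { Construct } from "constructs";
      import { App, TerraformStack } from "cdktf";
      // Illustrative imports: the exact paths/classes come from the provider
      // package you generate or install (a prebuilt AWS provider is assumed here).
      import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
      import { S3Bucket } from "@cdktf/provider-aws/lib/s3-bucket";

      class OptionalResourceStack extends TerraformStack {
        constructor(scope: Construct, id: string, createLogBucket: boolean) {
          super(scope, id);
          new AwsProvider(this, "aws", { region: "us-east-1" });

          // In HCL this would be the `count = var.create_log_bucket ? 1 : 0` hack;
          // here it is just an if statement.
          if (createLogBucket) {
            new S3Bucket(this, "logs", { bucketPrefix: "ci-logs-" });
          }
        }
      }

      const app = new App();
      new OptionalResourceStack(app, "example", true);
      app.synth();
      ```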

    9. 5

      Does anyone know how this will impact the TLDs for which Google runs the registry? According to this FAQ Google makes 17 available and owns 6 more, “among others”.

      According to that same FAQ the registry is a separate “wholly-owned subsidiary” (as required by ICANN). So I guess it’s fine? But it would still be nice to have an authoritative answer. The support center explainer about the sale says nothing about it.

      1. 3

        Given that Charleston Road Registry is a distinct and separate entity from Google Domains, it’s unrelated to this sale. So these gTLDs that Google manages (including .google!) are unaffected.

    10. 21

      Woah. This is incredibly sketchy, and I think it warrants an investigation by Mozilla and Chrome. My knee-jerk reaction is complete revocation of the intermediate cert that HICA is using and a full explanation.

      How does one bring this to Mozilla’s and Chrome’s root programs’ attention?

      1. 19

        I am not involved with the Mozilla root program, but I made sure the team is aware.

      2. 2

        https://wiki.mozilla.org/CA has an ‘information for the public’ section with a link to ‘report an incident to Mozilla’, which takes you to a Bugzilla project.

    11. 4

      I would actually posit that this problem exists and is potentially more insidious in managers and senior leaders who do code but aren’t actively involved in developing the solution at hand. In those cases they do know what the wave-toppy implementation could be, but they will underestimate the actual difficulty or the unknown unknowns that crop up in the physical implementation. This can lead to misalignment and to not understanding why things aren’t going as smoothly as the wave-top version suggested.

      1. 6

        Aside: Wave top? I don’t remember seeing the expression before, what does it mean exactly? Where does it originate? If you don’t mind me asking.

        1. 2

          I’m actually not sure where it originated. It means the high-level details, or a bird’s-eye view of something.

          In my comment above it would be like the senior engineer saying, “it just requires a few API calls, zipping the results, and then mapping to our domain value; how hard could it be?” While at a very high level that may be true, it glosses over any details of the actual problem being solved and how an implementation to do that would have to be built.

          It’s difficult without an actual example, but I’ve seen this pattern often enough that I can’t be the only one to have noticed it.

          1. 3

            that is the beginning and bane of all of my personal projects, ha. “How hard could it be?” - well, shit.

            I’ve definitely applied that in professional contexts, though with more success than failure. But I can see myself getting it increasingly wrong. I was recently (~8 months ago) promoted to a Staff title and don’t get to program as much as I used to. Managers get even less time, but still want to be part of the decision-making process.

            1. 6

              OTOH, I witnessed a friend of mine tearing a government agency a new one a couple of years back, when he was still a member of Parliament.

              The agency was supposed to implement an online solution for paying the highway fee. You enter the plate number, pay with a credit card, and for a given amount of time the police have plate.paid = true somewhere in a shared table and won’t bother you.

              They spent an inordinate amount of money on it. Something like 20 million USD for at most 10 million users who buy at most one pass per year. And they failed to deliver.

              In the discussion surrounding it, with people advocating for the agency, my friend pointed out that they had “failed to implement an eshop where you don’t even have to deliver anything”. “A small software company would probably ask for less than 100k USD. I have personally worked on such shops. What are you even saying?”

              So… keep your skepticism. Don’t be too handwavy, but when it feels like something doesn’t add up, someone might very well be bullshitting you.

    12. 3

      I appreciate the honesty involved with the possible future of the project:

      Future work/contributing

      • I’m not going to be working on/maintaining vmdiff for at least 12 months, maybe ever
      • I’d love for someone to steal this genius idea, either forking the prototype, or making their own

    13. 39

      I don’t have a solution for this, but suggesting GitHub as an alternative to now-hostile Docker seems to move the issue from one silo to the next.

      1. 7

        Decentralization is not going to happen, at least not as long as the decentralization is being pushed by the kinds of people who comment in favor of decentralization on tech forums.

        Decentralization means that each producer of a base image chooses their own way of distributing it. So the base image for your “compile” container might be someone who insists on IPFS as the one true way to decentralize, while the base image for your “deploy and run” container is someone who insists on BitTorrent, and the base image for your other service is someone who insists on a self-hosted instance of some registry software, and…

        Well, who has time to look up and keep track of all that stuff and wire up all the unique pipelines for all the different options?

        So people are going to centralize, again, likely on GitHub. At best they’ll start having to specify the registry/namespace and it’ll be a mix of most things on GitHub and a few in other registries like Quay or an Amazon-run public registry.

        This is the same reason why git, despite being a DVCS, rapidly centralized anyway. Having to keep track of dozens of different self-hosted “decentralized” instances and maintain whatever account/credentials/mailing list subscriptions are needed to interact with them is an absolute pain. Only needing to have a single GitHub account (or at most a GitHub account and a GitLab account, though I don’t recall the last time I needed to use my GitLab account) is such a vast user-experience improvement that people happily take on the risks of centralization.

        Maybe one day someone will come along and build a nice, simple, unified user interface that covers up the complexity of a truly decentralized system, and then we’ll all switch to it. But none of the people who care about decentralization on tech forums today seem to have the ability to build one and most don’t seem to care about the lack of one. So “true” decentralized services will mostly go on being a thing that only a small circle of people on tech forums use.

        (even Mastodon, which has gained a lot of use lately, suffers from the “pick a server” onboarding issue, and many of the people coming from Twitter have just picked the biggest/best-known Mastodon server and called it a day, thus effectively re-centralizing even though the underlying protocol and software are decentralized)

        1. 4

          Decentralization means that each producer of a base image chooses their own way of distributing it.

          I imagine that image producers could also agree on one distribution mechanism that doesn’t have to rely on a handful of centralized services. It doesn’t have to be a mix of incompatible file transfer protocols either; that would be really impractical.

          This is the same reason why git, despite being a DVCS, rapidly centralized anyway.

          The main reason was (is) probably convenience, yes, but I think that Git has a different story: I may be wrong, but I don’t think that Docker was ever about decentralizing anything. I would rather compare Git and GitHub’s relation to that of SMTP and Gmail.

          Maybe one day someone will come along and build a nice, simple, unified user interface that covers up the complexity of a truly decentralized system, and then we’ll all switch to it.

          Maybe; that would be convenient. I may be a bit tired, or reading too much into your response, but I feel that you’re annoyed when someone points out that more centralization isn’t the best solution to centralization issues.

          I fear that, because Docker sold us the idea that there’s only their own Dockerfile format, building on images that must be hosted on Docker Hub, we didn’t think about alternatives – well, until they added “and now you must pay to keep using things that we offered for free”. Let’s not discard all discussions on the topic of decentralization too quickly, as we could improve on Docker, and we need more ideas.

          1. 5

            I imagine that image producers could also agree on one distribution mechanism that doesn’t have to rely on a handful of centralized services. It doesn’t have to be a mix of incompatible file transfer protocols either; that would be really impractical.

            In the various threads about Docker, people are already proposing all sorts of incompatible transfer protocols and hosting mechanisms, and displaying no interest in cooperating with each other on developing a unified standard.

            The best I think we can hope for is that we get a duopoly of popular container registries, so that tooling has to accommodate the idea that there is no single “default” (as happens currently with Docker treating their own registry as the default). But I think it’s more likely that network effects and cohesiveness of user experience will push most people onto a single place, likely GitHub.

            The main reason was (is) probably convenience yes, but I think that Git has a different story: I may be wrong, but I don’t think that Docker was about decentralizing anything, ever.

            My point was to say “look, this thing that was explicitly designed to be decentralized, and is in a category of tools that literally have the word ‘decentralized’ in the name, still ended up centralized, that does not give high confidence that trying to decentralize Docker registries, which were not even designed with decentralization in mind, will succeed in a meaningful way”.

            Let’s not discard all discussions on the topic of decentralization too quickly, as we could improve on Docker, and we need more ideas.

            I will just be brutally honest here: the success rate of truly “decentralized” systems/services in the real world is incredibly low. Partly this is because they are primarily of interest to tech people who are willing to put up with a tool that is metaphorically covered in poisoned razor blades if it suits some theoretical ideal they have in mind, and as a result the user experience of decentralized systems/services tends to be absolutely abysmal. Partly this is because social factors end up concentrating and centralizing usage anyway, and this is a problem that is hard/impossible to solve at the technical level, but most people who claim to want decentralized systems are only operating on the technical design.

            So I do discard discussions on decentralization, and quickly. Perhaps this time someone really will come up with a decentralized solution that gets mass adoption and manages to stay de facto decentralized even after the mass adoption occurs. But to me that statement is like “I should buy a lottery ticket, because perhaps this time I will win”.

            1. 1

              Well, what can I say. These are some strong opinions, probably soured from various experiences. I’m not trying to convince you in particular, but do hope that we can aim higher than a duopoly. Thanks for the chat. :)

        2. 2

          This is the same reason why git, despite being a DVCS, rapidly centralized anyway. Having to keep track of dozens of different self-hosted “decentralized” instances and maintain whatever account/credentials/mailing list subscriptions are needed to interact with them is an absolute pain

          GitHub has benefits from centralisation because, as a publisher of an open-source project, I want it to be easy for people to file bugs and contribute code. Most people who might do this have a GitHub account and so anything either hosted on GitHub or hosted somewhere with sign-in with GitHub makes this easy.

          I’m not sure that this applies to container images. People contributing to the code that goes into a container or raising issues against that code will still go to GitHub (or similar) not DockerHub. People interact with DockerHub largely via {docker,buildah} pull, or FROM lines in Dockerfiles. These things just take a registry name and path. As a consumer of your image, it makes absolutely no difference to me what the bit before the first / in your image name is. If it’s docker.io, quay.io, azurecr.io, or whatever, the tooling works in precisely the same way.

          The only place where it makes a difference is in private registries, where ideally I want to be able to log in with some credentials that I have (and, even more ideally, I want it to be easy to grab the image in CI).

          I see container registries as having more in common with web sites than code forges. There are some incentives towards centralisation (lower costs, outsourced management), but they’re not driven by network effects. I might host my web site using GitHub Pages so that GitHub gets to pay for bandwidth, but visitors find it via search engines and don’t care where it’s hosted, especially if I use my own domain for it.

      2. 2

        Indeed. You need a place that offers large amounts of storage and bandwidth for low cost, if not entirely free. And bandwidth has had a ridiculous premium for a very long time, which makes it very hard to find such a service.

        You could find VPS providers at a reasonable rate for both of these, but now you’re managing a server on top of that. I’m not opposed to that effort, but that is not a common sentiment. 😅

        1. 16

          Time for a project that combines Docker with Bittorrent.

        2. 1

          A shared redirector should need less space/bandwidth than actual independent hosting, but backend-hopping would become easier… And the projects themselves (who don’t want to annoy the backends too much) don’t even need to acknowledge it aloud.

    14. 13

      Or if it is https but a connection attempt to the endpoint fails with a TLS error

      This I agree with rejecting. Broken links are broken links, be it a 404 or a TLS error.

      If the link is http

      This I disagree with, wholeheartedly. Even with my history working at ISRG on Let’s Encrypt, we still found people who did not want to have an HTTPS server. Period. To forcefully ignore that segment of the internet would be foolhardy in my opinion.


      Perhaps an alternative could be “automatically replacing HTTP links with HTTPS if it succeeds”. If someone pastes an HTTP link to a site that supports HTTPS, automatically upgrading the link would be nice.
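
      A minimal sketch of that upgrade check, as a hypothetical helper using Node 18+’s global fetch: probe the https:// variant and keep the original URL if anything goes wrong.

      ```ts
      // Hypothetical helper: probe the https:// variant of a submitted http:// URL
      // and return the upgraded URL only if it actually answers.
      async function tryHttpsUpgrade(url: string, timeoutMs = 5000): Promise<string> {
        if (!url.startsWith("http://")) return url;
        const httpsUrl = "https://" + url.slice("http://".length);
        try {
          const res = await fetch(httpsUrl, {
            method: "HEAD",
            redirect: "follow",
            signal: AbortSignal.timeout(timeoutMs),
          });
          return res.ok ? httpsUrl : url; // non-2xx: keep the link as submitted
        } catch {
          return url; // TLS error, refused connection, timeout, … keep http
        }
      }
      ```

      Of course, a probe like this only shows that something answers over HTTPS, not that it serves the same page.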

      1. 2

        we still found people who did not want to have an HTTPS server. Period.

        That’s bizarre and inexplicable, but to each their own.

        To forcefully ignore that segment of the internet would be foolhardy in my opinion.

        Lobsters has no obligation to cater to them and their bizarre insecure decisions either. I am not proposing to force them off the internet, just to not give them link juice from here.

        EDIT: I’m not even proposing anything that wild here, Chrome literally already does this, e.g. go to the recent submission https://lobste.rs/s/7lpwis/lisa_source_code_understanding_clascal and click through to the link. Chrome shows an error:

        The connection to eschatologist.net is not secure. You are seeing this warning because this site does not support HTTPS. Learn more

        And they heavily encourage you to not go to that page anyway. At which point most reasonable people should turn back.

        1. 9

          we still found people who did not want to have an HTTPS server. Period.

          That’s bizarre and inexplicable, but to each their own.

          What’s inexplicable about not wanting to manage the extra security risk for the server?

          And, if we care about client security, we should have a tag page-requires-js with a penalty like rant

          1. 2

            What’s inexplicable about not wanting to manage the extra security risk

            Do you really consider setting up a basic TLS termination reverse proxy more of a security risk than serving insecure web pages? In that case you are in disagreement with more than half the web, and I don’t see any point in discussing further.

            1. 7

              For the server, a basic TLS termination server is surely more risk. Heartbleed-class things are less likely with an HTTP server written in a memory-safe language twenty-five years ago and getting only bugfixes since then, not huge mandatory changes (mandatory because TLS versions need to change).

              Actually, from a perspective of websites as a population (not from the point of view of configuring a single one), an average HTTP website submitted to Lobste.rs is safer to read than an average HTTPS website. Because an HTTP site will almost surely be old or at least old-style enough to be readable without enabling scripts and without even enabling images (pure-HTML attacks are not that widespread even when someone bothers to intercept), and an HTTPS website has a non-negligible chance of making text Javascript-only and serving Google ads (which are known to let exploits slip from time to time).

              1. 1

                Interesting, I’ve never heard anyone argue with a straight face before that HTTP sites are actually more secure than HTTPS sites. This is a new one to me, and I’m sure to everyone who has been pushing for HTTPS for more than a decade now. If all the existing arguments for a secure web won’t convince you, then certainly neither will I.

                1. 1

                  They are more secure for the server side, and if Heartbleed has not convinced you that TLS adds risks for the server…

                  1. 1

                    I mean, this is like conducting a survey to see what 10 dentists think of your toothpaste, finding out that only 3 of them recommend it, then claiming that ‘3 out of 5 dentists recommend our toothpaste’. If you disregard all the other security benefits and look narrowly at only the buffer overflow issue of C-based TLS systems, then sure, you can claim that it’s unsafe. You’d also have to ignore options like the Caddy web server, which is written in Go and doesn’t suffer from C’s memory unsafety issues: https://caddyserver.com/

                    Written in Go, Caddy offers greater memory safety than servers written in C. A hardened TLS stack powered by the Go standard library serves a significant portion of all Internet traffic.

        2. 5

          Lobsters has no obligation to cater to them and their bizarre insecure decisions either

          Why is it bizarre? Why not support HTTP links? Pushing folks one way or another will create stubbornness.

          Another way of asking it: Why are unencrypted sites not worthy of “link juice” from Lobsters or anywhere else? You seem to have a philosophy about the value of HTTPS that is slightly incompatible with other folks’. I bet you & I broadly agree on many things related to HTTP vs HTTPS, but this aspect is something I don’t understand yet.

          1. 2

            Why is it bizarre?

            What else would you call actively keeping an open vulnerability in the way your website works?

            Why not support HTTP links?

            Because in the modern web we should want to encourage security and privacy as a first-class requirement instead of an afterthought? Why does Chrome heavily discourage us from visiting http-only links?

            Pushing folks one way or another will create stubbornness.

            There’s nothing we can do about people who insist on being insecure and unsafe; we just have to move past them.

            Another way of asking it: Why are unencrypted sites not worthy of “link juice” from Lobsters

            Firstly, because modern browsers will heavily discourage you from visiting those links anyway. So by allowing these submissions we are basically saying ‘We know this is insecure but we don’t care, it’s up to you’. Secondly, Lobsters already filters out content–just check the moderation log. Tons of stories get rejected as off-topic, spam, or scams. If Lobsters is already filtering out content that can potentially annoy or harm its users, automatically filtering out insecure sites is a simple and reasonable step.

            or anywhere else?

            I didn’t say ‘anywhere else’; I am speaking only about Lobsters here. While the same argument may apply to other cases as well, I would judge that on a case-by-case basis rather than make a blanket judgment.

            1. 2

              Why is it bizarre?

              What else would you call actively keeping an open vulnerability in the way your website works?

              I suppose it’s possible that some people who have websites actively choose not to use HTTPS, such as by replying “please don’t” to an email message from a managed-hosting provider saying “Your website will be upgraded automatically to HTTPS unless you opt out within the next 30 days.”

              Still, I suspect it’s more common that people whose websites don’t support HTTPS never actively chose not to support it: maybe they set the website up in the 1990s or otherwise before Let’s Encrypt; maybe they don’t understand what HTTPS is or why it would be useful.

              Now, I imagine a person who has a website that would be linked from Lobsters is more likely to know what HTTPS is and value it, but they might have a website set up before Let’s Encrypt (or before they understood HTTPS) that they don’t actively maintain; they might have forgotten the website exists; they might be missing or dead.

              1. 4

                Or maybe they want their website to be cacheable for people on slow connections.

              2. 2

                I was replying to a person who said that some people actively refused to set up HTTPS on their sites.

                In the case that a site doesn’t have HTTPS because it’s not maintained–well, the security risk speaks for itself. It’s an unmaintained site, could be taken over by all sorts of malware.

                In the case that the site creator actively refuses to use HTTPS–well, the security risk speaks for itself again. The creator thinks they know better than security practices that have been the norm for more than a decade. You can tell where that will lead.

                1. 2

                  I was replying to a person who said that some people actively refused

                  Ah, yes, I had forgotten that context by the time I wrote my comment, for which mistake I apologize.

                  It’s an unmaintained site, could be taken over by all sorts of malware.

                  I suppose it’s true that the httpd or OS could have a vulnerability that could allow overwriting the website content. I wonder how commonly such attacks succeed in practice for static websites. (On the other hand, if it’s an unmaintained WordPress instance….)

                  In the case that the site creator actively refuses to use HTTPS–well, the security risk speaks for itself again. The creator thinks they know better than security practices that have been the norm for more than a decade. You can tell where that will lead.

                  I don’t think it logically follows from “the site creator actively refuses to use HTTPS” that “[t]he creator thinks they know better”. The creator could accept that HTTPS would be an improvement but still decide that they lack the competency and/or time to support it.

        3. 2

          And yet, I’ll be the judge of whether I go to it or not. If you’re really so paranoid about this, maybe show a warning saying that no HTTPS version is available. Blocking legitimate sites outright is pretty meh.

      2. 2

        Or if it is https but a connection attempt to the endpoint fails with a TLS error

        … Broken links are broken links, be it a 404 or a TLS error.

        TLS error links aren’t the same kind of broken links as 404 links. Generally, in the event of a TLS error, a client that doesn’t care about security can ignore the error [*] and view the content. On the other hand, generally, in the event of a 404, a client can’t view the content no matter what—the server just isn’t showing the content at all.

        So, if the grounds for rejecting TLS error links is based solely on this particular argument (i.e. that they’re the same as 404 links), then I don’t agree with rejecting TLS error links.

        I agree with the rest of the comment: plain HTTP links should not be rejected.

        [*] by clicking through the security warning in web browsers, by using the -k flag in curl, etc.

    15. 12

      Another vote against. I’ve seen at least one site that served different content on 443 and 80. How do you know the https connection is the right one?

      1. 4

        Huh, I haven’t actually encountered any sites that were serving different content on 443 and 80, but I always theorized it could happen. In my comment I suggested a naive approach of always replacing HTTP links with HTTPS, but that idea runs straight into this.

        Perhaps the submission form could make an async call to Lobsters’ backend when submitting a story to validate various things and present an option to the submitter, i.e. something like “Hey, before you submit this story, we noticed it also worked on HTTPS. <Click here to update the URL.>”
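
        As a rough sketch of that backend check (a hypothetical helper, standard Node APIs only): fetch both variants and only offer the upgrade when the bodies match, which also guards against the different-content-on-443 case above, although dynamic pages would often fail an exact comparison.

        ```ts
        import { createHash } from "node:crypto";

        // Hash a URL's response body; null means "couldn't fetch it cleanly".
        const digest = async (url: string): Promise<string | null> => {
          try {
            const res = await fetch(url, { redirect: "follow", signal: AbortSignal.timeout(5000) });
            if (!res.ok) return null;
            return createHash("sha256").update(Buffer.from(await res.arrayBuffer())).digest("hex");
          } catch {
            return null;
          }
        };

        // Suggest the https:// URL only when it serves byte-identical content.
        async function suggestHttpsUpgrade(url: string): Promise<string | null> {
          if (!url.startsWith("http://")) return null;
          const httpsUrl = "https://" + url.slice("http://".length);
          const [a, b] = await Promise.all([digest(url), digest(httpsUrl)]);
          return a !== null && a === b ? httpsUrl : null; // null: don't prompt the submitter
        }
        ```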

      2. 4

        How can you tell the difference between a site serving different content on ports 443 and 80 and a MITM attack?

        1. 4

          Easily (in some specific cases): 443 content is the hosting server’s global homepage (apparently the SNI was not configured when it became an option), and 80 content is the content that the creator sometimes references in detail over other channels.

      3. 3

        Because at least you know that the https content is very, very unlikely to have been tampered with by a third party. With the http content, you have literally zero guarantees.

        1. 1

          If content being tampered with is your problem, I’m looking forward to your suggestion to reject sites that use third-party JavaScript, or that use Cloudflare.

          I trust my ISP not to tamper with my traffic, even if it’s HTTP. I don’t trust Cloudflare not to tamper with content, even if it’s served from (their) HTTPS-enabled web server.

          1. 1

            You may trust your ISP, but do you trust every endpoint on the internet that routes your traffic between your computer and the remote host? I’m just curious, the security, validity, and other reasons for using HTTPS on the internet have been well known and publicized for more than a decade now. Are you seriously arguing that all of these arguments are invalid and HTTP is perfectly OK?

            1. 3

              but do you trust every endpoint on the internet that routes your traffic between your computer and the remote host?

              No, I don’t. But since I cannot verify the TLS connection between Cloudflare and the actual server, I have no choice but to trust that. I can only verify that the connection between me and the MITM (Cloudflare) is secure.

              • With HTTP, someone close to the server may MITM, but my browser warns me about the possibility so I can be cautious.
              • With Cloudflare, someone close to the server may MITM if TLS is not used between Cloudflare and the server, but I cannot find out if that’s the case. My browser won’t warn me about the possibility and says everything is fine.

              You seem to argue for blocking the first scenario, but the second scenario is fine for you, even though the risk is the same; the connection close to the server may be unencrypted.

              1. 1

                With Cloudflare, someone close to the server may MITM if TLS is not used between Cloudflare and the server

                Sure, or they may not be able to, if TLS is used, as Cloudflare has strongly recommended for years now: https://blog.cloudflare.com/cloudflare-ca-encryption-origin/

                but the second scenario is fine for you,

                No, it’s not. Don’t put words in my mouth, please. Just because I am advocating for a security measure doesn’t mean I blindly believe it guarantees total protection. My point is that something is better than nothing. This should not be a surprising argument. We make the same case for many safety and security measures in computing, from static typechecking to linting to unit tests.

    16. 14

      For those who are not in the know and are otherwise likely to skip over due to a too-quick reading of the title, here’s some additional context (included in the link itself, but summarized here):

      • This is not about Wikipedia being funded
      • Abstract Wikipedia is a new project
      • It’s an ambitious one, with highly technical requirements
      • These technical requirements are risky, per a report

      I didn’t even know about Abstract Wikipedia, and I appreciate its vision. Not sure if it’s really that useful, though. And the very real technical concerns suggest it’s, at best, ahead of its time. At worst, an unfortunate example of the complexity of representing the sum of human knowledge and experience in code.

      1. 1

        As somebody who created a bunch of interlingual pages for English WP, I would have loved to have some sort of abstraction for managing them. I seem to recall working on a bunch of pages for individual Japanese kana, for example, and you can see for yourself that those pages have many similar parts which could be factored out using some abstractive system.

    17. 3

      I wonder if the version numbers above 120+ will un-freeze that part of the User-Agent.

    18. 2

      This is an interesting resolution to an open source software case. No monetary compensation, even for legal fees. Instead, a disclaimer must be added everywhere mentioning that the products (Houdini 6 and Fat Fritz 2) are derived from open source software, and ChessBase is not allowed to distribute them or any other derivatives for a year. Furthermore, they must hire a “Free Software Compliance Officer”.

      Quite a different result than previous lawsuits I’ve heard about. It seems that the Stockfish authors care more about recognition and the continued sanctity of the license than about punishing ChessBase.

      1. 2

        Indeed, that’s why I posted it here despite not being sure if it was relevant – a doubt seemingly confirmed by the initial downvote :)

        I used to hate copyleft licenses because of their viral nature, but having seen companies leeching off the work of open-source developers, I’ve come around to using MPL 2 for libraries and GPL 2 for applications.

    19. 31

      Regardless of how someone feels about these changes, they seem to be well implemented and alternatives readily provided through the use of standard formats. It’s nice to see these sorts of changes being communicated clearly and with plenty of time.

      1. 30

        I especially like the “and if you don’t like it, here’s how you can take all your data with you when you go”

        1. 14

          This kind of grown-up attitude & approach is alone enough to significantly raise my interest in the platform.

          1. 4

            It’s a really nice platform. I use it exclusively for personal projects now, and I’m loving it. I haven’t done much collaboration on the platform, so I can’t say much about that, but otherwise it’s great.

            I know Drew kind of built a reputation for himself, and think what you want of him, but he’s doing right by FOSS with Sourcehut, I feel.