Threads for nogweii

  1. 4

    I would actually posit that this problem exists, and is potentially more insidious, in managers and senior leaders who do code but aren’t actively involved in developing the solution to the problem at hand. In those cases they do know what the wave-toppy implementation could be, but will underestimate the actual difficulty or the unknown unknowns that crop up in the real implementation. This can lead to misalignment and to not understanding why things aren’t going as smoothly as the wave-top version suggested.

    1. 6

      Aside: Wave top? I don’t remember seeing the expression before, what does it mean exactly? Where does it originate? If you don’t mind me asking.

      1. 2

        I’m actually not sure where it originated. It means the high-level details, or a bird’s-eye view of something.

        In my comment above it would be like the senior engineer saying, “it just requires a few API calls, zipping the results, and then mapping to our domain values; how hard could it be?” While at a very high level that may be true, it glosses over any details of the actual problem being solved and how an implementation to do that would have to be built.

        It’s difficult without an actual example, but I’ve seen this pattern often enough that I can’t be the only one to have noticed it.

        1. 3

          That is the beginning and bane of all of my personal projects, ha. “How hard could it be?” - well, shit.

          I’ve definitely applied that in professional contexts, though with more success than failure. But I can definitely see myself getting it increasingly wrong. I was recently (~8 months ago) promoted to a Staff title and don’t get to program as much as I used to. Managers get even less time, but still want to be part of the decision-making process.

          1. 5

            OTOH, I have witnessed a friend of mine tearing a government agency a new one a couple of years back, when he was still a member of Parliament.

            The agency was supposed to implement an online solution for paying the highway fee. You enter the plate number, pay with a credit card, and for a given amount of time the police have plate.paid = true somewhere in a shared table and won’t bother you.

            They spent an inordinate amount of money on it, something like 20 million USD for at most 10 million users who buy at most one per year. And they failed to deliver.

            My friend pointed out, in the discussion surrounding it with people advocating for the agency, that they had “failed to implement an eshop where you don’t even have to deliver anything”. “A small software company would probably ask for less than 100k USD. I have personally worked on such shops. What are you even saying?”

            So… Keep your skepticism. Don’t be too handwavy, but when it feels like something doesn’t add up, someone might very well be bullshitting you.

    1. 3

      I appreciate the honesty about the possible future of the project:

      Future work/contributing

      • I’m not going to be working on/maintaining vmdiff for at least 12 months, maybe ever
      • I’d love for someone to steal this genius idea, either forking the prototype, or making their own
      1. 39

        I don’t have a solution for this, but suggesting GitHub as an alternative to now-hostile Docker seems to move the issue from one silo to the next.

        1. 7

          Decentralization is not going to happen, at least not as long as the decentralization is being pushed by the kinds of people who comment in favor of decentralization on tech forums.

          Decentralization means that each producer of a base image chooses their own way of distributing it. So the base image for your “compile” container might be someone who insists on IPFS as the one true way to decentralize, while the base image for your “deploy and run” container is someone who insists on BitTorrent, and the base image for your other service is someone who insists on a self-hosted instance of some registry software, and…

          Well, who has time to look up and keep track of all that stuff and wire up all the unique pipelines for all the different options?

          So people are going to centralize, again, likely on GitHub. At best they’ll start having to specify the registry/namespace and it’ll be a mix of most things on GitHub and a few in other registries like Quay or an Amazon-run public registry.

          This is the same reason why git, despite being a DVCS, rapidly centralized anyway. Having to keep track of dozens of different self-hosted “decentralized” instances and maintain whatever account/credentials/mailing list subscriptions are needed to interact with them is an absolute pain. Only needing to have a single GitHub account (or at most a GitHub account and a GitLab account, though I don’t recall the last time I needed to use my GitLab account) is such a vast user-experience improvement that people happily take on the risks of centralization.

          Maybe one day someone will come along and build a nice, simple, unified user interface that covers up the complexity of a truly decentralized system, and then we’ll all switch to it. But none of the people who care about decentralization on tech forums today seem to have the ability to build one and most don’t seem to care about the lack of one. So “true” decentralized services will mostly go on being a thing that only a small circle of people on tech forums use.

          (even Mastodon, which has gained a lot of use lately, suffers from the “pick a server” onboarding issue, and many of the people coming from Twitter have just picked the biggest/best-known Mastodon server and called it a day, thus effectively re-centralizing even though the underlying protocol and software are decentralized)

          1. 4

            Decentralization means that each producer of a base image chooses their own way of distributing it.

            I imagine that image producers could also agree on one distribution mechanism that doesn’t have to rely on a handful of centralized services. It doesn’t have to be a mix of incompatible file transfer protocols either; that would be really impractical.

            This is the same reason why git, despite being a DVCS, rapidly centralized anyway.

            The main reason was (is) probably convenience, yes, but I think that Git has a different story: I may be wrong, but I don’t think that Docker was about decentralizing anything, ever. I would rather compare Git and GitHub’s relation to that of SMTP and Gmail.

            Maybe one day someone will come along and build a nice, simple, unified user interface that covers up the complexity of a truly decentralized system, and then we’ll all switch to it.

            Maybe; that would be convenient. I may be a bit tired, or be reading too much into your response, but I feel that you’re annoyed when someone points out that more centralization isn’t the best solution to centralization issues.

            I fear that, because Docker sold us the idea that there’s only their own Dockerfile format, building on images that must be hosted on Docker Hub, we didn’t think about alternatives – well, until they added “and now you must pay to keep using things that we offered for free”. Let’s not discard all discussions on the topic of decentralization too quickly, as we could improve on Docker, and we need more ideas.

            1. 5

              I imagine that image producers could also agree on one distribution mechanism that doesn’t have to rely on a handful of centralized services. It doesn’t have to be a mix of incompatible file transfer protocols either; that would be really impractical.

              In the various threads about Docker, people are already proposing all sorts of incompatible transfer protocols and hosting mechanisms, and displaying no interest in cooperating with each other on developing a unified standard.

              The best I think we can hope for is that we get a duopoly of popular container registries, so that tooling has to accommodate the idea that there is no single “default” (as happens currently with Docker treating their own registry as the default). But I think it’s more likely that network effects and cohesiveness of user experience will push most people onto a single place, likely GitHub.

              The main reason was (is) probably convenience, yes, but I think that Git has a different story: I may be wrong, but I don’t think that Docker was about decentralizing anything, ever.

              My point was to say “look, this thing that was explicitly designed to be decentralized, and is in a category of tools that literally have the word ‘decentralized’ in the name, still ended up centralized, that does not give high confidence that trying to decentralize Docker registries, which were not even designed with decentralization in mind, will succeed in a meaningful way”.

              Let’s not discard all discussions on the topic of decentralization too quickly, as we could improve on Docker, and we need more ideas.

              I will just be brutally honest here: the success rate of truly “decentralized” systems/services in the real world is incredibly low. Partly this is because they are primarily of interest to tech people who are willing to put up with a tool that is metaphorically covered in poisoned razor blades if it suits some theoretical ideal they have in mind, and as a result the user experience of decentralized systems/services tends to be absolutely abysmal. Partly this is because social factors end up concentrating and centralizing usage anyway, and this is a problem that is hard/impossible to solve at the technical level, but most people who claim to want decentralized systems are only operating on the technical design.

              So I do discard discussions on decentralization, and quickly. Perhaps this time someone really will come up with a decentralized solution that gets mass adoption and manages to stay de facto decentralized even after the mass adoption occurs. But to me that statement is like “I should buy a lottery ticket, because perhaps this time I will win”.

              1. 1

                Well, what can I say. These are some strong opinions, probably soured from various experiences. I’m not trying to convince you in particular, but do hope that we can aim higher than a duopoly. Thanks for the chat. :)

            2. 2

              This is the same reason why git, despite being a DVCS, rapidly centralized anyway. Having to keep track of dozens of different self-hosted “decentralized” instances and maintain whatever account/credentials/mailing list subscriptions are needed to interact with them is an absolute pain

              GitHub benefits from centralisation because, as a publisher of an open-source project, I want it to be easy for people to file bugs and contribute code. Most people who might do this have a GitHub account, and so anything either hosted on GitHub or hosted somewhere with sign-in with GitHub makes this easy.

              I’m not sure that this applies to container images. People contributing to the code that goes into a container or raising issues against that code will still go to GitHub (or similar) not DockerHub. People interact with DockerHub largely via {docker,buildah} pull, or FROM lines in Dockerfiles. These things just take a registry name and path. As a consumer of your image, it makes absolutely no difference to me what the bit before the first / in your image name is. If it’s docker.io, quay.io, azurecr.io, or whatever, the tooling works in precisely the same way.

              The only place where it makes a difference is in private registries, where ideally I want to be able to log in with some credentials that I have (and, even more ideally, I want it to be easy to grab the image in CI).

              I see container registries as having more in common with web sites than code forges. There are some incentives towards centralisation (lower costs, outsourced management), but they’re not driven by network effects. I might host my web site using GitHub Pages so that GitHub gets to pay for bandwidth, but visitors find it via search engines and don’t care where it’s hosted, especially if I use my own domain for it.

            3. 2

              Indeed. You need a place that offers large amounts of storage and bandwidth for low cost, if not entirely free. And bandwidth has carried a ridiculous premium for a very long time, which makes it very hard to find such a service.

              You could find VPS providers at a reasonable rate for both of these, but now you’re managing a server on top of that. I’m not opposed to that effort, but that is not a common sentiment. 😅

              1. 16

                Time for a project that combines Docker with BitTorrent.

                1. 1

                  A shared redirector should need less space/bandwidth than actual independent hosting, but backend-hopping would become easier… And the projects themselves (who don’t want to annoy the backends too much) don’t even need to acknowledge it aloud.

              1. 13

                Or if it is https but a connection attempt to the endpoint fails with a TLS error

                This I agree with rejecting. Broken links are broken links, be it a 404 or a TLS error.

                If the link is http

                This I disagree with, wholeheartedly. Even during my time working at ISRG on Let’s Encrypt, we still found people who did not want to have an HTTPS server. Period. To forcefully ignore that segment of the internet would be foolhardy, in my opinion.


                Perhaps an alternative could be “automatically replacing HTTP links with HTTPS if it succeeds”. If someone pastes an HTTP link to a site that supports HTTPS, automatically upgrading the link would be nice.

                1. 2

                  we still found people who did not want to have an HTTPS server. Period.

                  That’s bizarre and inexplicable, but to each their own.

                  To forcefully ignore that segment of the internet would be foolhardy in my opinion.

                  Lobsters has no obligation to cater to them and their bizarre insecure decisions either. I am not proposing to force them off the internet, just to not give them link juice from here.

                  EDIT: I’m not even proposing anything that wild here, Chrome literally already does this, e.g. go to the recent submission https://lobste.rs/s/7lpwis/lisa_source_code_understanding_clascal and click through to the link. Chrome shows an error:

                  The connection to eschatologist.net is not secure. You are seeing this warning because this site does not support HTTPS. Learn more

                  And they heavily encourage you to not go to that page anyway. At which point most reasonable people should turn back.

                  1. 9

                    we still found people who did not want to have an HTTPS server. Period.

                    That’s bizarre and inexplicable, but to each their own.

                    What’s inexplicable about not wanting to manage the extra security risk for the server?

                    And, if we care about client security, we should have a tag page-requires-js with a penalty like rant

                    1. 2

                      What’s inexplicable about not wanting to manage the extra security risk

                      Do you really consider setting up a basic TLS termination reverse proxy more of a security risk than serving insecure web pages? In that case you are in disagreement with more than half the web, and I don’t see any point in discussing further.

                      1. 7

                        For the server, a basic TLS termination server is surely more risk. Heartbleed-class things are less likely with an HTTP server written in a memory-safe language twenty-five years ago that has gotten only bugfixes since then, not huge mandatory changes (because TLS versions need to change).

                        Actually, from a perspective of websites as a population (not from the point of view of configuring a single one), an average HTTP website submitted to Lobste.rs is safer to read than an average HTTPS website. Because an HTTP site will almost surely be old or at least old-style enough to be readable without enabling scripts and without even enabling images (pure-HTML attacks are not that widespread even when someone bothers to intercept), and an HTTPS website has a non-negligible chance of making text Javascript-only and serving Google ads (which are known to let exploits slip from time to time).

                        1. 1

                          Interesting, I’ve never heard anyone argue with a straight face before that HTTP sites are actually more secure than HTTPS sites. This is a new one to me, and I’m sure to everyone who has been pushing for HTTPS for more than a decade now. If all the existing arguments for a secure web won’t convince you, then certainly neither will I.

                          1. 1

                            They are more secure for the server side, and if Heartbleed has not convinced you that TLS adds risks for the server…

                            1. 1

                              I mean this is like conducting a survey to see what 10 dentists think of your toothpaste, finding out that only 3 of them recommend it, then claiming that ‘3 out of 5 dentists recommend our toothpaste’. If you disregard all the other security benefits and look narrowly at only the buffer overflow issue of C-based TLS systems, then sure, you can claim that it’s unsafe. You’d also have to ignore options like the Caddy web server, which is written in Go and doesn’t suffer from C’s memory unsafety issues: https://caddyserver.com/

                              Written in Go, Caddy offers greater memory safety than servers written in C. A hardened TLS stack powered by the Go standard library serves a significant portion of all Internet traffic.

                    2. 5

                      Lobsters has no obligation to cater to them and their bizarre insecure decisions either

                      Why is it bizarre? Why not support HTTP links? Pushing folks one way or another will create stubbornness.

                      Another way of asking it: Why are unencrypted sites not worthy of “link juice” from Lobsters or anywhere else? You seem to have a philosophy about the value of HTTPS that is slightly incompatible with other folks’. I bet you & I broadly agree on many things related to HTTP vs HTTPS, but this aspect is something I don’t understand yet.

                      1. 2

                        Why is it bizarre?

                        What else would you call actively keeping an open vulnerability in the way your website works?

                        Why not support HTTP links?

                        Because in the modern web we should want to encourage security and privacy as a first-class requirement instead of an afterthought? Why does Chrome heavily discourage us from visiting http-only links?

                        Pushing folks one way or another will create stubbornness.

                        There’s nothing we can do about people who insist on being insecure and unsafe, we just have to move past them.

                        Another way of asking it: Why are unencrypted sites not worthy of “link juice” from Lobsters

                        Firstly, because modern browsers will heavily discourage you from visiting those links anyway. So by allowing these submissions we are basically saying ‘We know this is insecure but we don’t care, it’s up to you’. Secondly, Lobsters already filters out content–just check the moderation log. Tons of stories get rejected as off-topic, spam, or scams. If Lobsters is already filtering out content that can potentially annoy or harm its users, automatically filtering out insecure sites is a simple and reasonable step.

                        or anywhere else?

                        I didn’t say ‘anywhere else’, I am speaking only about Lobsters here. While the same argument may apply to other cases as well, I would judge that on a case-by-case basis rather than a blanket judgment.

                        1. 2

                          Why is it bizarre?

                          What else would you call actively keeping an open vulnerability in the way your website works?

                          I suppose it’s possible that some people who have websites actively choose not to use HTTPS, such as by replying “please don’t” to an email message from a managed-hosting provider saying “Your website will be upgraded automatically to HTTPS unless you opt out within the next 30 days.”

                          Still, I suspect it’s more common for people who have websites that don’t support HTTPS not to be choosing actively not to support it — maybe they set the website up in the 1990s or otherwise before Let’s Encrypt; maybe they don’t understand what HTTPS is or why it would be useful.

                          Now, I imagine a person who has a website that would be linked from Lobsters is more likely to know what HTTPS is and value it, but they might have a website set up before Let’s Encrypt (or before they understood HTTPS) that they don’t actively maintain; they might have forgotten the website exists; they might be missing or dead.

                          1. 4

                            Or maybe they want their website to be cacheable for people on slow connections.

                            1. 2

                              I was replying to a person who said that some people actively refused to set up HTTPS on their sites.

                              In the case that a site doesn’t have HTTPS because it’s not maintained–well, the security risk speaks for itself. It’s an unmaintained site; it could be taken over by all sorts of malware.

                              In the case that the site creator actively refuses to use HTTPS–well, the security risk speaks for itself again. The creator thinks they know better than security practices that have been the norm for more than a decade. You can tell where that will lead.

                              1. 2

                                I was replying to a person who said that some people actively refused

                                Ah, yes, I had forgotten that context by the time I wrote my comment, for which mistake I apologize.

                                It’s an unmaintained site, could be taken over by all sorts of malware.

                                I suppose it’s true that the httpd or OS could have a vulnerability that could allow overwriting the website content. I wonder how commonly such attacks succeed in practice for static websites. (On the other hand, if it’s an unmaintained WordPress instance….)

                                In the case that the site creator actively refuses to use HTTPS–well, the security risk speaks for itself again. The creator thinks they know better than security practices that have been the norm for more than a decade. You can tell where that will lead.

                                I don’t think it logically follows from “the site creator actively refuses to use HTTPS” that “[t]he creator thinks they know better”. The creator could accept that HTTPS would be an improvement but still decide that they lack the competency and/or time to support it.

                        2. 2

                          And yet, I’ll be the judge of whether I go to it or not. If you’re really so paranoid about this, maybe show a warning to say no HTTPS is available. Blocking outright for legitimate sites is pretty meh.

                        3. 2

                          Or if it is https but a connection attempt to the endpoint fails with a TLS error

                          … Broken links are broken links, be it a 404 or a TLS error.

                          TLS error links aren’t the same kind of broken links as 404 links. Generally, in the event of a TLS error, a client that doesn’t care about security can ignore the error [*] and view the content. On the other hand, generally, in the event of a 404, a client can’t view the content no matter what—the server just isn’t showing the content at all.

                          So, if the grounds for rejecting TLS error links is based solely on this particular argument (i.e. that they’re the same as 404 links), then I don’t agree with rejecting TLS error links.

                          I agree with the rest of the comment: plain HTTP links should not be rejected.

                          [*] by clicking through the security warning in web browsers, by using the -k flag in curl, etc.
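
                          To make that concrete, here is the same distinction in Python (a sketch using the requests library; self-signed.badssl.com is a public test host for exactly this kind of broken TLS):

                          ```python
                          import requests

                          # A TLS error can be bypassed by a client that accepts the risk
                          # (the equivalent of curl's -k/--insecure), and the content loads:
                          resp = requests.get("https://self-signed.badssl.com/", verify=False)
                          print(resp.status_code)  # 200: content is there, just served "insecurely"

                          # A 404 cannot be worked around client-side; the server isn't
                          # serving the content at all:
                          resp = requests.get("https://example.com/no-such-page")
                          print(resp.status_code)  # 404: nothing to view, no flag will help
                          ```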

                        1. 12

                          Another vote against. I’ve seen at least one site that served different content on 443 and 80. How do you know the https connection is the right one?

                          1. 4

                            Huh, I haven’t actually encountered any sites that were serving different content on 443 and 80, but always theorized it would happen. In my comment I suggested a naive approach of always replacing HTTP links with HTTPS, but that idea runs straight into this.

                            Perhaps make an async call to Lobsters’ backend when submitting a story to validate various things and present an option to the submitter, i.e. something like: “Hey, before you submit this story, we noticed it also works on HTTPS. <Click here to update the URL.>”
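
                            A sketch of what that backend check might look like (Python purely for illustration; Lobsters itself is Rails, and the helper name here is made up):

                            ```python
                            import urllib.request
                            from urllib.error import URLError

                            def https_upgrade_candidate(url: str, timeout: float = 5.0) -> str | None:
                                # Hypothetical helper: if an http:// URL also answers over https://,
                                # return the upgraded URL so the submitter can be offered the swap.
                                # Whether the content is *the same* on both ports is left to the
                                # human, per the sibling comment about 443 vs 80.
                                if not url.startswith("http://"):
                                    return None
                                candidate = "https://" + url[len("http://"):]
                                try:
                                    with urllib.request.urlopen(candidate, timeout=timeout):
                                        return candidate  # urlopen raises on 4xx/5xx and TLS errors
                                except (URLError, TimeoutError, ValueError):
                                    return None
                            ```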

                            1. 4

                              How can you tell the difference between a site serving different content on port 443 and 80 and a MITM attack?

                              1. 4

                                Easily (in some specific cases): 443 content is the hosting server’s global homepage (apparently the SNI was not configured when it became an option), and 80 content is the content that the creator sometimes references in detail over other channels.

                              2. 3

                                Because at least you know that the https content is very, very unlikely to have been tampered with by a third party. With the http content, you have literally zero guarantees.

                                1. 1

                                  If content being tampered with is your problem, I’m looking forward to your suggestion to reject sites that use third party Javascript, or that use Cloudflare.

                                  I trust my ISP not to tamper with my traffic, even if it’s HTTP. I don’t trust Cloudflare not to tamper with content, even if it’s served from (their) HTTPS enabled webserver.

                                  1. 1

                                    You may trust your ISP, but do you trust every endpoint on the internet that routes your traffic between your computer and the remote host? I’m just curious: the security, validity, and other reasons for using HTTPS on the internet have been well known and publicized for more than a decade now. Are you seriously arguing that all of these arguments are invalid and HTTP is perfectly OK?

                                    1. 3

                                      but do you trust every endpoint on the internet that routes your traffic between your computer and the remote host?

                                      No, I don’t. But since I cannot verify the TLS connection between Cloudflare and the actual server, I have no choice but to trust that. I can only verify that the connection between me and the MITM (Cloudflare) is secure.

                                      • With HTTP, someone close to the server may MITM, but my browser warns me about the possibility so I can be cautious.
                                      • With Cloudflare, someone close to the server may MITM if TLS is not used between Cloudflare and the server, but I cannot find out if that’s the case. My browser won’t warn me about the possibility and says everything is fine.

                                      You seem to argue for blocking the first scenario, but the second scenario is fine for you, even though the risk is the same; the connection close to the server may be unencrypted.

                                      1. 1

                                        With Cloudflare, someone close to the server may MITM if TLS is not used between Cloudflare and the server

                                        Sure, or they may not be able to, if TLS is used as strongly recommended by Cloudflare for years now: https://blog.cloudflare.com/cloudflare-ca-encryption-origin/

                                        but the second scenario is fine for you,

                                        No, it’s not. Don’t put words in my mouth, please. Just because I am advocating for a security measure doesn’t mean I blindly believe it guarantees total protection. My point is that something is better than nothing. This should not be a surprising argument. We make the same case for many safety and security measures in computing, from static typechecking to linting to unit tests.

                              1. 14

                                For those who are not in the know and are otherwise likely to skip over due to a too-quick reading of the title, here’s some additional context (included in the link itself, but summarized here):

                                • This is not about Wikipedia being funded
                                • Abstract Wikipedia is a new project
                                • It’s an ambitious one, with highly technical requirements
                                • These technical requirements are risky, per a report

                                 I didn’t even know about Abstract Wikipedia, and I appreciate its vision. Not sure if it’s really that useful, though. And the very real technical concerns suggest it’s, at best, ahead of its time. At worst, an unfortunate example of the complexity of representing the sum of human knowledge and experience in code.

                                1. 1

                                  As somebody who created a bunch of interlingual pages for English WP, I would have loved to have some sort of abstraction for managing them. I seem to recall working on a bunch of pages for individual Japanese kana, for example, and you can see for yourself that those pages have many similar parts which could be factored out using some abstractive system.

                                1. 3

                                     I wonder if version numbers above 120 will un-freeze that part of the User-Agent.

                                  1. 2

                                       This is an interesting resolution to an open source software case. No monetary compensation, even for legal fees. Instead, a disclaimer must be added everywhere mentioning that the products (Houdini 6 and Fat Fritz 2) are derived from open source software, and they are not allowed to distribute them or any other derivatives for a year. Furthermore, they must hire a “Free Software Compliance Officer”.

                                       Quite a different result than previous lawsuits I’ve heard about. It seems that the Stockfish authors care more about the recognition and continued sanctity of the license than about punishing ChessBase.

                                    1. 2

                                         Indeed, that’s why I posted it here despite not being sure if it was relevant, as seemingly confirmed by the initial downvote :)

                                         I used to hate copyleft licenses because of their viral nature, but having seen companies leech the work of open-source developers, I’ve come around to using MPL 2 for libraries and GPL 2 for applications.

                                    1. 31

                                         Regardless of how someone feels about these changes, they seem to be well implemented, with alternatives readily provided through the use of standard formats. It’s nice to see these sorts of changes being communicated clearly and with plenty of time.

                                      1. 30

                                        I especially like the “and if you don’t like it, here’s how you can take all your data with you when you go”

                                        1. 14

                                             This kind of grown-up attitude & approach is alone sufficient to significantly raise my interest in the platform.

                                          1. 4

                                            It’s a really nice platform. I use it exclusively for personal projects now, and I’m loving it. I haven’t done much collaboration on the platform, so I can’t say much about that, but otherwise it’s great.

                                            I know Drew kind of built a reputation for himself, and think what you want of him, but he’s doing right by FOSS with Sourcehut, I feel.

                                      1. 4

                                           This is spam: it’s just a corporate blog post bashing a self-hosted tool and promoting itself, a commercial alternative.

                                        1. 2

                                             The thing is that the author of this article is also the main contributor to Bors-NG.

                                          1. 1

                                            Also, notice how sloppy people are about quality dollars. There is no discussion of “how much does it cost when it happens”, “how often does it happen”, and “how much does it cost to prevent it”. I suspect that it is not worthwhile except for the largest of large projects.

                                          1. 21

                                            I disagree with the response, assuming that the original question was using the original icon font (Font Awesome) as a supplement for their buttons. That is, buttons and other UI elements were composed of a combination of both icon and text. I think that is a likely interpretation given that they have hidden the icons from screen readers.

                                            In that situation, I think that it’s perfectly acceptable. The ambiguity is mitigated (if not entirely removed) by the text next to the icon. Even with the “risk” that the emojis look out of place compared to the rest of the website, I think it’s still fine. (I’m also of the opinion that websites should conform more to the client’s OS rather than fight against it. That websites should blend in with the rest of the native applications rather than look distinct.)

                                            1. 14

                                              I agree. The answer is responding to a strawman. The question wasn’t if replacing text with emojis was a bad idea, but rather if it was a good idea to replace icons with emojis.

                                               Secondly, the response comments that older devices or OSes might not have the required support, but the question does specify that this is an internal app, so presumably they have control over what devices and what OS versions the app will run on and can make a decision based on that.

                                               Thirdly, the answer is conflating bad design and emoji use. The question is asking if a button with an emoji, for example [✔ OK], would work well as an interface, yet the answer manages to present this as an example where that could be misinterpreted:

                                              often 👥 will misinterpret emojis that their peers 📦️➡️ to ➡️👥. ➡️👤 do ❌ 🙏 to have a sudden misunderstanding between 🆗 ➕ apparently also 🆗 emoji like this: 🙆;

                                              And finally, they seem to believe the emojis would be inserted in the middle of text strings instead of being consistently formatted as a pictogram for buttons or messages.

                                              I give the answer a massive 👎

                                            1. 8

                                              Yeah, SELinux documentation sucks a lot and the error messages leave a ton to be desired. Given the rise of containers and the power available with systemd’s options, I appreciate the post’s title. You can achieve most of the practical benefits alone with either of those solutions.

                                              Personally, I still disagree with turning it off, but having a broken and nonfunctional system is worse than an insecure one. So turning it off, temporarily I hope, is usually the right call. (Defense in depth is a nice ideal.)

                                               One of the things I’m happy with is that I’ve been able to automate (through Ansible) applying my own custom policies to systems. Though that did take me weeks of reading documentation and even reading source code to piece together. Ugh. Not everyone is willing to put up with that, I know.

                                              1. 24

                                                 This really feels like the author has a series of (not so good, IMO) complaints about WebPKI and then decides that, since it isn’t perfect for their situation, it should all be thrown away rather than accepting even incremental improvements.

                                                We do not need to have perfect security, just slightly better than yesterday’s.

                                                 Also, the reason why the earlier free certificate authorities were never trusted is that they were awful at security. StartSSL had major problems that were only revealed by the audits it was forced to undergo to continue to be trusted. Before those requirements, though, it was completely trusted by a huge swath of software. The security requirements that browsers and other maintainers mandate to be part of their trusted root programs meant that these free programs didn’t measure up. (Good or bad thing? I think net good, but it does limit the CA industry to only those that can support the ongoing financial burden. That generally precludes those without money from participating.)

                                                 If you want a free certificate that is not issued by a US firm, check out ZeroSSL. They are an Austrian company, so you then have the EU’s rules to deal with instead.

                                                (NB: I’m a former Let’s Encrypt employee. I do have a bias here.)

                                                1. 12

                                                   A long-standing bug in Firefox suddenly got exposed because some service updated their HTTP/3 implementation. Possibly on Google or Cloudflare’s side, both of which are used by Mozilla for their infrastructure. And Firefox will check in (unless told otherwise) early on, making it possible to hit the bug right at startup. Resulting in Firefox being unable to load any other page with that thread. Ouch.

                                                  1. 3

                                                     I was wondering why Firefox was suddenly making my fans spin up and not loading anything! Wow, that’s pretty messed up.

                                                  1. 5

                                                     This is good, but IMO it should be SHA-256 or BLAKE2 instead, which are considered cryptographically strong, unlike MD5.

                                                    1. 2

                                                       Since this is just a validation script, you could theoretically make it generic enough to process a handful of different hash types so that it’s more compatible.

                                                      1. 2

                                                        I was just thinking about this, and had two thoughts:

                                                         • Generalize it by adding a CLI flag to indicate which hashing function is being used (something like -n md5, -n sha256, etc.); see the sketch after this list
                                                         • And/or also supporting the Multihash format
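
                                                         A rough sketch of the flag idea (Python assumed, since hashlib already knows md5, sha256, blake2b, and friends; the -n flag name is just the suggestion above, not the original script):

                                                         ```python
                                                         #!/usr/bin/env python3
                                                         # Verify a file against an expected digest, hash selectable via -n.
                                                         import argparse
                                                         import hashlib
                                                         import sys

                                                         parser = argparse.ArgumentParser()
                                                         parser.add_argument("file")
                                                         parser.add_argument("expected", help="expected hex digest")
                                                         parser.add_argument("-n", "--name", default="md5",
                                                                             choices=sorted(hashlib.algorithms_available))
                                                         args = parser.parse_args()

                                                         digest = hashlib.new(args.name)
                                                         with open(args.file, "rb") as f:
                                                             for chunk in iter(lambda: f.read(1 << 20), b""):
                                                                 digest.update(chunk)

                                                         if digest.hexdigest() != args.expected.lower():
                                                             sys.exit(f"{args.name} mismatch: got {digest.hexdigest()}")
                                                         print("OK")
                                                         ```

                                                         Usage would then be something like ./verify.py archive.tar.gz <digest> -n sha256.
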
                                                        1. 2

                                                          Thought about adding other formats, but considering I was nerd-sniped, I had other things I intended to do today 😅

                                                          Definitely gonna read up on Multihash, as this is the first time I’ve heard of it.

                                                        2. 1

                                                          Feature creep 😁

                                                           But adding that into the script wouldn’t be too much of an exercise.

                                                        3. 1

                                                          You’re absolutely right, but most sites that I’ve come across that use the pattern only provide MD5.

                                                          I thought about adding a flag to specify the type of sum, but feature creep 😁

                                                          1. 1

                                                             Yeah, but how would that help you run a script where only the MD5 was provided? :)

                                                          1. 1

                                                            Does anyone know of a similar tool for Python-based projects? It looks like it could be fairly handy, if not a tad overkill.

                                                            1. 2

                                                              I’m not familiar with a library that provides the --changelog feature out of the box, but it seems like a pretty solid idea to do that.

                                                              1. 1

                                                                 If you are talking about Python projects installable via pip, you can ship the CHANGELOG.md file with the build (read here). After that, you can just write a similar regex for fetching the version numbers as well.
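
                                                                 A minimal sketch of that idea (the package name, data-file setup, and heading format below are all made up for illustration):

                                                                 ```python
                                                                 import re
                                                                 from importlib import resources

                                                                 # Assumes CHANGELOG.md was declared as package data for "mypackage"
                                                                 # (e.g. include_package_data) and each release gets a "## 1.2.3" heading.
                                                                 text = resources.files("mypackage").joinpath("CHANGELOG.md").read_text()

                                                                 versions = re.findall(r"^## (\d+\.\d+\.\d+)\b", text, flags=re.MULTILINE)
                                                                 print(versions[0] if versions else "no releases found")
                                                                 ```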

                                                                1. 11

                                                                   There’s also the sorta-equivalent for Linux, as itemized by systemd, which exists as a superset of BSD’s. I don’t think they are particularly well-adopted, but hopefully they will be.

                                                                  1. 5

                                                                    I read the 200s there as a list of exit codes to avoid using lest my program crashing be mistaken for some specific behaviour which systemd subprocesses exhibit and the daemon has particular expectations about.

                                                                    1. 3

                                                                      Shells typically map signal death of a process into $? by taking the signal number and adding 128 to it. So where SIGINT is signal 2, $? will contain 130. Yes, this means that at the shell prompt, you can’t tell the difference, but the use of the higher exit status numbers is rare. On Linux, with a cap of 64 signals, that only blocks 128-192 from being usable by others, but still most Unix software has traditionally avoided the higher numbers.
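
                                                                       To see the mapping concretely (a POSIX-only Python snippet; Python reports a signal death as a negative return code, while a shell reports 128 + signum):

                                                                       ```python
                                                                       import signal
                                                                       import subprocess

                                                                       # The child shell sends itself SIGTERM and dies from it.
                                                                       proc = subprocess.run(["sh", "-c", "kill -TERM $$"])

                                                                       print(proc.returncode)       # -15: Python's encoding of "killed by signal 15"
                                                                       print(128 + signal.SIGTERM)  # 143: what a shell would put in $?
                                                                       print(128 + signal.SIGINT)   # 130: the SIGINT example above
                                                                       ```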

                                                                      I see about 3 or 4 which software other than a daemon manager might want to use.

                                                                    1. 2

                                                                       Wouldn’t this create a false sense of security? Surely my browser validates an input of type “email” and warns me when the value is malformed; however, nothing stops me from manually passing an invalid e-mail address directly via POST, most simply by replacing the input type with “text”, unless there is also server-side validation.

                                                                      1. 6

                                                                         I expect this to be used less on content sent from a client to a server, and more in reverse, on content sent from a server to a client. For example, a dynamically fetched comment on a blog post is injected into the DOM after passing through the Sanitizer API. That is, the string value in the database is untrusted.

                                                                        Of course, you could attempt to make it trusted by passing it through the Sanitizer API before even storing in the database through client side manipulation of the form, but that leads to your very concern as it could be bypassed. Run it through the Sanitizer both times? Submission and display?

                                                                        1. 2

                                                                          Sanitizing SVGs will be useful

                                                                      1. 10

                                                                        The Sanitizer API is a browser-provided implementation for the same problem DOMPurify tackles. Very nice to see this, for performance and maintenance benefits.

                                                                        MDN has documentation on what the API looks like currently, though it is in draft stages. Here is the specification itself.

                                                                        1. 9

                                                                          A String is returned with disallowed script and blink elements removed.

                                                                          No, why blink? I loved you blink, back in 1999. We’ll never forget you <3

                                                                          1. 3

                                                                            What I want is the <hype> tag again.

                                                                          2. 4

                                                                            The current MDN documentation is outdated. The latest API will not return strings.

                                                                            1. 1

                                                                              The article implies that React does this, as well. Do you know whether that’s the case?