1. 12

    A long-standing bug in Firefox suddenly got exposed because some service updated their HTTP/3 implementation, possibly on Google or Cloudflare’s side, both of which Mozilla uses for its infrastructure. And Firefox checks in with those services early on (unless told otherwise), making it possible to hit the bug right at startup, resulting in Firefox being unable to load any other page with that thread. Ouch.

    1. 3

      I was wondering why Firefox was suddenly spinning up my fans and not loading anything! Wow, that’s pretty messed up.

    1. 5

      This is good, but IMO it should use SHA-256 or BLAKE2 instead, which, unlike MD5, are considered cryptographically strong.

      1. 2

        Since this is just a validation script you could theoretically make it generic enough to process a handful of different hash types so that it’s more compatible.

        1. 2

          I was just thinking about this, and had two thoughts:

          • Generalize it by adding a CLI flag to indicate which hashing function is being used (something like -n md5, -n sha256, etc.); see the sketch after this list.
          • And/or also supporting the Multihash format
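
          A minimal sketch of that flag, assuming a Python implementation with hashlib (the -n option name and overall CLI shape are my own illustration, not taken from the original script):

          ```python
          #!/usr/bin/env python3
          """Verify a file against an expected checksum, with a selectable algorithm."""
          import argparse
          import hashlib
          import sys

          parser = argparse.ArgumentParser(description="Verify a file's checksum.")
          parser.add_argument("-n", "--algorithm", default="md5",
                              choices=["md5", "sha1", "sha256", "sha512", "blake2b"],
                              help="hash algorithm to use (default: md5)")
          parser.add_argument("expected", help="expected hex digest")
          parser.add_argument("file", help="path of the file to check")
          args = parser.parse_args()

          # Hash the file in chunks so large downloads don't need to fit in memory.
          h = hashlib.new(args.algorithm)
          with open(args.file, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 16), b""):
                  h.update(chunk)

          if h.hexdigest().lower() == args.expected.lower():
              print(f"OK ({args.algorithm})")
          else:
              sys.exit(f"MISMATCH: got {h.hexdigest()}, expected {args.expected}")
          ```

          Usage would then look like ./checksum.py -n sha256 <digest> <file>, with MD5 still available for the sites that only publish that.
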
          1. 2

            Thought about adding other formats, but considering I was nerd-sniped, I had other things I intended to do today 😅

            Definitely gonna read up on Multihash, as this is the first time I’ve heard of it.

          2. 1

            Feature creep 😁

            But adding that into the script wouldn’t be too much of an exercise.

          3. 1

            You’re absolutely right, but most sites that I’ve come across that use the pattern only provide MD5.

            I thought about adding a flag to specify the type of sum, but feature creep 😁

            1. 1

              Yeah, but how would that help you run a script where the MD5 was provided :)

            1. 1

              Does anyone know of a similar tool for Python-based projects? It looks like it could be fairly handy, if not a tad overkill.

              1. 2

                I’m not familiar with a library that provides the --changelog feature out of the box, but it seems like a pretty solid idea to do that.

                1. 1

                  If you are talking about Python projects installable via pip, you can ship the CHANGELOG.md file with the build (read here). After that, you can just write a similar regex for fetching the version numbers as well.
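
                  A rough sketch of the regex idea, assuming a setuptools project and “Keep a Changelog”-style headings (the file layout and pattern here are my own illustration, not from the linked docs):

                  ```python
                  # MANIFEST.in ships the changelog in the source distribution:
                  #   include CHANGELOG.md
                  import re
                  from pathlib import Path

                  # Matches headings such as "## [1.2.3] - 2021-08-29".
                  VERSION_RE = re.compile(r"^##\s*\[?(\d+\.\d+\.\d+)\]?", re.MULTILINE)

                  def changelog_versions(path="CHANGELOG.md"):
                      """Return the version numbers found in the changelog, newest first."""
                      return VERSION_RE.findall(Path(path).read_text(encoding="utf-8"))

                  if __name__ == "__main__":
                      print(changelog_versions())
                  ```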

                  1. 11

                    There’s also the sorta-equivalent for Linux, as itemized by systemd, which exists as a superset of BSD’s. I don’t think they are particularly well-adopted, but hopefully they will be.

                    1. 5

                      I read the 200s there as a list of exit codes to avoid using, lest my program’s crash be mistaken for some specific behaviour that systemd subprocesses exhibit and that the daemon has particular expectations about.

                      1. 3

                        Shells typically map signal death of a process into $? by taking the signal number and adding 128 to it. So where SIGINT is signal 2, $? will contain 130. Yes, this means that at the shell prompt you can’t tell the difference between a process that exited with status 130 and one killed by SIGINT, but the use of the higher exit status numbers is rare. On Linux, with a cap of 64 signals, that only blocks 128-192 from being usable by others, but still, most Unix software has traditionally avoided the higher numbers.
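
                        A quick sketch in Python if you want to see that mapping for yourself (the doubled sh -c is just so the outer shell can report the inner shell’s signal death):

                        ```python
                        import subprocess

                        # The inner shell kills itself with SIGINT (signal 2); the outer
                        # shell then reports that death in $? as 128 + 2 = 130.
                        result = subprocess.run(
                            ["sh", "-c", "sh -c 'kill -INT $$'; echo \"exit status: $?\""],
                            capture_output=True,
                            text=True,
                        )
                        print(result.stdout.strip())  # expected: exit status: 130
                        ```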

                        I see about 3 or 4 which software other than a daemon manager might want to use.

                      1. 2

                        Wouldn’t this create a false sense of security? Sure, my browser validates an input of type “email” and warns me when the value is malformed; however, nothing stops me from manually passing an invalid e-mail address directly via POST, most simply by replacing the input type with “text”, unless there is also server-side validation.

                        1. 6

                          I expect this to be used less on content sent from a client to a server, and more in reverse, on content sent from a server to a client. For example, a dynamically fetched comment on a blog post is injected into the DOM after passing through the Sanitizer API. That is, the string value in the database is untrusted.

                          Of course, you could attempt to make it trusted by passing it through the Sanitizer API before even storing it in the database, but since that sanitizing happens client-side in the form, it leads to your very concern: it could be bypassed. Run it through the Sanitizer both times? Submission and display?

                          1. 2

                            Sanitizing SVGs will be useful

                        1. 10

                          The Sanitizer API is a browser-provided implementation for the same problem DOMPurify tackles. Very nice to see this, for performance and maintenance benefits.

                          MDN has documentation on what the API looks like currently, though it is in draft stages. Here is the specification itself.

                          1. 9

                            A String is returned with disallowed script and blink elements removed.

                            No, why blink? I loved you blink, back in 1999. We’ll never forget you <3

                            1. 3

                              What I want is the <hype> tag again.

                            2. 4

                              The current MDN documentation is outdated. The latest API will not return strings.

                              1. 1

                                The article implies that React does this, as well. Do you know whether that’s the case?

                              1. 7

                                As an alternative, if you don’t like these patterns, I help maintain geo_pattern. It’s originally from GitHub, and generates a variety of different patterns from a seed value. Also written in Ruby!

                                1. 1

                                  Heads up: this article is from 2018, when the latest version of hex was 0.17.3. The latest version is now 0.21.3; check the changelog for anything that might be different using Hex today. I think the commands covered in this are still mostly the same, though.

                                  1. 3

                                    Is there any public proof of this permission? I checked the linked LICENSE.txt but that hasn’t changed since 2010. I’m curious about the terms the Realtek firmware is distributed under.

                                    1. 19

                                      I like that there is now yet another ACME-compliant endpoint. What we need next are clients that actually support arbitrary endpoints. There are a lot of management UIs that interface with generic clients but expose Let’s Encrypt as the only option. I want to be able to plug in my own private ACME CA but still get all of the automation benefits.

                                      1. 2

                                        99% of them do so that you can use a staging URL, don’t they?

                                        1. 1

                                          Not that many even expose the option of using LE’s staging environment or not. And of those that do, it’s still hardcoded to Let’s Encrypt’s staging environment. Still not generic.

                                          1. 3

                                            All of these allow setting the API server:

                                            The official client does it: https://certbot.eff.org/docs/using.html#changing-the-acme-server

                                            Acme.sh does it in the article

                                            Terraform: https://registry.terraform.io/providers/vancluever/acme/latest/docs

                                            Traefik: https://doc.traefik.io/traefik/https/acme/#caserver

                                            K8s cert manager: https://cert-manager.io/docs/configuration/acme/

                                            Which ones have you used that don’t? I get that they probably mostly want sane defaults and don’t want people filling out random MitM API servers or something, but I’ve not found one that doesn’t allow me to change it.

                                            1. 1

                                              I’m thinking about those that sit on top of these. For example, setting up ACME in cPanel, OpenWRT, or OPNsense. Or commercial software, like a website builder or managed service provider (installing WordPress, GitLab, or something else for you). It has been a while since I’ve checked on these; I’d love it if they are more flexible now.

                                              The underlying protocol implementations are flexible, indeed. There isn’t really a sysadmin/CLI focused tool that can’t accept an arbitrary endpoint. It’s the layer above that I’m frustrated with.

                                              1. 1

                                                Oh! Yeah, if it’s not actually an ACME client, but a client to the client, I’ve never seen those expose arbitrary endpoints either. cPanel doesn’t even use Let’s Encrypt; it uses its own root CA. So you’re kinda stuck trusting cPanel and not even a public entity like Let’s Encrypt.

                                      1. 3

                                        I’m a big fan of ZeroSSL for larger organizations, for a lot of reasons. While LE is amazing at its mission of getting more of the internet on HTTPS, it lacks some of the features I think are well worth paying for. Having a REST API you can use to integrate internal tooling is really nice, allowing applications to request and manage their own certificates. It also offers email verification for certificates, which is great for applications where Let’s Encrypt’s lack of IP whitelisting is a problem.

                                        All that said, if your org uses LE extensively, as many do, I don’t think there is a real business use case for randomizing the CA. If LE is down for a long period of time, then you might need to switch, but it seems strange to optimize for that edge case.

                                        1. 1

                                          Does the email validation mean that you can get a cert with no A record and no DNS control?

                                          1. 2

                                            Yup! Let’s Encrypt didn’t want to deal with the headache of managing email at scale to automate that form of domain control, but there are a few RFC-standardized email addresses you can rely on, as zaynetro mentions. But the CA/Browser Forum baseline requirements only require (for “basic”/DV certs, anyways) that you prove you control a domain. There are lots of ways to do that, since that’s a social agreement.

                                            1. 1

                                              Sounds kind of crazy from the ACME perspective but email validation is acceptable to the CA/B baseline requirements and is basically the norm for DV certs for non-ACME issuers. The security implications aren’t great, and you need to make sure that e.g. no user can register one of the email addresses that’s acceptable to CA/B for this purpose, but it can be convenient for scenarios like issuing certificates for internal systems (not internet accessible) that use public domain names.

                                              1. 1

                                                it can be convenient for scenarios like issuing certificates for internal systems (not internet accessible) that use public domain names

                                                I use DNS challenges for this purpose. Once I got tired of manually creating and cleaning the challenge-response records, I spent a few hours adapting one of the existing plugins to work with my DNS host.

                                                I like this better than injecting email into the process.

                                              2. 1

                                                Looks like it: https://help.zerossl.com/hc/en-us/articles/360058295354-Verify-Domains

                                                To verify your domains via email, first, select one of the available verification email addresses and make sure you have access to the associated email inbox. Typically, you will be able to choose between the following types of email addresses for your specific domain:

                                                admin@domain.com, administrator@domain.com, hostmaster@domain.com, postmaster@domain.com, webmaster@domain.com

                                            1. 2

                                              Is there any reason for randomizing, or even rotating, the CA? I don’t understand the reasoning for it. It seems entirely unrelated to the “let’s encrypt can go down” scenario.

                                              1. 12

                                                If you always use LetsEncrypt, that means you won’t ever see if your ssl.com setup is still working. So if and when LetsEncrypt stops working, that’s the first time in years you’ve tested your ssl.com configuration.

                                                If you rotate between them, you verify that each setup is working all the time. If one setup has broken, the other one was tested recently, so it’s vastly more likely to still be working.
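
                                                A sketch of how simple the rotation itself could be, choosing an ACME directory per renewal (the two URLs are the public Let’s Encrypt and ZeroSSL directories, but verify them before relying on this):

                                                ```python
                                                import random

                                                # Rotate between CAs so every renewal exercises one of the
                                                # configured setups, rather than only ever testing one.
                                                ACME_DIRECTORIES = [
                                                    "https://acme-v02.api.letsencrypt.org/directory",
                                                    "https://acme.zerossl.com/v2/DV90",
                                                ]

                                                def pick_directory() -> str:
                                                    """Return the directory URL to use for this renewal."""
                                                    return random.choice(ACME_DIRECTORIES)
                                                ```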

                                                1. 2

                                                  when LetsEncrypt stops working

                                                  That’s how I switched to ZeroSSL. I was tweaking my staging deployment, relying on a lua/openresty ACME lib running in nginx, and Let’s Encrypt decided to rate limit me for something ridiculous like several cert request attempts. I’ve had zero issues with ZeroSSL (pun intended). Unpopular opinion: Let’s Encrypt sucks!

                                                  1. 5

                                                    LE does have pretty firm limits; they’re very reasonable (imo) once you’ve got things up and running, but I’ve definitely been burned by “Oops I misconfigured this and it took a few tries to fix it” too. Can’t entirely be mad – being the default for ACME, no doubt they’d manage to get a hilariously high amount of misconfigured re-issue certs if they didn’t add a limit on there, but between hitting limits and ZeroSSL having a REALLY convenient dashboard, I’ve been moving over to ZeroSSL for a lot of my infra.

                                                  2. 2

                                                    But he’s shuffling during the request phase. Wouldn’t it make more sense to request from multiple CAs directly and have more than one cert per domain, instead of ending up with half your servers working?

                                                    I could see detecting specific errors and recovering from them, but this doesn’t seem to make sense to me :)

                                                  3. 6

                                                    It’s probably not a good idea. If you have set up a CAA record for your domain for Let’s Encrypt and have DNSSEC configured then any client that bothers to check will reject any TLS certificate from a provider that isn’t Let’s Encrypt. An attacker would need to compromise the Let’s Encrypt infrastructure to be able to mount a valid MITM attack (without a CAA record, they need to compromise any CA, which is quite easy for some attackers, given how dubious some of the ‘trusted’ CAs are). If you add ssl.com, then now an attacker who can compromise either Let’s Encrypt or ssl.com can create a fake cert for your system. Your security is as strong as the weakest CA that is allowed to generate certificates for your domain.

                                                    If you’re using ssl.com as fall-back for when Let’s Encrypt is unavailable and generate the CAA records only for the cert that you use, then all an attacker who has compromised ssl.com has to do is drop packets from your system to Let’s Encrypt and now you’ll fall back to the one that they’ve compromised (if they compromised Let’s Encrypt then they don’t need to do anything). The fail-over case is actually really hard to get right: you probably need to set the CAA record to allow both, wait for the length of the old record’s TTL, and then update it to allow only the new one.

                                                    This matters a bit less if you’re setting up TLSA records as well (and your clients use DANE), but then the value of the CA is significantly reduced. Your DNS provider (which may be you, if you run your own authoritative server) and the owner of the SOA record for your domain are your trust anchors.
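
                                                    If you want to audit what a domain currently authorizes, here is a small sketch using the third-party dnspython package (pip install dnspython; example.com is a placeholder):

                                                    ```python
                                                    import dns.resolver  # third-party: dnspython

                                                    # List the CAA records for a domain; a risky setup is one that
                                                    # authorizes more CAs than it actually needs.
                                                    for rdata in dns.resolver.resolve("example.com", "CAA"):
                                                        print(rdata.flags, rdata.tag.decode(), rdata.value.decode())
                                                    ```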

                                                    1. 3

                                                      There isn’t any reason. The author says they did it only because they can.

                                                      1. 2

                                                        I think so. A monoculture is bad in this case. LE never wanted to be the stewards of ACME itself, instead just pushing the idea of automated certificates forward. Easiest way to prove it works is to do it, so they did. Getting more parties involved means the standard outlives the organization, and sysadmins everywhere continue to reap the benefits.

                                                        1. 2

                                                          To collect expiration notification emails from all the CAs! :D

                                                          1. 2

                                                            The article says “Just because I can and just because I’m interested”.

                                                          1. 8

                                                          I remember learning about other CAs that support ACME several months back from a Fediverse admin. I’m really glad there are alternatives. Mozilla did the right thing by making the entire process open. I feel like this is more important than ever.

                                                          Mozilla has had financial troubles, and although it’s unlikely they would lose funding for LetsEncrypt, they certainly could. Second, Mozilla has made a lot of questionable political decisions, and has made it clear they care a lot about politics internally within the non-profit. Having alternatives is essential for the day when Mozilla says, “We refuse to grant you a TLS certificate because of what’s hosted on your domain.”

                                                            1. 15

                                                            Mozilla helped bootstrap Let’s Encrypt with money, staff, and expertise, but Let’s Encrypt has been a completely independent entity for a while now.

                                                              1. 6

                                                                Mozilla helped, but Linux Foundation did more in terms of staffing.

                                                                Source: Was hired by Linux Foundation to work on LE, back in 2016.

                                                              2. 9

                                                              Mozilla does not own Let’s Encrypt directly; it’s a non-profit.

                                                                The EFF is a sponsor, so denying someone a cert for political reasons will be a hard sell to them.

                                                              1. 3

                                                                Repology is a way to check which version of glibc a bunch of Linux distributions include in their respective repositories: https://repology.org/project/glibc/versions

                                                                There doesn’t seem to be a single major distro that’s upgraded to 2.34 yet in a stable release. It’s hard to rapidly release such an integral library, so we might be waiting a while before the rebuilds are finished everywhere.

                                                                1. 4

                                                                  This is not how distros work, at least most of them.

                                                                  They usually ship the version of a library that was stable when they made their last stable release and then backport important fixes.

                                                                1. 2

                                                                  But did you know that PowerShell has a built-in SSH Client?

                                                                  That’s incorrect; PowerShell doesn’t have SSH built-in. Microsoft did a bunch of work to port OpenSSH to Windows. (Source code) If you install the OpenSSH.Client feature (default in Windows 10 since 1809), you will have OpenSSH binaries located in C:\Windows\System32\OpenSSH.

                                                                  Otherwise, it’s cool to see SSH available out of the box in Windows!

                                                                  1. 1

                                                                    The thing I’d really love to see from the Windows SSH client is integration with the Windows Hello infrastructure. Windows provides some high-level APIs for generating RSA keys that are stored in the TPM if you have one or in the Secure Kernel (effectively a separate VM, isolated from the Windows kernel) if you don’t. Access to these is restricted by biometrics. If you have a user-level compromise, you can’t use them (though you can probably trick the user into using them), if you have a kernel-level compromise then you can fake the biometrics and do live attacks but you still can’t exfiltrate the keys (if they’re stored in the TPM, you can’t exfiltrate them even with a hypervisor compromise). I’d love to have something that generates RSA keypairs using the Windows Hello APIs and talks the ssh-agent protocol. I’ve seen one project that attempted this but it looks abandoned.

                                                                    1. 13

                                                                      For the curious, here is the SQL which generates a view, used by this rails controller and model.

                                                                      1. 4

                                                                        Out of interest, why is this generated on demand? When you post a reply, you must be looking up the parent post already (or have its unique identifier from the generated HTML). Can’t you just look up the author of that post and add a new row to a replies table with the {parent ID, post ID}? If you have that table with an index for parent ID, it’s easy to select all replies to posts from a single person. Given the small number of posts any given individual does, it should then be fairly cheap to do a join to look up the dates of the posts and sort.
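
                                                                        Something like this shape, sketched with stdlib sqlite3 purely for illustration (the real site is MySQL, and all names here are invented):

                                                                        ```python
                                                                        import sqlite3

                                                                        con = sqlite3.connect(":memory:")
                                                                        con.executescript("""
                                                                            -- One row per reply, written at posting time.
                                                                            CREATE TABLE replies (
                                                                                parent_comment_id INTEGER NOT NULL,
                                                                                reply_comment_id  INTEGER NOT NULL
                                                                            );
                                                                            -- Makes "all replies to this user's comments" an index lookup.
                                                                            CREATE INDEX idx_replies_parent ON replies (parent_comment_id);
                                                                        """)
                                                                        ```

                                                                        Selecting a user’s notifications is then a join from their comment IDs into replies, rather than a view over the whole comments table.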

                                                                        1. 3

                                                                          It seems that there are a hell of a lot of missing indices that should greatly improve the performance of that query.

                                                                          For example, indices on:

                                                                          • is_deleted
                                                                          • is_moderated
                                                                          • is_following
                                                                          • a partial index on score instead of a full index

                                                                          These should provide some speedup.

                                                                          Alternatively, try to create a materialized view (I know that MySQL does not have them per se, but there are workarounds) or just switch to PostgreSQL.

                                                                          1. 2

                                                                            It seems that there are a hell of a lot of missing indices that should greatly improve the performance of that query.

                                                                            I don’t know about the internals of the database, but I’m guessing that the majority of comments are not deleted/moderated so the DB might still choose to do a full scan. is_following seems promising, but this comment mentions that the predicate isn’t getting pushed into the view so it may just be doing the joins for every user at once.

                                                                          2. 2

                                                                            Wowser. Looks like fun SQL.

                                                                            The normal answer would be views, but it appears from reading other comments that this isn’t an option, so we’re left with the classic cliche answers, any of which might work: RAM-up, brute-force, change engines, shard, and so forth.

                                                                            The trick probably is figuring out which of these is the easiest to try first. I’m not a Rails guy, so I don’t know the implications of switching engines to Postgres, but that intuitively feels like the right place to start playing around.

                                                                            ADD: Forgot temp/semi-permanent tables. Sometimes you can use cron and temp tables to work the same as views would.

                                                                            ADD2: Yeah, come to think of it, maybe some kind of temp shim is the way to go. You’re not going to solve the problem, but you can pre-filter the where clauses such that the indexes will allow the outer query to return faster. You’d need to work with it a bit to be sure, tho. A lot depends on memory size, how much data we’re talking about, and how often you need to hit it.

                                                                            1. 4

                                                                              I don’t think any of these are great solutions. The real answer is figuring out the query optimizer issue, and fixing it. Since the issue isn’t fixable within a single query (MySQL not pushing down a key predicate into the view), the next step is to perform multiple queries to work around the query optimizer. The predicate in question filters 300k rows down to ~4 (mentioned elsewhere in the thread), so the app should run that initial query, and then a second query using those results.

                                                                              For some reason people tend to avoid working around query optimizer issues with multiple queries. I can’t imagine why.
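
                                                                              A sketch of that two-query shape, with stdlib sqlite3 standing in for MySQL and invented table/column names:

                                                                              ```python
                                                                              import sqlite3

                                                                              def replies_to_user(con: sqlite3.Connection, user_id: int):
                                                                                  # Query 1: the highly selective filter the optimizer refuses to
                                                                                  # push down -- grab just this user's comment IDs (~4 of ~300k rows).
                                                                                  ids = [r[0] for r in con.execute(
                                                                                      "SELECT id FROM comments WHERE user_id = ?", (user_id,))]
                                                                                  if not ids:
                                                                                      return []
                                                                                  # Query 2: feed those IDs back in, so the reply lookup is an index
                                                                                  # scan on parent_comment_id instead of a join over the whole view.
                                                                                  marks = ",".join("?" * len(ids))
                                                                                  return con.execute(
                                                                                      f"SELECT * FROM comments WHERE parent_comment_id IN ({marks})",
                                                                                      ids,
                                                                                  ).fetchall()
                                                                              ```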

                                                                              switching engines to Postgres

                                                                              The Postgres optimizer has problems too. It can get confused about lateral joins—or queries that should be optimized to lateral joins—and end up doing the same thing as here. I’ve seen Postgres queries perform full table scans on huge tables to build up one side of a hash join, even though it estimates the other side of the join will filter to ~1 row.

                                                                            2. 2

                                                                              I would move the sub-queries in the current_vote_* columns to a left join.

                                                                              1. 2

                                                                                 A reasonable compromise might be to time-bound reply notifications. If the story is 1-3 months old, do you necessarily need a reply notification? How many people are replying to stories that old? (As a proportion of total comments.)

                                                                                 With some covering indexes it might be a good enough strategy. Definitely better to have even 90% of notifications than 0%. (Unless you’re @cadey.)

                                                                                1. 3

                                                                                   I really enjoy receiving late replies to topics, or commenting on old posts. Newer responses I catch more often just by rereading the comments.

                                                                                  1. 1

                                                                                     I do too, but if we can get most of the value with minimal changes, then I think it’s a worthwhile compromise. Other options like migrating the database, setting up a VPS, or changing lots of business logic are a bigger ask for the maintainers. Plus, an okay solution gives some breathing room for a more permanent solution. (Or removes all impetus to make a permanent solution.)

                                                                                  2. 1

                                                                                     I find I would miss the replies most on the oldest topics.

                                                                                  3. 1

                                                                                     Thank you for the links. I think I would probably try to make a view for user stories and comments (my stories, my comments; filter out negative scores, unfollowed), then try to look for children (replies), and filter/rank those. Possibly that would entail changing the relationship to make it easier to query children/replies, but I’m guessing the current “where parent_id…” should work fine.

                                                                                    It would probably mean four queries, but they might be easier to benchmark/optimize.

                                                                                    Normally I’d fight strongly for fewer/one query per controller method - but obviously in this case that’s been tried.

                                                                                     Personally, I would probably consider moving to pg, but doing that because of a single query/Rails view is silly.