Threads for nogweii

  1. 21

    I disagree with the response, assuming that the original asker was using the icon font (Font Awesome) as a supplement to their buttons. That is, buttons and other UI elements were composed of a combination of both icon and text. I think that is a likely interpretation, given that they have hidden the icons from screen readers.

    In that situation, I think that it’s perfectly acceptable. The ambiguity is mitigated (if not entirely removed) by the text next to the icon. Even with the “risk” that the emojis look out of place compared to the rest of the website, I think it’s still fine. (I’m also of the opinion that websites should conform more to the client’s OS rather than fight against it: that they should blend in with the rest of the native applications rather than look distinct.)

    1. 14

      I agree. The answer is responding to a strawman. The question wasn’t whether replacing text with emojis was a bad idea, but rather whether it was a good idea to replace icons with emojis.

      Secondly, the response comments that older devices or OS versions might not have the required support, but the question does specify that this is an internal app, so presumably they have control over which devices and OS versions the app will run on and can make a decision based on that.

      Thirdly, the answer is conflating bad design with emoji use. The question is asking whether a button with an emoji, for example [✔ OK], would work well as an interface, yet the answer manages to present this as an example where that could be misinterpreted:

      often 👥 will misinterpret emojis that their peers 📦️➡️ to ➡️👥. ➡️👤 do ❌ 🙏 to have a sudden misunderstanding between 🆗 ➕ apparently also 🆗 emoji like this: 🙆;

      And finally, they seem to believe the emojis would be inserted in the middle of text strings instead of being consistently formatted as a pictogram for buttons or messages.

      I give the answer a massive 👎

    1. 8

      Yeah, SELinux documentation sucks a lot and the error messages leave a ton to be desired. Given the rise of containers and the power available with systemd’s options, I appreciate the post’s title. You can achieve most of the practical benefits with either of those solutions alone.

      Personally, I still disagree with turning it off, but having a broken and nonfunctional system is worse than an insecure one. So turning it off, temporarily I hope, is usually the right call. (Defense in depth is a nice ideal.)

      One of the things I’m happy with is that I’ve been able to automate (through Ansible) applying my own custom policies to systems. Though that did take me weeks of reading documentation and even reading source code to piece together. Ugh. Not everyone is willing to put up with that, I know.

      1. 24

        This really feels like the author has a series of (not so good, IMO) complaints about WebPKI and then decides that, since it isn’t perfect for their situation, it should all be thrown away rather than accepting even incremental improvements.

        We do not need to have perfect security, just slightly better than yesterday’s.

        Also, the reason the earlier free certificate authorities were never trusted was that they were awful at security. StartSSL had major problems that were only revealed by the audits CAs are forced to undergo to continue to be trusted; before those requirements were accepted, though, it was completely trusted by a huge swath of software. The security requirements that browsers and other maintainers mandate to be part of their trusted root programs meant that these free programs didn’t measure up. (Good or bad thing? I think net good, but it does limit the CA industry to only those that can support the ongoing financial burden. That generally precludes those without money from participating.)

        If you want a free certificate that is not issued by a US firm, check out ZeroSSL. They are an Austrian company, so you have the EU’s rules to deal with instead.

        (NB: I’m a former Let’s Encrypt employee. I do have a bias here.)

        1. 12

          A long-standing bug in Firefox suddenly got exposed because some service updated their HTTP/3 implementation, possibly on Google’s or Cloudflare’s side, both of which are used by Mozilla for their infrastructure. And Firefox will check in (unless told otherwise) early on, making it possible to hit the bug very early, resulting in Firefox being unable to load any other page with that thread. Ouch.

          1. 3

            I was wondering why Firefox was suddenly making my fans spin up and not loading anything! Wow, that’s pretty messed up.

          1. 5

            This is good, but IMO it should be SHA-256 or BLAKE2 instead, which are considered cryptographically strong, unlike MD5.

            1. 2

              Since this is just a validation script you could theoretically make it generic enough to process a handful of different hash types so that it’s more compatible.

              1. 2

                I was just thinking about this, and had two thoughts:

                • Generalize it by adding a CLI flag to indicate which hashing function is being used (something like -n md5, -n sha256, etc.); a rough sketch of that is below.
                • And/or support the Multihash format
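
                A quick sketch of what that flag could look like in Python (the -n flag name comes from the bullet above; everything else, including the argument layout and output, is my own assumption rather than the script from the post):

                    import argparse
                    import hashlib
                    import sys

                    parser = argparse.ArgumentParser(
                        description="Verify a file against a published digest")
                    parser.add_argument("-n", "--algorithm", default="sha256",
                                        choices=sorted(hashlib.algorithms_guaranteed),
                                        help="hash function the published digest uses")
                    parser.add_argument("file", help="file to verify")
                    parser.add_argument("expected", help="expected hex digest")
                    args = parser.parse_args()

                    # Stream the file so large downloads don't have to fit in memory.
                    digest = hashlib.new(args.algorithm)
                    with open(args.file, "rb") as f:
                        for chunk in iter(lambda: f.read(65536), b""):
                            digest.update(chunk)

                    if digest.hexdigest().lower() != args.expected.lower():
                        print(f"FAIL: got {digest.hexdigest()}", file=sys.stderr)
                        sys.exit(1)
                    print(f"OK ({args.algorithm})")

                Multihash would sidestep the flag entirely, since the digest value itself encodes which hash function produced it.
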
                1. 2

                  Thought about adding other formats, but considering I was nerd-sniped, I had other things I intended to do today 😅

                  Definitely gonna read up on Multihash, as this is the first time I’ve heard of it.

                2. 1

                  Feature creep 😁

                  But adding that into the script wouldn’t be too much of an exercise.

                3. 1

                  You’re absolutely right, but most sites that I’ve come across that use the pattern only provide MD5.

                  I thought about adding a flag to specify the type of sum, but feature creep 😁

                  1. 1

                    Yeah, but how would that help you run a script where the MD5 was provided :)

                  1. 1

                    Does anyone know of a similar tool for Python-based projects? It looks like it could be fairly handy, if not a tad overkill.

                    1. 2

                      I’m not familiar with a library that provides the --changelog feature out of the box, but it seems like a pretty solid idea to do that.

                      1. 1

                        If you are talking about Python projects installable via pip, you can ship the CHANGELOG.md file with the build (read here). After that, you can just write a similar regex for fetching the version numbers as well.
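
                        A rough sketch of that idea (the package name and the ## 1.2.3 heading format are assumptions for illustration, not taken from the tool above):

                            import re
                            from importlib import resources

                            # Assumes CHANGELOG.md was included as package data when the wheel was built.
                            text = resources.files("my_package").joinpath("CHANGELOG.md").read_text()

                            # Pull version numbers out of headings that look like "## 1.2.3".
                            versions = re.findall(r"^##\s+(\d+\.\d+\.\d+)", text, flags=re.MULTILINE)
                            print(versions)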

                        1. 11

                          There’s also the sorta-equivalent for Linux, as itemized by systemd, which exists as a superset of BSD’s. I don’t think they are particularly well-adopted, but hopefully they will be.
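
                          As an aside, the BSD sysexits values are exposed as constants in Python’s os module on Unix, so a tool can use them without hard-coding numbers; a small sketch:

                              import os
                              import sys

                              def main(argv):
                                  if len(argv) != 2:
                                      print("usage: tool FILE", file=sys.stderr)
                                      return os.EX_USAGE    # 64: command line usage error
                                  if not os.path.exists(argv[1]):
                                      return os.EX_NOINPUT  # 66: cannot open input file
                                  return os.EX_OK           # 0

                              if __name__ == "__main__":
                                  sys.exit(main(sys.argv))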

                          1. 5

                            I read the 200s there as a list of exit codes to avoid using, lest my program crashing be mistaken for some specific behaviour which systemd subprocesses exhibit and which the daemon has particular expectations about.

                            1. 3

                              Shells typically map signal death of a process into $? by taking the signal number and adding 128 to it. So where SIGINT is signal 2, $? will contain 130. Yes, this means that at the shell prompt, you can’t tell the difference, but the use of the higher exit status numbers is rare. On Linux, with a cap of 64 signals, that only blocks 128-192 from being usable by others, but still most Unix software has traditionally avoided the higher numbers.
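
                              A quick way to see that convention from Python (SIGKILL is used only because it can’t be caught; this is purely a demonstration):

                                  import signal
                                  import subprocess

                                  # The child kills itself with SIGKILL (signal 9).
                                  proc = subprocess.run(["sh", "-c", "kill -KILL $$"])

                                  print(proc.returncode)       # -9: Python reports signal death as a negative number
                                  print(128 + signal.SIGKILL)  # 137: what a shell would put in $?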

                              I see about 3 or 4 which software other than a daemon manager might want to use.

                            1. 2

                              Wouldn’t this create a false sense of security? Surely my browser validates an input of type “email” and warns me when the value is malformed; however, nothing stops me from manually passing an invalid e-mail address directly via POST, most simply by replacing the input type with “text”, unless there is also server-side validation.
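
                              To illustrate that concern, nothing obliges a client to honour the form’s input types at all (the URL and field name below are made up):

                                  from urllib import parse, request

                                  # Sends an obviously malformed address straight to the endpoint,
                                  # skipping whatever <input type="email"> would have enforced client-side.
                                  data = parse.urlencode({"email": "definitely not an email"}).encode()
                                  request.urlopen("https://example.com/subscribe", data=data)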

                              1. 6

                                I expect this to be used less on content sent from a client to a server, and more on the reverse: content sent from a server to a client. For example, a dynamically fetched comment on a blog post is injected into the DOM after passing through the Sanitizer API. That is, the string value in the database is untrusted.

                                Of course, you could attempt to make it trusted by running it through the Sanitizer API client-side, in the form, before it’s even stored in the database, but that leads to your very concern, as it could be bypassed. Run it through the Sanitizer both times, on submission and on display?

                                1. 2

                                  Sanitizing SVGs will be useful

                              1. 10

                                The Sanitizer API is a browser-provided implementation for the same problem DOMPurify tackles. Very nice to see this, for performance and maintenance benefits.

                                MDN has documentation on what the API looks like currently, though it is in draft stages. Here is the specification itself.

                                1. 9

                                  A String is returned with disallowed script and blink elements removed.

                                  No, why blink? I loved you blink, back in 1999. We’ll never forget you <3

                                  1. 3

                                    What I want is the <hype> tag again.

                                  2. 4

                                    The current MDN documentation is outdated. The latest API will not return strings.

                                    1. 1

                                      The article implies that React does this, as well. Do you know whether that’s the case?

                                    1. 7

                                      As an alternative if you don’t like these patterns: I help maintain geo_pattern. It’s originally from GitHub, and generates a variety of different patterns from a seed value. Also written in Ruby!

                                      1. 1

                                        Heads up: this article is from 2018, when the latest version of Hex was 0.17.3. The latest version is now 0.21.3; check the changelog for anything that might be different using Hex today. I think the commands covered in this are still mostly the same, though.

                                        1. 3

                                          Is there any public proof of this permission? I checked the linked LICENSE.txt but that hasn’t changed since 2010. I’m curious about the terms the Realtek firmware is distributed under.

                                          1. 19

                                            I like that there is now yet another ACME compliant endpoint. What we need next are clients that actually support arbitrary endpoints. There are a lot of management UIs that interface with generic clients but expose Let’s Encrypt as the only option. I want to be able to plug in my own private ACME CA but still get all of the automation benefits.

                                            1. 2

                                              99% of them do so that you can use a staging URL, don’t they?

                                              1. 1

                                                Not that many even expose the option of staging LE or not. Of those that do, it’s still hardcoded to Let’s Encrypt’s staging environment. Still not generic.

                                                1. 3

                                                  All of these allow setting the API server:

                                                  The official client does it: https://certbot.eff.org/docs/using.html#changing-the-acme-server

                                                  Acme.sh does it in the article

                                                  Terraform: https://registry.terraform.io/providers/vancluever/acme/latest/docs

                                                  Traefik: https://doc.traefik.io/traefik/https/acme/#caserver

                                                  K8s cert manager: https://cert-manager.io/docs/configuration/acme/

                                                  Which ones have you used that don’t? I get that they probably mostly want sane defaults and don’t want people filling out random MitM API servers or something, but I’ve not found one that doesn’t allow me to change it.

                                                  1. 1

                                                    I’m thinking about those that sit on top of these. For example, setting up ACME in CPanel, OpenWRT, OPNsense. Or commercial software, like a website builder or managed service provider. (Installing wordpress, gitlab, or something else for you.) It has been a while since I’ve checked on these; I’d love it if they are more flexible now.

                                                    The underlying protocol implementations are flexible, indeed. There isn’t really a sysadmin/CLI focused tool that can’t accept an arbitrary endpoint. It’s the layer above that I’m frustrated with.

                                                    1. 1

                                                      Oh! Yeah, if it’s not actually an ACME client, but a client to the client, yeah, I’ve never seen those expose arbitrary endpoints either. CPanel doesn’t even use Let’s Encrypt; it uses its own root CA. So you’re kinda stuck trusting CPanel and not even a public entity like Let’s Encrypt.

                                            1. 3

                                              I’m a big fan of ZeroSSL for larger organizations for a lot of reasons. While LE is amazing at its mission of getting more of the internet on HTTPS, it lacks some of the features I think are well worth paying for. Having a REST API you can use to integrate internal tooling is really nice, allowing applications to request and manage their own certificates. It also offers email verification for certificates, which is great for applications where Let’s Encrypt’s lack of IP whitelisting is a problem.

                                              All that said, if your org uses LE extensively, as many do, I don’t think there is a real business use case for randomizing it. If LE is down for a long period of time, then you might need to switch, but it seems strange to optimize for that edge case.

                                              1. 1

                                                Does the email validation mean that you can get a cert with no A record and no DNS control?

                                                1. 2

                                                  Yup! Let’s Encrypt didn’t want to deal with the headache of managing email at scale to automate that form of domain control, but there are a few RFC-standardized email addresses you can rely on, as zaynetro mentions. But the CA/Browser Forum baseline requirements only require (for “basic”/DV certs, anyways) that you prove you control a domain. There are lots of ways to do that, since that’s a social agreement.

                                                  1. 1

                                                    Sounds kind of crazy from the ACME perspective but email validation is acceptable to the CA/B baseline requirements and is basically the norm for DV certs for non-ACME issuers. The security implications aren’t great, and you need to make sure that e.g. no user can register one of the email addresses that’s acceptable to CA/B for this purpose, but it can be convenient for scenarios like issuing certificates for internal systems (not internet accessible) that use public domain names.

                                                    1. 1

                                                      it can be convenient for scenarios like issuing certificates for internal systems (not internet accessible) that use public domain names

                                                      I use DNS challenges for this purpose. Once I got tired of manually creating and cleaning the challenge-response records, I spent a few hours adapting one of the existing plugins to work with my DNS host.

                                                      I like this better than injecting email into the process.

                                                    2. 1

                                                      Looks like it: https://help.zerossl.com/hc/en-us/articles/360058295354-Verify-Domains

                                                      To verify your domains via email, first, select one of the available verification email addresses and make sure you have access to the associated email inbox. Typically, you will be able to choose between the following types of email addresses for your specific domain:

                                                      admin@domain.com, administrator@domain.com, hostmaster@domain.com, postmaster@domain.com, webmaster@domain.com

                                                  1. 2

                                                    Is there any reason for randomizing, or even rotating, the CA? I don’t understand the reasoning for it. It seems entirely unrelated to the “let’s encrypt can go down” scenario.

                                                    1. 12

                                                      If you always use LetsEncrypt, that means you won’t ever see if your ssl.com setup is still working. So if and when LetsEncrypt stops working, that’s the first time in years you’ve tested your ssl.com configuration.

                                                      If you rotate between them, you verify that each setup is working all the time. If one setup has broken, the other one was tested recently, so it’s vastly more likely to still be working.

                                                      1. 2

                                                        when LetsEncrypt stops working

                                                        That’s how I switched to ZeroSSL. I was tweaking my staging deployment, relying on a lua/openresty ACME lib running in nginx, and Let’s Encrypt decided to rate-limit me for something ridiculous like several cert request attempts. I’ve had zero issues with ZeroSSL (pun intended). Unpopular opinion - Let’s Encrypt sucks!

                                                        1. 5

                                                          LE does have pretty firm limits; they’re very reasonable (imo) once you’ve got things up and running, but I’ve definitely been burned by “Oops I misconfigured this and it took a few tries to fix it” too. Can’t entirely be mad – being the default for ACME, no doubt they’d manage to get a hilariously high amount of misconfigured re-issue certs if they didn’t add a limit on there, but between hitting limits and ZeroSSL having a REALLY convenient dashboard, I’ve been moving over to ZeroSSL for a lot of my infra.

                                                        2. 2

                                                          But he’s shuffling during the request-phase. Wouldn’t it make more sense to request from multiple CAs directly and have more than one cert per each domain instead of ending up with half your servers working?

                                                          I could see detecting specific errors and recovering from them, but this doesn’t seem to make sense to me :)

                                                        3. 6

                                                          It’s probably not a good idea. If you have set up a CAA record for your domain for Let’s Encrypt and have DNSSEC configured then any client that bothers to check will reject any TLS certificate from a provider that isn’t Let’s Encrypt. An attacker would need to compromise the Let’s Encrypt infrastructure to be able to mount a valid MITM attack (without a CAA record, they need to compromise any CA, which is quite easy for some attackers, given how dubious some of the ‘trusted’ CAs are). If you add ssl.com, then now an attacker who can compromise either Let’s Encrypt or ssl.com can create a fake cert for your system. Your security is as strong as the weakest CA that is allowed to generate certificates for your domain.

                                                          If you’re using ssl.com as fall-back for when Let’s Encrypt is unavailable and generate the CAA records only for the cert that you use, then all an attacker who has compromised ssl.com has to do is drop packets from your system to Let’s Encrypt and now you’ll fall back to the one that they’ve compromised (if they compromised Let’s Encrypt then they don’t need to do anything). The fail-over case is actually really hard to get right: you probably need to set the CAA record to allow both, wait for the length of the old record’s TTL, and then update it to allow only the new one.

                                                          This matters a bit less if you’re setting up TLSA records as well (and your clients use DANE), but then the value of the CA is significantly reduced. Your DNS provider (which may be you, if you run your own authoritative server) and the owner of the SOA record for your domain are your trust anchors.
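
                                                          For reference, the CAA records being discussed can be inspected directly; a small sketch using the third-party dnspython package (the domain and output are illustrative):

                                                              import dns.resolver  # pip install dnspython

                                                              # List which CAs the domain's CAA records permit to issue certificates.
                                                              for rdata in dns.resolver.resolve("example.org", "CAA"):
                                                                  print(rdata)  # e.g.: 0 issue "letsencrypt.org"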

                                                          1. 3

                                                            There isn’t any reason. The author says they did it only because they can.

                                                            1. 2

                                                              I think so. A monoculture is bad in this case. LE never wanted to be the stewards of ACME itself, instead just pushing the idea of automated certificates forward. Easiest way to prove it works is to do it, so they did. Getting more parties involved means the standard outlives the organization, and sysadmins everywhere continue to reap the benefits.

                                                              1. 2

                                                                To collect expiration notification emails from all the CAs! :D

                                                                1. 2

                                                                  The article says “Just because I can and just because I’m interested”.

                                                                1. 8

                                                                   I remember learning about other CAs that support ACME several months back from a Fediverse admin. I’m really glad there are alternatives. Mozilla did the right thing by making the entire process open. I feel like this is more important than ever.

                                                                   Mozilla has had financial troubles, and although it’s unlikely they would lose funding for Let’s Encrypt, they certainly could. Second, Mozilla has made a lot of questionable political decisions, and has made it clear they care a lot about politics internally within the non-profit. Having alternatives is essential for the day when Mozilla says, “We refuse to grant you a TLS certificate because of what’s hosted on your domain.”

                                                                  1. 15

                                                                     Mozilla helped bootstrap Let’s Encrypt with money, staff, and expertise, but Let’s Encrypt has been a completely independent entity for a while now.

                                                                    1. 6

                                                                      Mozilla helped, but Linux Foundation did more in terms of staffing.

                                                                      Source: Was hired by Linux Foundation to work on LE, back in 2016.

                                                                    2. 9

                                                                      Mozilla does not own Let’s Encrypt directly, it’s a non-profit.

                                                                      The EFF is a sponsor, so denying someone a cert for political reasons will be a hard sell to them.

                                                                    1. 3

                                                                       Repology is a way to check which version of glibc a bunch of Linux distributions include in their respective repositories: https://repology.org/project/glibc/versions

                                                                      There doesn’t seem to be a single major distro that’s upgraded to 2.34 yet in a stable release. It’s hard to rapidly release such an integral library, so we might be waiting a while before the rebuilds are finished everywhere.

                                                                      1. 4

                                                                        This is not how distros work, at least most of them.

                                                                        They usually ship the version of a library that was stable when they made their last stable release and then backport important fixes.

                                                                      1. 2

                                                                        But did you know that PowerShell has a built-in SSH Client?

                                                                         That’s incorrect; PowerShell doesn’t have SSH built in. Microsoft did a bunch of work to port OpenSSH to Windows. (Source code) If you install the OpenSSH.Client feature (the default in Windows 10 since 1809), you will have OpenSSH binaries located in C:\Windows\System32\OpenSSH.

                                                                        Otherwise, it’s cool to see SSH available out of the box in Windows!

                                                                        1. 1

                                                                          The thing I’d really love to see from the Windows SSH client is integration with the Windows Hello infrastructure. Windows provides some high-level APIs for generating RSA keys that are stored in the TPM if you have one or in the Secure Kernel (effectively a separate VM, isolated from the Windows kernel) if you don’t. Access to these is restricted by biometrics. If you have a user-level compromise, you can’t use them (though you can probably trick the user into using them), if you have a kernel-level compromise then you can fake the biometrics and do live attacks but you still can’t exfiltrate the keys (if they’re stored in the TPM, you can’t exfiltrate them even with a hypervisor compromise). I’d love to have something that generates RSA keypairs using the Windows Hello APIs and talks the ssh-agent protocol. I’ve seen one project that attempted this but it looks abandoned.