Threads for freddyb

  1. 7

    Decentraleyes is not really supported anymore. CDNs are harmful, so more people should be migrating off them anyway.

    CleanURLs does more to strip tracking tokens, but even it can be replaced with uBlock Origin.

    A lot of the upcoming (or already here?) domain isolation code obsolesces this ‘style’ of container usage. What you really want containers for is, say, when you do contract work with many clients and need to keep each client’s work isolated, or when you need multiple accounts for a service that doesn’t support being logged in under two accounts at once.

    1. 10

      Firefox’s “total cookie protection” (formerly known as “(dynamic) first party isolation”) does proper storage isolation that obsoletes the privacy aspects of using containers. It double-keys storage like cookies not just on the site’s domain but also on the first-party domain (i.e., what’s in the address bar). This makes facebook.com (as the top-level page) not share cookies with a facebook.com widget embedded in foo.com.
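
      A toy sketch of the double-keying idea (illustrative only, nothing like Firefox’s actual implementation):

        use std::collections::HashMap;

        fn main() {
            // Cookie jars keyed by (top-level site, embedded site) instead of
            // just the embedded site.
            let mut jars: HashMap<(&str, &str), Vec<&str>> = HashMap::new();
            // Visiting facebook.com directly:
            jars.entry(("facebook.com", "facebook.com")).or_default().push("session=xyz");
            // A facebook.com widget embedded in foo.com gets a separate jar:
            jars.entry(("foo.com", "facebook.com")).or_default().push("session=abc");
            // The embedded widget cannot see the first-party session cookie.
            assert_ne!(
                jars[&("facebook.com", "facebook.com")],
                jars[&("foo.com", "facebook.com")]
            );
        }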

      However, if you want to use two logins of the same site (e.g., personal gmail, work gmail) then containers are still really nice.

      1.  

        So you think having each new domain open in its own temporary container is not worth the effort?

        It runs pretty well now except for Gmail and Outlook.

        1.  

          I have a personal preference to remove customization and complexity over time. If it works for you, keep doing it.

          Temporary containers will obviously shed state more thoroughly. Built-in “total cookie protection” might be a bit more convenient, though.

      2.  

        Thanks for the input.

        I’ve still found Decentraleyes quite effective. I’ve been meaning to look at LocalCDN as an alternative - I’ve added a mention of it to my post. CDNs aren’t going away any time soon and are often a good thing - but I don’t want random fonts loading from around the internet just so they can track me, etc.

        Good idea re: using uBlock rules to perform the same behaviour as CleanURLs - I’ll look into that!
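
        For anyone curious, uBlock Origin’s removeparam filter option covers the common cases; a few illustrative rules (the parameter names here are just examples):

          *$removeparam=utm_source
          *$removeparam=utm_medium
          *$removeparam=fbclid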

        1.  

          In my case, LocalCDN broke a couple of websites rather than solving anything.

          Give https://blog.privacyguides.org/2021/12/01/firefox-privacy-2021-update/#localcdn-and-decentraleyes a read, though. tl;dr: you shouldn’t worry about tracking if you have Enhanced Tracking Protection set to Strict.

          1.  

            Interesting! Thank you, I’ll make a note of this.

          2.  

            CDNs good thing

            I agree with the premise behind this post. As far as fonts go, blocking Google Fonts specifically covers the majority of my concern. I’ve taken this to heart personally: on all the projects I’ve contracted on, I pushed for removing CDNs where possible. Self-hosting assures better tree-shaking, means the user gets the content they seek with increased privacy, makes the content less likely to be blocked, and if our server is down, there are bigger problems.

            If I were to suggest additional add-ons:

            • Redirect AMP to HTML
            • Mailvelope
            • Geo URI Handler
        1.  

          Instead of “copy plain text”, you might want to PASTE plain-text, which should work without an addon:

          • Copy something with formatting
          • Navigate to a content sink
          • Paste with SHIFT pressed (e.g., CTRL+SHIFT+V on Linux/Windows or CMD+SHIFT+V on macOS)
          1.  

            Thanks, I do use CMD+Shift+Option+V at times, but it’s awkward and I just prefer it if everything I copy is plain text by default.

          1. 3

            Without TLS, both sides are presumably using a single sendfile syscall for the body of the request, so this is mostly a test of how fast the Linux kernel can transfer to/from the Ethernet interface. Although on the receiving end I guess it’s also writing to disk.
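
            For illustration, a minimal sketch of that zero-copy path (Linux-only, using the libc crate; hypothetical helper, error handling kept crude):

              use std::fs::File;
              use std::net::TcpStream;
              use std::os::unix::io::AsRawFd;

              // Stream `len` bytes of `file` to `sock` without copying them through
              // userspace; the kernel advances `offset` as it sends.
              fn send_body(sock: &TcpStream, file: &File, len: usize) -> std::io::Result<()> {
                  let mut offset: libc::off_t = 0;
                  while (offset as usize) < len {
                      let sent = unsafe {
                          libc::sendfile(sock.as_raw_fd(), file.as_raw_fd(), &mut offset, len - offset as usize)
                      };
                      if sent <= 0 {
                          return Err(std::io::Error::last_os_error());
                      }
                  }
                  Ok(())
              }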

            In trying to optimize networking code over the years I’ve often been frustrated how much slower it runs than the hardware maximum, like orders of magnitude slower. That’s because it’s doing more work on one or both ends — database queries, encoding/parsing data formats, compression/decompression and so on. In some cases it’s worth “wasting” resources keeping a copy of the data in an easy-to-stream format.

            It’s also a cool use case for append-only logs — if you have a file structured that way, it can be served as-is over HTTP as a static file, and clients can use conditional range requests to sync with it extremely cheaply.
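
            For example, a client that already has the first 4096 bytes of such a log can ask only for what was appended since (hypothetical file name and ETag):

              GET /events.log HTTP/1.1
              Host: example.com
              Range: bytes=4096-
              If-Range: "etag-from-last-sync"

            If the entity is unchanged, the server answers 206 Partial Content with just the new tail; if the log was rewritten, the ETag no longer matches and the client gets a full 200 response instead.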

            1. 6

              Caddy is actually not currently using sendfile (see https://github.com/caddyserver/caddy/issues/4731), and still manages to saturate the 25 Gbit/s without trouble :)

              1. 1

                Yep, and it would not explain why the Go client is slower. Maybe the net package in Go’s standard library is also doing something suboptimal?

              2. 3

                Although on the receiving end I guess it’s also writing to disk.

                The tests write to /dev/null, so not really.

              1. 4

                SeaMonkey. Wow :)

                1. 2

                  That is a name I have not heard in a very long time. (Insert Star Wars meme here.)

                1. 6

                  100 versions later

                  This seems to be playing a little loose with the facts. At some point Firefox changed their versioning system to match Chrome, I assume so that it wouldn’t sound like Firefox was older or behind Chrome in development. Firefox did not literally travel from 1.0 to 100. So it probably either has fewer or more than 100 versions, depending on how you count. UPDATE: OK I was wrong, and that was sloppy of me, I should have actually checked instead of relying on my flawed memory. There are in fact at least 100 versions of Firefox. Seems like there are probably more than 100, but it’s not misleading to say that there are 100 versions if there are more than 100.

                  That said, this looks like a great release with useful features. Captions for picture-in-picture video seem helpful, and I’m intrigued by “Users can now choose preferred color schemes for websites.” On Android, they finally have HTTPS-only mode, so I can ditch the HTTPS Everywhere extension.

                  1. 6

                    Wikipedia lists 100 major versions from 1 to 100.

                    https://en.m.wikipedia.org/wiki/Firefox_version_history

                    What did happen is that Mozilla adopted a 4 week release cycle in 2019 while Chrome was on a 6 week cycle until Q3 2021.

                    1. 4

                      They didn’t change their version scheme, they increased their release cadence.

                      1. 7

                        They didn’t change their version scheme

                        Oh, but they did. In the early days they used a more “traditional” way of using the second number, so we had 1.5, and 3.5, and 3.6. After 5.0 (if I’m reading Wikipedia correctly) they switched to increasing the major version for every release regardless of its perceived significance. So there were in fact more than 100 Firefox releases.

                        https://en.wikipedia.org/wiki/Firefox_early_version_history

                        1. 3

                          I kinda dislike this “bump major version” every release scheme, since it robs me of the ability to visually determine what may have really changed. For example, v2.5 to v2.6 is a “safe” upgrade, while v2.5 to v3.0 potentially has breaking changes. Now moving from v99 to v100 to v101, well, gotta carefully read release notes every single time.

                          Oracle did something similar with JDK. We were on JDK 6 for several years, then 7 and then 8, until they ingested steroids and now we are on JDK 18! :-) :-)

                          1. 7

                            Sure for libraries, languages and APIs, but Firefox is an application. What is a breaking change in an application?

                            1. 4

                              I got really bummed when Chromium dropped the ability to operate over X forwarding in SSH a few years ago, back before I ditched Chromium.

                              1. 1

                                Changing the user interface (e.g. keyboard shortcuts) in backwards-incompatible ways, for one.

                                And while it’s true that “Firefox is an application”, it’s also effectively a library with an API that’s used by numerous extensions, which has also been broken by new releases sometimes.

                                1. 1

                                  My take is that it is the APIs that should be versioned because applications may expose multiple APIs that change at different rates and the version numbers are typically of interest to the API consumers, but not to human users.

                                  I don’t think UI changes should be versioned. Just seems like a way to generate arguments.

                              2. 6

                                It doesn’t apply to consumer software like Firefox, really. It’s not a library for which you care if it’s compatible. I don’t think version numbers even matter for consumer software these days.

                                1. 5

                                  Every release contains important security updates. Can’t really skip a version.

                                  1. 1

                                    Those are all backported to the ESR release, right? I’ve just noticed that my distro packages that; perhaps I should switch to it as a way to get the security fixes without the constant stream of CADT UI “improvements”…

                                    1. 2

                                      Most. Not all, because different features and such. You can compare the security advisories.

                                2. 1

                                  Oh, yeah, I guess that’s right. I was focused in on when they changed the release cycle and didn’t think about changes earlier than that. Thank you.

                            1. 8

                              Reminds me of some IRC daemons doing case-insensitive comparisons for special characters and therefore treating nicknames like abcde|{} as equal to ABCDE\[]. It’s indeed a property of the ASCII table: an XOR with 0x20 will flip those characters from one case to the other.
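
                              A tiny sketch of that mapping (the 0x20 bit is exactly ASCII’s case bit):

                                // `x ^ 0x20` toggles case for A-Z/a-z, and the same bit flip maps
                                // [ \ ] ^ to { | } ~, which the "rfc1459" casemapping treats as
                                // upper/lower pairs.
                                fn irc_lower(c: u8) -> u8 {
                                    match c {
                                        b'A'..=b'Z' | b'[' | b'\\' | b']' | b'^' => c | 0x20,
                                        _ => c,
                                    }
                                }

                                fn irc_eq(a: &str, b: &str) -> bool {
                                    a.bytes().map(irc_lower).eq(b.bytes().map(irc_lower))
                                }

                                fn main() {
                                    assert!(irc_eq("abcde|{}", "ABCDE\\[]"));
                                }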

                              1. 18

                                Back before we had standardised 8-bit character sets with ASCII in the bottom half, we had standardised 7-bit character sets based on ASCII with “lesser used” characters replaced as needed. IRC was invented in Finland, and it turns out the ASCII variant used by Sweden and Finland replaces [\] with ÄÖÅ and {|} with äöå. So the IRC protocol defines those bytes as case-insensitively equal, and conforming implementations must do the same even if you’re using an encoding that treats them as punctuation instead of letters.

                                1. 2

                                  Thanks. I didn’t know that!

                                  1. 1

                                    I think that applies to most ISO-646 variants.

                                    1. 1

                                      The encoding used by Microsoft in Japan replaced \ with ¥, so Japanese users came to expect paths to look like C:¥Windows¥system32¥ etc.

                                  1. 15

                                    2FA/MFA became so annoying that I’m now meaning to get an Android emulator running just to get these passcodes. I’m sick of having to grab my phone, unlock it, open some app or wait for a text, rush to type the code in before it resets, etc. Such a huge pain in the ass.

                                    1. 10

                                      1Password includes TOTP for mimicking 2FA. Bitwarden does as well, but it’s a bit clunkier.

                                      1. 10

                                        It’s not really “mimicking” — it’s a TOTP app generating codes the same way as any other TOTP app.

                                        1. 13

                                          It’s mimicking that there is a second factor, which oftentimes implies a second, isolated piece of hardware.

                                        2. 10

                                          KeepassXC, a non-subscription-based open source password manager, supports TOTP too.

                                        3. 6

                                          It took me a little while to get used to remembering my Yubikey, but it’s been pretty great for me. I have one that’s USB-C on one end and Apple Lightning on the other. Also, if I switched to a Chromium-based browser, I could use the Mac’s Touch ID for 2FA (Firefox on Mac doesn’t support it, though).

                                          Disclosure: GitHub employee, but not involved with this security effort.

                                          1. 4

                                            On Windows, you can use a TPM and on iOS / Android you can use their credential manager (which is secure on iOS and may or may not be secure on Android depending on how much of a cheapskate the handset manufacturer was). GitHub has done a fantastic job on making this usable. I haven’t used a password with GitHub for a few years for anything other than adding a new device.

                                            Disclosure: Microsoft employee, but not working directly with GitHub on anything, just a very happy user (of everything except their complete misunderstanding of the principles of least privilege and intentionality in their application of the Zero Trust buzzword).

                                          2. 4

                                            keepassxc allows storing 2FA tokens

                                            1. 3

                                              you can use oathtool to generate them directly
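
                                              For example (with a made-up base32 secret):

                                                oathtool --totp --base32 "JBSWY3DPEHPK3PXP"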

                                              1. 2

                                                Buy a USB-A U2F key and leave it permanently plugged into the computer

                                                1. 1

                                                  Store the TOTP secret somewhere (I have it in Bitwarden, which also lets me generate tokens directly through the official clients) and run it through oathtool to generate a single-use token. On my setup I can generate & paste a token with ydotool with a single keystroke.

                                                  1. 1

                                                    2FA on GitHub rarely shows up; I do have it enabled, and I pretty much never need to enter a second factor in daily usage. It’s the same as 2FA with Google, which is rarely needed in daily usage. It’s pretty much for sensitive operational changes to accounts (repos in this case, I guess), logging in from new devices, or from a device that hasn’t been used in a while. Other platforms are a bit more annoying, for sure, but I feel GitHub gets the balance right in this regard. I’m actually surprised they’re putting enforcement almost 1.5 years out, though… that seems a bit too long IMO.

                                                    1. 1

                                                      My OnlyKey covers FIDO2 and TOTP inputs with ease. It came with a keychain, so it stays right next to my home key and my motorbike key, making it hard to forget about.

                                                      Additionally, Password Store syncing between Linux and Android has worked well, and its OTP plugin covers that aspect as well.

                                                      1. 1

                                                        I have a template Perl script I use for TOTP. I copy it over and put in the new key, and run it from the shell to get a TOTP code. I try very hard not to let them use my phone for this.

                                                        1. 1

                                                          I try very hard not to let them use my phone for this.

                                                          Why though?

                                                          1. 1

                                                            If I lose my phone, I’m potentially screwed, depending on what recovery mechanisms there are. But I can back up a Perl script and store it securely.

                                                            1. 1

                                                              You can back up the QR code from the TOTP app too. Also, GitHub gives you backup codes to print out.

                                                      1. 5

                                                          Seems kinda irresponsible to be marketing an in-person event right now…

                                                        1. 9

                                                            Thank you for the feedback, but a few points: the event is in Portugal, one of the countries that has handled the pandemic best (for example, 91% of the population is fully vaccinated). Furthermore, since it’s the first edition in Europe, I’m not expecting many folks. If I had more than 40 attendees, it’d be unexpected.

                                                          Finally, there are two formats: in-person and online, as well as free streaming for both days. If someone doesn’t want to go to an in-person event, it’s all good.

                                                          1. 4

                                                            I’ve basically suspended organizing my in-person conferences until the pandemic is “over” enough that I can safely plan for an event 18 months out. My small events shoot for break-even and my large events generate enough revenue to subsidize our meeting space and pay volunteers a small honorarium (think: new computer money, mostly). If a large event were to get derailed because of a flare-up in the pandemic, someone would be declaring bankruptcy.

                                                              I think an in-person event can be done relatively safely for participants, but the uncertainty of travel and the financial risk to the organizers and speakers just seem too much to handle right now. I’ve seen some in-person conferences succeed in areas of low concern, but those have been small cities for the most part. I went to a 150-person conference in Pittsburgh in September and found a lot of the pandemic, uh, “ornamentations,” for lack of a better term: masks that made people hard enough to hear that you’d end up standing within a foot of each other and screaming until you were hoarse, and rapid tests required at the door because no send-out test could come back within the 72-hour maximum window at the time.

                                                            TBH, I’m pretty burned out on online conferences, too. I’ve attended a lot of them and even pivoted Heartifacts to the virtual format in 2020. I’ve just not had the depth of experience or the focus to really enjoy virtual conferences to any degree similar to an in-person conference at scale. Folks tell me that they loved Heartifacts because it was small and intimate and breakout rooms felt like talking circles at the in-person first version a couple years prior.

                                                            1. 0

                                                              Please do not impose your fear on others.

                                                                We are over 2 years in, most countries in Europe (rightfully) dropped all their mandates, and it’s time to move on. The risks are comparable to a normal influenza infection, and the other tangible damages (suicides, depression, economic recession, children’s development) far outweigh them, in my opinion.

                                                              If you decide not to go outside, godspeed to you, but calling those who decide to live their lives again irresponsible is crass.

                                                              1. 3

                                                                  I want to agree with you, but I think it’s important to note that the risk is not just per country but also per person. Not everyone can afford to move on, and everyone has to be understanding. For a long while.

                                                                PS: twice recovered, thrice vaccinated. have been to a conference last week.

                                                            1. 10

                                                              Even in userland I would like to at least know “can this panic” or “does this allocate” and things like that without having to recursively read docs/code.

                                                              1. 3

                                                                It seems like something that could be automated by analyzing a call graph.

                                                                1. 2

                                                                  One difficulty with this is that Rust relies on optimization to remove panics, e.g.

                                                                  let a = [1, 2, 3];
                                                                  // the optimizer proves i < 3, so the bounds check (and its
                                                                  // panic path) never reaches the release binary
                                                                  for i in 0..3 { a[i]; }
                                                                  

                                                                  Can’t panic and won’t have any panicking code in the release binary, but it does have a panicking index call in its call graph.

                                                                  1. 1

                                                                    I can’t think of a reason why this is bad, but it is remarkable to see a compiler that actually corrects code.

                                                                2. 1

                                                                  I wonder if this could be enforced or checked at compile time.

                                                                  1. 4

                                                                    There are some truly awful hacks to do it as a library: https://docs.rs/no-panic/latest/no_panic/ I don’t think there’s any inherent reason it couldn’t be in the compiler; it’s just that it’s a language addition and no one has written an RFC for it.
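
                                                                    Usage is roughly like this (a sketch based on the crate’s docs; as I understand it, it forces a link error when a panic path survives, so it only behaves under optimization):

                                                                      use no_panic::no_panic;

                                                                      #[no_panic] // linking fails if a panic path survives optimization
                                                                      fn sum(a: &[i32; 3]) -> i32 {
                                                                          // provably in bounds, so the bounds-check panic is removed
                                                                          a[0] + a[1] + a[2]
                                                                      }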

                                                                    1. 3

                                                                      It would be really nice.

                                                                      For example, there’s std::thread::spawn() → JoinHandle<T> which can panic, so instead you use .spawn() → Result<JoinHandle<T>> on a thread::Builder, like the docs suggest.

                                                                      The docs for that one say it can panic "if a thread name was set and it contained null bytes", but is that really the only condition? No, it can panic for other recoverable errors as well; the Result doesn’t capture all of them.

                                                                      So it gets hard quickly.
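
                                                                      For reference, a minimal sketch of the fallible variant (hypothetical thread body):

                                                                        use std::thread;

                                                                        fn main() -> std::io::Result<()> {
                                                                            // Builder::spawn returns io::Result instead of panicking on spawn failure...
                                                                            let handle = thread::Builder::new()
                                                                                .name("worker".into())
                                                                                .spawn(|| 2 + 2)?;
                                                                            // ...but join() still yields a Result for panics inside the thread.
                                                                            let answer = handle.join().expect("worker thread panicked");
                                                                            println!("{answer}");
                                                                            Ok(())
                                                                        }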

                                                                      1. 1

                                                                        Maybe there’s a flipside that’s easier: crates could declare where they do panic (or guarantee that they never do). Obviously this will be easier, and it’s more reason for special crates that will be (or have been) built with those use cases in mind.

                                                                  1. 5

                                                                    I’d love to know just how long this took. Do I also correctly assume the author had a pretty good understanding of rustc or the LLVM toolchain before?

                                                                    1. 2

                                                                      luqmana made >300 commits to rust-lang/rust (see here), so yes.

                                                                    1. 8

                                                                      Uh. If you don’t like their product, use another? Cloudflare Tunnels? wireguard?

                                                                      1. 27

                                                                        People are allowed to criticize things and still use them.

                                                                        1. 3

                                                                          Yes. You’re right. And it’s totally legitimate criticism. It’s quite a dependency for what’s essentially your new network plane.

                                                                      1. 1

                                                                        X series Thinkpads (x220 then x230, x250, now x390)

                                                                        1. 10

                                                                          How many connections to Google does it make while compiling/booting?

                                                                          1. 11

                                                                            It’s a shame Google can’t run open-source projects. Fuchsia looks like one of the more interesting operating systems but as long as Google has complete control over what goes in and no open governance it’s not something I’d be interested in contributing to.

                                                                            1. 11

                                                                              To be fair to Google - they’re doing work in the open that other companies would do privately. While they say they welcome contributions they’re not (AFAIK) pretending that the governance is anything it’s not. On their governance page, “Google steers the direction of Fuchsia and makes platform decisions related to Fuchsia” – honest if not the Platonic ideal of FOSS governance.

                                                                              To put it another way - they’re not aiming for something like the Linux kernel. They know how to run that kind of project, I’m sure, but the trade-off would be to (potentially) sacrifice their product roadmap for a more egalitarian governance.

                                                                              Given that they seem to have some product goals in mind, it’s not surprising or wrong for them to take the approach they’re taking so long as they’re honest about that. At a later date they may decide the goals for the project require a more inclusive model.

                                                                              If the road to Hell is paved with good intentions, the road to disappointment is likely paved with the expectation that single-vendor initiatives like this will be structured altruistically.

                                                                              1. 6

                                                                                The governance model is pretty similar to Rust’s in terms of transparency: https://fuchsia.dev/fuchsia-src/contribute/governance/rfcs

                                                                                Imperfect in that currently almost all development is done by Google employees, but that’s a known bug. And (to evolve the animal metaphors) there’s a chicken-and-egg issue here: without significant external contributions, it’s hard for external contributors to have a significant impact on major technical decisions.

                                                                                This same issue exists for other OSes like Debian, FreeBSD, etc - it’s the major contributors that have the biggest decision making impact. Fuchsia has the disadvantage that it’s been bootstrapped by a company so most of the contributors, initially, work for a single company.

                                                                                I’m optimistic that over time the diversity of contributors will improve to match that of other projects.

                                                                                1. 4

                                                                                  A real shame indeed. Its design decisions seem very interesting.

                                                                                  1. 1

                                                                                    Yeah, I’d bet the moment they have what they wanted it’ll be closed down, because this is ultimately Google’s everything-owned-without-GPL OS.

                                                                                  2. 7

                                                                                    Probably zero. Or if you’re using 8.8.8.8 for your DNS probably less than Windows or macOS.

                                                                                    1. 5

                                                                                      They all start like this, but in the end it will be another Chrome.

                                                                                      1. 5

                                                                                        Co-developed with companies as diverse as Opera, Brave, Microsoft and Igalia, as well as many independent individuals? As a Fuchsia developer that’s a future I aspire to.

                                                                                        1. 13

                                                                                          Chrome, which refused to accept FreeBSD patches (with a community willing to support them) because of the maintenance burden relative to market share, yet accepted Fuchsia patches, passing the same maintenance burden on to the rest of the contributors in spite of an even smaller market share? If I were an antitrust regulator looking at Google, their management of the Chromium project is one of the first places I’d look. Good luck building an Android competitor if you’re not Google: you need Google to accept your patches upstream to be able to support the dominant web browser. Not, in my mind, a great example of Google running an inclusive open source project.

                                                                                          1. 6

                                                                                            It’s not just about whose labor goes into the project, but about who decides the project’s roadmap. That said, maybe it’s about time to get the capability-security community interested in forking Fuchsia for our own needs.

                                                                                            1. 3

                                                                                              You should be more worried about the “goma is required to build Chrome in under 5 hours” future, in my opinion.

                                                                                              1. 0

                                                                                                Keep aspiring on a Google salary. It would be good to disclose the conflict of interest, btw.

                                                                                                1. 11

                                                                                                  I mentioned that I’m a Fuchsia developer. I’m not sure what my conflict of interest here is. I’m interested in promoting user freedom by working on open source software across the stack and have managed to find people to pay me to do that some of the time, though generally less than I would have made had I focused on monetary reward rather than the impact of my work.

                                                                                          2. 5

                                                                                            The website doesn’t have working CSS without allowing gstatic.com, so I’d guess at least one?

                                                                                            1. 1

                                                                                              /me clutches pearls

                                                                                          1. 18

                                                                                            I was prepared to groan but there are some superb sentiments in here, well articulated. I’d be interested to know why it was written now and if it’s meant to signal any changes in direction for Firefox.

                                                                                            These two lines stood out to me as things that mean a lot to me but honestly wouldn’t have expected to be said by Mozilla.

                                                                                            Our strategy is to categorize [web] development techniques into increasing tiers of complexity, and then work to eliminate the usability gaps that push people up the ladder towards more complex approaches.


                                                                                            …people have a user agent — the browser — which acts on their behalf. A user agent need not merely display the content the site provides, but can also shape the way it is displayed to better represent the user’s interests.

                                                                                            1. 8

                                                                                              I’d be interested to know why it was written now and if it’s meant to signal any changes in direction for Firefox

                                                                                              It’s more of a “justification” than a new direction. The background is that other browsers develop and ship APIs that are sometimes hard or even impossible to bring to the web platform without conflicting with the core values of Mozilla. Pushing back on individual standards can be time-consuming and repetitive. (See https://mozilla.github.io/standards-positions/)

                                                                                              Among other things, this document serves as a long form explanation of these core values.

                                                                                              1. 5

                                                                                                Background is that other browsers develop and ship APIs that are sometimes hard or even impossible to bring to the web platform without conflicting with the core values of Mozilla. Pushing back on individual standards can be time consuming and repetitive. (See https://mozilla.github.io/standards-positions/)

                                                                                                Thank you for that link! I took a look at the “harmful” section and I was shocked by the amount of bad ideas. And they keep coming! It’s great that there’s at least someone opposing this madness.

                                                                                                1. 3

                                                                                                  I will never get over the ultimate in bad ideas: the SVG working group trying to give SVG raw socket access.

                                                                                                  1. 2

                                                                                                    You’re not serious?

                                                                                                    1. 2

                                                                                                      Yuuuup, there’s quite a story, but basically it boils down to this: mobile phone software manufacturers in Japan (I think?) were required to implement things in terms of standards (or something like that). None thought a full browser was possible at the time, and the HTML5 spec was still in its infancy, so it hadn’t split out into sub-specifications yet.

                                                                                                      That meant that to get (for example) XHR they’d need to implement a full browser. Obviously such a thing was impossible on a phone :D

                                                                                                      The solution was to give the SVG spec everything that they needed, including raw sockets, an ECMAScript subset that only had integers, etc

                                                                                                      Suffice it to say that when we implemented SVG in Mobile Safari, we said raw sockets were not a thing that would happen.

                                                                                                      The SVG WG of the era was not the most functional.

                                                                                                    2. 2

                                                                                                      Oh my gosh this sounds horrifying. Thanks for the explanation below!

                                                                                              1. 4

                                                                                                Meh. It’s not Mozilla’s job to protect people from the web. This clearly hasn’t worked out so far, and the issue isn’t with Mozilla but with the economic system. The web is also not structured in such a way that it can be its own economic system. I believe the only way is to move the web to a different encoding, but I am heavily biased.

                                                                                                1. 15

                                                                                                  Mozilla, the foundation, explicitly says it IS their job. That’s the main purpose of Mozilla’s existence. The browser is just one of the tools to further those goals. Admittedly the most powerful and popular one that Mozilla has :)

                                                                                                  1. 7

                                                                                                    I know they say that, I read the document you linked. I am explicitly commenting on that utterance.

                                                                                                    However, I don’t care what they say because I don’t trust them. They broke my trust repeatedly and even if I did trust them I still don’t think they are capable of protecting anyone from anything while also maintaining the lifestyles / salaries that they do.

                                                                                                    1. 7

                                                                                                      Basically this. They have a declining share of the browser usage for a reason. They have consistently taken away functionality that allowed me to personalize my experience to my needs and forced their vision down my throat. I finally had enough of the abusive relationship and moved on. I don’t want another vpn service or email service either. I agree with a lot of their points, but their actions directly contradict them.

                                                                                                      1. 5

                                                                                                        Moved on to what?

                                                                                                        1. 1

                                                                                                          Vivaldi. The things I really like are vertical tabs and tab stacks.

                                                                                                1. 3

                                                                                                  Having dealt with issuing and publishing CVE details, I can tell you it’s a pain. Up until very recently there was no good tooling, so you had to roll your own: synchronize stuff with bug trackers and spreadsheets. It’s no fun. And once you commit a mistake into the CVE details, it’s hard to take it back. It’s a shame that people make stuff up, especially since the internet never forgets…

                                                                                                  Maybe Apple thought they need a meta-CVE to talk about “outdated curl on macOS”?

                                                                                                  1. 4

                                                                                                    People involved in Red Hat and Debian used to help out the Django project by doing CVEs for us. Then we switched to doing our own directly through the MITRE form. I think we’ve had a couple of people threaten to file their own CVEs for things we didn’t feel needed to invoke our security process, but I don’t know of any that have succeeded yet.

                                                                                                  1. 10

                                                                                                    The full document is worth a read too. See https://webvision.mozilla.org/full/

                                                                                                    1. 5

                                                                                                      “Single Sign-On” or “Single Point of Failure”? Ugh. My heartfelt wishes to all defenders and incident response folks out there.

                                                                                                      1. 4

                                                                                                        Done well, I would always prefer a single reference login system, where it is kept up to date. The alternative tends to be a million silos of local accounts, and the corresponding mess of never-removed accounts from infrastructure changes or people leaving.

                                                                                                        In one place, you can make sure that you have a canonical reference where policies are actually applied. However, I would also steer clear of unmitigated vendor “support” access as well, instead having an account that is enabled for them when required and removed again.

                                                                                                        1. 9

                                                                                                          This is the thing people don’t seem to get: outsourcing auth to a specialist provider is still the safer option by a large margin. The amount of stuff you have to get right – not “eh, good enough” or “MVP”, but actually full-on works-every-time correct – to do auth is just staggering. Maybe we need to start expressing it in terms of something more quantifiable, like mean time to breach, but Okta having an incident (even a big, scary, bad incident) is not really an argument to move to “everybody do their own”.

                                                                                                          1. 2

                                                                                                            Hey, if I had IAM/SSO as my responsibility, I’d really shy away from doing it myself too! But having the keys to so many kingdoms in one org is just scary. I wonder if there are better architectures out there :)

                                                                                                            1. 2

                                                                                                              Excellent points! The implementation aspect is one area that roll-your-own can fall down, and it’s too bad that the consumer-facing, decentralized options haven’t really gone anywhere. (Say, Mozilla’s Persona service: https://en.wikipedia.org/wiki/Mozilla_Persona) Maybe OpenID Connect will go somewhere, but I certainly have no interest in using Google/Twitter/Facebook to log in to other sites.

                                                                                                              1. 1

                                                                                                                I would love for the industry to standardize on something that isn’t tightly coupled to JWT.

                                                                                                                1. 1

                                                                                                                  It’s a complex spec for sure, with enough ways to do it wrong. Would you suggest something else?

                                                                                                                  1. 2

                                                                                                                    Several plausible alternative token systems, designed by security people and with much better overall philosophies, have been proposed (PASETO, Macaroons, etc. etc.). None have caught on because JWT has all the inertia.

                                                                                                        1. 1

                                                                                                          (I’ve seen a prior submission about this here with a non-existing link, so I’ve submitted the new one :) )

                                                                                                          1. 2

                                                                                                            Yup. It looks like Microsoft’s BlueHat IL YouTube channel might have deleted and re-uploaded the video, causing a new URL to be created.

                                                                                                            1. 2

                                                                                                              Thanks for posting it again, I must have missed the first one.

                                                                                                            1. 15

                                                                                                              Wait, what? Do I understand this correctly?

                                                                                                              Cloudflare fixes a problem that they created themselves by inserting Cloudflare in between an app publisher and a user by asking the user to install a Cloudflare extension so that it can query a Cloudflare endpoint to make sure Cloudflare did not mess with said website app?

                                                                                                              And also: do they really direct my Firefox browser to a Chrome extension in the same article where they argue that ‘Security should be convenient’?

                                                                                                              1. 8

                                                                 I believe you misunderstood; the main picture of the article actually explains it really well.

                                                                                                                • Whenever WhatsApp is making a new release of their website, they upload the cryptographic hashes of the assets to cloudflare
                                                                                                                • Whenever you open the WhatsApp website, the “Code Verify” extension will be able to compare the software delivered from WhatsApp with a hash from a “trusted third-party” (Cloudflare), to ensure it is indeed the latest and has not been tampered with.

                                                                 I suppose the idea is that internet access to Cloudflare is even harder to tamper with undetected than access to WhatsApp, and that through the partnership WhatsApp benefits from Cloudflare’s reach.

                                                                                                                1. 7

                                                                   Let me explain my thought process a bit, because I still don’t get it.

                                                                                                                  Without Cloudflare:

                                                                   • WhatsApp releases a new version of their app.
                                                                                                                  • A user requests it straight from WhatsApp’s server.
                                                                                                                  • The user trusts the app, because they requested it over HTTPS and the certificate belongs to WhatsApp. So they got it directly from the people that made the app and no-one could interfere.

                                                                                                                  With Cloudflare before the solution described in the article:

                                                                                                                  • WhatsApp releases a new version of their app and makes that available through the Cloudflare CDN.
                                                                   • A user requests it from this CDN, over HTTPS, but doesn’t see WhatsApp’s certificate. Instead it is from Cloudflare. Hmm… Who are these people? Can they be trusted to serve the right app? Are their processes secure and in order?

                                                                                                                  With the solution described:

                                                                                                                  • WhatsApp releases a new version of their app, makes that available through the Cloudflare CDN, also uploads the hash to that special hash thingie
                                                                                                                  • A user requests it… yadayada. Can these Cloudflare people be trusted?
                                                                                                                  • Yes! Because beforehand, the user installed a Cloudflare extension and that extension now checks whether the hash from the app matches with what the Cloudflare hash thingie says it should be.

                                                                                                                  So what I mean is: if you suspect that Cloudflare could be compromised in any way, why would this be better?

                                                                                                                  1. 6

                                                                                                                    In the proposed [and current] solution:

                                                                                                                    1. WhatsApp releases a new version of their app, updates resources on their servers/CDN.
                                                                                                                    2. WhatsApp notifies CF of the new version using the dedicated CF endpoint.
                                                                                                                    3. A user requests WhatsApp from WhatsApp’s server. The request is served straight from Whatsapp servers/CDN.
                                                                                                                    4. The (chrome) extension in user’s browser verifies the web app downloaded in the previous step via the dedicated CF endpoint.

                                                                     So steps (1) and (3) are business as usual. Steps (2) and (4) add a further level of security/verification: to deliver a forged/tampered version of WhatsApp, the attacker would have to compromise both WhatsApp and the CF endpoint. This works under the assumption that the system (from WhatsApp) pushing the hashes to the CF endpoint is somewhat separate from the system (from WhatsApp) serving the web app.
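
                                                                     As a sketch of steps (2) and (4) (illustrative only, not the extension’s actual code; assumes the sha2 crate and a hex-encoded manifest entry):

                                                                       use sha2::{Digest, Sha256};

                                                                       // Step 4, boiled down: hash what the browser actually received and
                                                                       // compare it to what the publisher pushed to the third party in step 2.
                                                                       fn asset_matches(published_hex: &str, asset_bytes: &[u8]) -> bool {
                                                                           let digest = Sha256::digest(asset_bytes);
                                                                           let computed: String = digest.iter().map(|b| format!("{b:02x}")).collect();
                                                                           computed == published_hex.to_ascii_lowercase()
                                                                       }

                                                                       fn main() {
                                                                           // SHA-256 of the bytes "hello", standing in for a manifest entry.
                                                                           let published = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824";
                                                                           assert!(asset_matches(published, b"hello"));
                                                                       }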

                                                                                                                    1. 1

                                                                       I believe their solution achieves tamper resistance against active man-in-the-middle attacks on TLS (probably not in most people’s threat model). But they say it’s for an “at risk” user population 🤷‍♂️

                                                                                                                  2. 2

                                                                                                                    I’m just as confused. Once they have an extension to verify the right code is being downloaded, why would CF need to do anything special? It’s just a file mapping the version to a hash, stored in some service - it could live in a tweet if they wanted, as long as the hashes and the code come from different sources/paths.