1. 2

    s/respectfully/respectively/

    1. 1

      ta

    1. 6

      I suggested the “rant” tag. While I share some of the concerns outlined (phone numbers, cryptocurrency crap, single point of control), I can’t really see why this article strays from solid reasoning into accusations and name calling. If the author reads this, I’d recommend rewriting it so it can be taken more seriously.

      1. 5

        I agree. I find it incredibly disingenuous that the author does not mention Signal’s Sealed Sender. One of the only serious technical critiques in this article is that you leak your social graph to Signal, but Sealed Sender actually makes this false in most cases. Maybe that’s harsh, and maybe they just didn’t know, but I feel like if you’re going to make that one of your core arguments you should have done your homework.

        To be clear, Signal isn’t perfect. E.g. in theory they’ve been making changes to lay the groundwork for username-only accounts for a while, but it’s been a long time: where is it? The parent comment has listed a lot of these issues. But Signal is a lot better than this article implies.

      1. 2

        That looks fun. I’m going to swing by. Congrats on the launch.

        1. 4

          This is great! I remember Mozilla making an attempt at time-travel debugging in the browser; I wonder where that effort went.

          It’s also great that they’re describing how their recorder works (here and here). As far as I understand, their approach is much less powerful than the one taken by RR (“Replay’s recorder only works in cooperation with the program being recorded: recordings made with an unmodified program that hasn’t been adapted to use the recorder will almost certainly fail to replay.”, while RR doesn’t need the debuggee’s cooperation), but their integration with interpreted languages’ runtimes is a serious advantage (although I think Pernosco, which is based on RR, has integration with JS engines).

          1. 3

            These are ex-Mozillians. It’s the continuation.

            1. 2

              That’s what I suspected, thanks for confirming :). Mozilla is the source of a ton of cool tech that came out in the 2010s (Pernosco, Rust, Coqui…). I wonder if we will see the same thing happen in the next decade, now that it has refocused on Firefox.

          1. 9

            Citizen Lab forwarded the artifacts to Apple on Tuesday, September 7.

            Six days later, Apple released a fix. That’s a pretty impressive turnaround for a big company, especially for a patch to a vital low-level component like CoreGraphics, on two operating systems.

            1. 5

              Seven days is good, but it’s also the longest that, e.g., Google Project Zero would tolerate for bugs that are being exploited in the wild.

              A former Mozilla exec joked “ten f-ing days” in 2007. We’ve since improved to “within 24 hours” and kept that next-day promise both for contests like Pwn2Own and for real-world attacks.

              Really hoping this will become an industry norm. We should hold the most valuable companies accountable to their responsibilities. Distributed teams allow a project to continue around the clock without burning anyone out, and CI/CD tooling can help maintain a certain level of release readiness.

              1. 2

                It’s a little harder to patch, test and release an OS than just an app.

                1. 1

                  Totally. Their >10k employees also outsize the ~500 people working on the app.

              2. 2

                Searching for jbig2decode mostly finds prior PDF vulnerabilities. I wonder if Apple tried fixing the vulnerabilities or just removed support for parts of their PDF decoder that have the vulnerability. Given that the attack is in the wild I imagine they took the shortest, safest path to get a patch out.

              1. 3
                1. 18

                  Neat idea. I’m not sure this is a captcha, but rather just a rate limiter.

                  1. 13

                    So much this. A proof-of-work scheme will up the ante, but not in the way you think: people need to be able to do the work on the cheap (unless you want to put mobile users at a significant disadvantage), and malware/spammers can outscale you significantly.

                    Ever heard of parasitic computing? TL;DR: it’s what kickstarted Monero. Any website (or an ad on that website) can run arbitrary code on the device of every visitor. You can even shard the work and keep it relatively low-profile if you have the scale. Even if pre-computing is hard, with ad networks and live action during page views an attacker can get challenges solved just-in-time.
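
                    To make that concrete, here’s a rough sketch of the kind of hash-based challenge such schemes use. The 20-bit difficulty, SHA-256 and the function names are arbitrary choices for illustration, not any particular implementation:

                    ```python
                    import hashlib
                    import secrets

                    def make_challenge(difficulty_bits: int = 20) -> tuple[str, int]:
                        # server side: hand the client a random nonce and a difficulty
                        return secrets.token_hex(16), difficulty_bits

                    def solve(nonce: str, difficulty_bits: int) -> int:
                        # client side: brute-force a counter until the hash falls below the target
                        target = 1 << (256 - difficulty_bits)
                        counter = 0
                        while int.from_bytes(hashlib.sha256(f"{nonce}:{counter}".encode()).digest(), "big") >= target:
                            counter += 1
                        return counter

                    def verify(nonce: str, difficulty_bits: int, counter: int) -> bool:
                        # server side: a single hash checks the client's work
                        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
                        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
                    ```

                    The asymmetry being discussed is exactly here: solving costs the client roughly a million hashes on average, while verifying costs the server one.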

                    1. 9

                      The way I look at it, it’s meant to defeat crawlers and spam bots; they attempt to cover the whole internet and want to spend 99% of their time parsing and/or spamming. But if this got popular enough to prompt bot authors to take the time to actually implement WASM/WebWorkers or a custom Scrypt shim for it, they might still end up spending 99% of their time hashing instead.

                      Something tells me they will probably give up and start knocking on the next door down the lane. And if I can force bot authors to invest in a $1M USD+/year black hat “distributed computing” project so they can more effectively spam Cialis and Michael Kors handbag ads, maybe that’s a good thing? I never made $1M a year in my life, probably never will, I would be glad to be able to generate that much value tho.

                      If it comes down to a targeted attack on a specific site, captchas can already be defeated by captcha farm services or various other exploits (https://twitter.com/FGRibreau/status/1080810518493966337). Defeating that kind of targeted attack is a whole different problem domain.

                      This is just an alternate approach to put the thumb screws on the bot authors in a different way, without requiring the user to read, stop and think, submit to surveillance, or even click on anything.

                      1. 9

                        This sounds very much like greytrapping. I first saw this in OpenBSD’s spamd: the first time you got an SMTP connection from an IP address, it would reply with a TCP window size of 1, one byte per second, with a temporary failure error message. The process doing this reply consumed almost no resources. If the connecting application tried again in a sensible amount of time then it would be allowed to talk to the real mail server.

                        When this was first introduced, it blocked around 95% of spam. Spammers were using single-threaded processes to send mail and so it also tied each one up for a minute or so, reducing the total amount of spam in the world. Then two things happened. The first was that spammers moved to non-blocking spam-sending things so that their sending load was as small as the server’s. The second was that they started retrying failed addresses. These days, greytrapping does almost nothing.
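
                        For illustration, a rough sketch of that retry rule (ignoring the TCP-level stuttering; the time windows are made-up values, not spamd’s actual defaults):

                        ```python
                        import time

                        # maps (client_ip, sender, recipient) -> timestamp of the first delivery attempt
                        first_seen: dict[tuple[str, str, str], float] = {}

                        MIN_RETRY = 60          # made-up: must wait at least a minute before retrying
                        MAX_RETRY = 4 * 3600    # made-up: the entry expires after a few hours

                        def smtp_decision(client_ip: str, sender: str, recipient: str) -> str:
                            """Temp-fail the first attempt; let a sensible retry through to the real server."""
                            key = (client_ip, sender, recipient)
                            now = time.time()
                            if key not in first_seen:
                                first_seen[key] = now
                                return "451 temporary failure, try again later"
                            if MIN_RETRY <= now - first_seen[key] <= MAX_RETRY:
                                return "250 ok, pass through to the real mail server"
                            return "451 temporary failure, try again later"
                        ```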

                        The problem with any proof-of-work CAPTCHA system is that it’s asymmetric. CPU time on botnets is vastly cheaper than CPU time purchased legitimately. Last time I looked, it was a few cents per compromised machine and then as many cycles as you can spend before you get caught and the victim removes your malware. A machine in a botnet (especially one with an otherwise-idle GPU) can do a lot of hash calculations or whatever in the background.

                        Something tells me they will probably give up and start knocking on the next door down the lane. And if I can force bot authors to invest in a $1M USD+/year black hat “distributed computing” project so they can more effectively spam Cialis and Michael Kors handbag ads, maybe that’s a good thing?

                        It’s a lot less than $1M/year that they spend. All you’re really doing is pushing up the electricity consumption of folks with compromised computers. You’re also pushing up the energy consumption of legitimate users as well. It’s pretty easy to show that this will result in a net increase in greenhouse gas emissions, it’s much harder to show that it will result in a net decrease in spam.

                        1. 2

                          These days, greytrapping does almost nothing.

                          postgrey easily kills at least half the SPAM coming to my box and saves me tonnes of CPU time

                          1. 1

                            The problem with any proof-of-work CAPTCHA system is that it’s asymmetric. [botnets hash at least 1000x faster than the legitimate user]

                            Asymmetry is also the reason why it does work! Users probably have at least 1000x more patience than a typical spambot.

                            I have no idea what the numbers shake out to / which is the dominant factor, and I don’t really care; the point is that I can still make the spammers’ lives hell & get the results I want right now (humans only past this point), even though I’m not willing to let Google/Cloudflare fingerprint all my users.

                            If botnets solving captchas ever becomes a problem, wouldn’t that be kind of a good sign? It would mean the centralized “big tech” panopticons are losing traction. Folks are moving to a more distributed internet again. I’d be happy to step into that world and work forward from there 😊.

                          2. 5

                            captchas can already be defeated by […] or various other exploits (https://twitter.com/FGRibreau/status/1080810518493966337)

                            An earlier version of Google’s captcha was automated in a similar fashion: they scraped the images and did a Google reverse image search on them!

                            1. 3

                              I can’t find a link to a reference, but I recall a conversation with my advisor in grad school about the idea of “postage” on email where for each message sent to a server a proof of work would need to be done. Similar idea of reducing spam. It might be something in the literature worth looking into.

                              1. 3

                                There’s Hashcash, but there are probably other systems as well. The idea is that you add a X-Hashcash header with a comparatively expensive hash of the content and some headers, making bulk emails computationally expensive.

                                It never really caught on; I used it for a while years ago, but I haven’t received an email with this header since 2007 (I just checked). According to the Wikipedia page it’s used in Bitcoin nowadays, but it started out as an email thing. Kind of ironic, really.
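
                                For illustration, a simplified sketch of the idea; the real stamp format has more fields and a date check, and 20 bits is the commonly cited default difficulty:

                                ```python
                                import hashlib
                                from itertools import count

                                def leading_zero_bits(digest: bytes) -> int:
                                    bits = 0
                                    for byte in digest:
                                        if byte == 0:
                                            bits += 8
                                        else:
                                            bits += 8 - byte.bit_length()
                                            break
                                    return bits

                                def mint_stamp(resource: str, bits: int = 20) -> str:
                                    # sender: burn CPU until the SHA-1 of the stamp has `bits` leading zero bits
                                    # ("date" and "rand" are placeholders; the real format fills them in)
                                    for counter in count():
                                        stamp = f"1:{bits}:date:{resource}::rand:{counter}"
                                        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
                                            return stamp

                                def check_stamp(stamp: str, bits: int = 20) -> bool:
                                    # receiver: one hash verifies the work (a real check also inspects date/resource)
                                    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits

                                # an email client would then add a header along the lines of:
                                # X-Hashcash: <mint_stamp("alice@example.com")>
                                ```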

                                1. 1

                                  “Internet Mail 2000” from Daniel J. Bernstein? https://en.m.wikipedia.org/wiki/Internet_Mail_2000

                              2. 2

                                 That is why we can’t have nice things… It is really heartbreaking how almost every technological advance can and will be turned to something evil.

                                1. 1

                                  The downsides of a global economy for everything :-(

                              3. 3

                                Captchas are essentially rate limiters too, given enough determination from abusers.

                                1. 4

                                   Maybe. The difference I would draw is that a captcha attempts to assert that the user is human, whereas this scheme does not.

                                  1. 2

                                    I mean, objectively, yes. But, since spammers are automating passing the “human test” captchas, what is the value of that assertion? Our “human test” captchas come at the cost of impeding actual humans, and are failing to protect us from the sophisticated spammers, anyway. This proposed solution is better for humans, and will still prevent less sophisticated attackers.

                                     If it can keep me from being frustrated that there are 4 pixels on the top-left tile that happen to actually be part of the traffic light, then by all means, sign me the hell up!

                              1. 2

                                on Gnome:

                                1. 1

                                   Anyone who has ever been on the receiving or sending end of a vulnerability disclosure knows there is a lot of room for improvement. There’s a lot of value in studying the process, identifying common issues and devising widely applicable suggestions.

                                  1. 6

                                    Disgusting work. And the security angle is pure BS. Its entire purpose is to deny us our computing freedom. To “protect” the code from us like we’re some adversary.

                                    This will only be used by scammers and those who now try to block right clicks with an alert()

                                    1. 6

                                      I don’t think this is about how to make your website more secure. This is a “hey, here’s shit evil people could do, you need to be aware of it” kind of thing.

                                      1. 5

                                        Disgusting work.

                                        I think this is great, assuming browser vendors are willing to fix it. @freddyb Do you know if firefox is vulnerable to this too?

                                        1. 3

                                          I think treating this sort of thing as a vulnerability that can be fixed is a losing battle.

                                          1. 4

                                             Why? Browser vendors are already implementing pretty good JS environment segregation for WebExtensions; I can’t imagine why they wouldn’t be able to do the same for debuggers.

                                            1. 2

                                               I think those issues can be treated as fixable, but I don’t think they will all be fixed. Most of the things in part 2 are about calling into site code (e.g., overridden prototypes), which I consider fixable. But some of the things posted here (and in part 1) are hard to resolve, especially when they cause additional second-order side effects like source-map URLs, the layout shift that comes from enabling DevTools, etc.

                                              I’ll try to get a definite answer from the team though :)

                                            2. 1

                                              if that’s the case then it’s an admission of defeat

                                        1. 2

                                          To be honest, reading this, I gained more sympathy towards the Apple arguments. The last thing I want is browser vendors ramming more “standards” through and circumventing release policies to do so.

                                          Web developers want browser diversity, as long as that browser has exactly the same policies as Chrome.

                                          1. 2

                                            The RAM thing is neat, I’ll give you that. But do you remember how bad web and browser security was in the late 90s? A single website could take over your whole computer and everyone was vulnerable because you basically only had IE. That’s the browser security story of iOS and it’s also the OS security on iOS because their sandbox is subpar.

                                            I strongly believe people would be better off if there was true choice on iOS. But hey, I’m totally biased, I’ll admit.

                                          1. 28

                                             I daydream about writing a web browser. Make network requests, parse, render. A modern, competitive browser would be more than a lifetime of work, but getting something that can halfway render an ugly version of simpler pages (say, this site) seems like a fun couple-month project.
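
                                             For a taste of how small the first step is, here’s a toy fetch-and-dump pass: no CSS, no layout, just the page’s visible text (the URL is only an example):

                                             ```python
                                             from html.parser import HTMLParser
                                             from urllib.request import urlopen

                                             class TextDumper(HTMLParser):
                                                 """Crudely 'render' a page by printing its visible text, skipping script/style."""
                                                 def __init__(self):
                                                     super().__init__()
                                                     self.skip_depth = 0

                                                 def handle_starttag(self, tag, attrs):
                                                     if tag in ("script", "style"):
                                                         self.skip_depth += 1

                                                 def handle_endtag(self, tag):
                                                     if tag in ("script", "style") and self.skip_depth:
                                                         self.skip_depth -= 1

                                                 def handle_data(self, data):
                                                     if not self.skip_depth and data.strip():
                                                         print(data.strip())

                                             html = urlopen("https://lobste.rs/").read().decode("utf-8", "replace")
                                             TextDumper().feed(html)
                                             ```

                                             Everything after that (CSS, box layout, text shaping) is where the lifetime of work starts.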

                                            1. 8

                                               You might be a little late to the party, but SerenityOS is building just that. I forget what it’s called, but they are building their own web browser from scratch. Maybe you can still find some interesting issues to work on (I don’t know if it can render lobste.rs currently). At least it may serve as inspiration and proof that it is possible :)

                                              1. 1

                                                 Same! I don’t believe it has to be “modern/competitive”; https://suckless.org/project_ideas/ recommends

                                                … a very good knowledge in web standards and how to strip them down to the suckless level

                                              2. 5

                                                Someone showed up in a chat I’m in and started doing just that. For a school project, they said.

                                                 I was wailing and warning and explaining, but now they can parse HTML and CSS and have started doing layout. Youth must have been fun. ;)

                                                Fwiw, we pointed them to http://browser.engineering/ and https://htmlparser.info/ and https://limpet.net/mbrubeck/2014/08/08/toy-layout-engine-1.html

                                                1. 5

                                                  Sounds like netsurf

                                                  1. 5

                                                    Or dillo, or links. I really loved using the latter under NetBSD on my old underpowered iBook G4. That machine was so slow that Firefox was a total resource hog (and it occasionally had weird issues where colors would be inverted due to some endian issue). Dillo development seems to have stagnated unfortunately - I thought it was a really exciting project when I first learned about it (circa 2003).

                                                  2. 3

                                                    I’ve actually kinda done that, back in 2013ish. It was able to more-or-less render my old website http://arsdnet.net/htmlwidget4.png and even tried to do dlang.org http://arsdnet.net/htmlwidget3.png

                                                    I wrote a dom.d module that parses all kinds of trash html and can apply css and do form population etc, and a script.d that can run code to play with that dom, and simpledisplay.d that creates windows… so then thought maybe it wouldn’t be too hard to actually render some of that, so I slapped together https://github.com/adamdruppe/arsd/blob/master/htmlwidget.d too. But like I didn’t have a very good text render function at the time, which is why it is a particular font with some weird word wrapping (it wanted each element to be a single rectangle). The script never actually worked here.

                                                     The table algorithm is complicated too, so I just did a simplified version. Then of course the CSS only did basics; like, the float:left there was the bare minimum to make that one site work. But still… it kinda worked. You could even type in forms.

                                                     I’m tempted to revisit it some day. I’ve expanded my library support a lot since then, could prolly do a better job. Realistically though, links and dillo would still be better, lol. But still, it is kinda cool that I got as far as I did. The only libraries used if you go all the way down are basic win32/xlib and of course BSD sockets. The rest is my stuff (all in that same repo btw).

                                                    1. 2

                                                      Did you see https://serenityos.org/ already? They are re-writing an OS and a Browser from scratch. It looks like a lot of fun.

                                                    1. 2

                                                      Feel like asking the author if they have ever heard of pandoc… But maybe I misunderstand the project.

                                                      1. 6

                                                        I want to paraphrase this thread, because this is important:

                                                        • This is all normal. We’ve all felt it once or even much more often.
                                                         • Find something to turn your brain off. For some it’s meditation, cardio, music, or socializing. It doesn’t matter what; find something that works.
                                                        • Your work’s output is better when you are rested.
                                                        • No matter how much you give, the company can always ask for more.
                                                         • Set boundaries to protect yourself.

                                                        stay (mentally) safe :)

                                                            1. 3

                                                              The blog post actually mentions this:

                                                              As one of the trailblazers of location-based online dating, Tinder was inevitably also one of the trailblazers of location-based security vulnerabilities. Over the years they’ve accidentally allowed an attacker to find the exact location of their users in several different ways.

                                                            1. 6

                                                              Also a great workaround for paywalls, as long as you click early enough… @frenkel: What’s up with the huge spaces next to the apostrophes? Screenshot at https://imgur.com/K7lF5dU

                                                              1. 3

                                                                If you do it too late, just reload the page while still in Reader Mode. You usually get the full article.

                                                                1. 1

                                                                  If you’re on firefox, Open in Reader View can be wonderful.

                                                                  1. 1

                                                                    Sadly not available for Android Fx either, but yes.

                                                                2. 2

                                                                  Wow, that’s weird, thanks for the screenshot. What browser and OS are you using? It seems a fallback font is used, maybe it’s a font issue.

                                                                  1. 6

                                                                    For me it’s because the font-family is defined as Microsoft YaHei,微软雅黑,宋体,STXihei,华文细黑,Arial,Verdana,arial,sans-serif. This looks to be coming from your Jekyll theme. YaHei is a Simplified Chinese font, so it’s not really great for displaying content primarily written using the Roman alphabet.

                                                                    1. 1

                                                                      Thank you, I’ve removed YaHei and it fixes the problem indeed. A hard-refresh might be required.

                                                                    2. 1

                                                                      Firefox Nightly, Ubuntu Linux. It falls back to sans-serif.

                                                                      1. 2

                                                                        That’s weird, the problem was caused by Microsoft YaHei. Is it gone now? A hard-refresh might be required.

                                                                        1. 1

                                                                          confirmed fixed. thanks!

                                                                      2. 1

                                                                        Same problem on Chrome and Ubuntu MATE.

                                                                        1. 1

                                                                          Should be fixed! A hard-refresh might be required.

                                                                    1. 7

                                                                       Someone did a very good dive into the archives (starting here: https://twitter.com/sirdarckcat/status/1429833886385725441) and it looks like the solution was described years before the filing. IANAL, but I hope there’s enough ammunition there to invalidate the patent.

                                                                      1. 2
                                                                        1. 1

                                                                           I don’t think (and I am definitely not a patent lawyer) that that is necessarily “prior art” in the strictest sense: that’s about controlling whole script files. Their claims are that their server-generated secrets “reconfigure” the environment for the scripting language according to those secrets, allowing different “scopes” of access to different “identities”.

                                                                          But I wouldn’t be surprised if there’s more prior art out there - it’s not like permissioned conditional execution of chunks of code was a new idea in 2011 (although the replacing of functions to do that might be?)

                                                                      1. 2

                                                                         Is there any reason for randomizing, or even rotating, the CA? I don’t understand the reasoning for it. It seems entirely unrelated to the “Let’s Encrypt can go down” scenario.

                                                                        1. 12

                                                                          If you always use LetsEncrypt, that means you won’t ever see if your ssl.com setup is still working. So if and when LetsEncrypt stops working, that’s the first time in years you’ve tested your ssl.com configuration.

                                                                          If you rotate between them, you verify that each setup is working all the time. If one setup has broken, the other one was tested recently, so it’s vastly more likely to still be working.
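
                                                                          For illustration, the rotation could be as simple as something like this. The directory URLs are the public ACME endpoints for Let’s Encrypt and ZeroSSL, but treat the whole thing as a sketch rather than a recommendation:

                                                                          ```python
                                                                          import random

                                                                          # rotating between directories keeps both setups exercised
                                                                          ACME_DIRECTORIES = [
                                                                              "https://acme-v02.api.letsencrypt.org/directory",  # Let's Encrypt
                                                                              "https://acme.zerossl.com/v2/DV90",                # ZeroSSL
                                                                          ]

                                                                          directory = random.choice(ACME_DIRECTORIES)
                                                                          # then hand `directory` to whatever ACME client you use
                                                                          # (for example via a --server style option, if your client has one)
                                                                          ```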

                                                                          1. 2

                                                                            when LetsEncrypt stops working

                                                                             That’s how I switched to ZeroSSL. I was tweaking my staging deployment, relying on a Lua/OpenResty ACME lib running in nginx, and Let’s Encrypt decided to rate-limit me for something ridiculous like several cert request attempts. I’ve had zero issues with ZeroSSL (pun intended). Unpopular opinion: Let’s Encrypt sucks!

                                                                            1. 5

                                                                              LE does have pretty firm limits; they’re very reasonable (imo) once you’ve got things up and running, but I’ve definitely been burned by “Oops I misconfigured this and it took a few tries to fix it” too. Can’t entirely be mad – being the default for ACME, no doubt they’d manage to get a hilariously high amount of misconfigured re-issue certs if they didn’t add a limit on there, but between hitting limits and ZeroSSL having a REALLY convenient dashboard, I’ve been moving over to ZeroSSL for a lot of my infra.

                                                                            2. 2

                                                                               But he’s shuffling during the request phase. Wouldn’t it make more sense to request from multiple CAs directly and have more than one cert per domain, instead of ending up with half your servers working?

                                                                              I could see detecting specific errors and recovering from them, but this doesn’t seem to make sense to me :)

                                                                            3. 6

                                                                              It’s probably not a good idea. If you have set up a CAA record for your domain for Let’s Encrypt and have DNSSEC configured then any client that bothers to check will reject any TLS certificate from a provider that isn’t Let’s Encrypt. An attacker would need to compromise the Let’s Encrypt infrastructure to be able to mount a valid MITM attack (without a CAA record, they need to compromise any CA, which is quite easy for some attackers, given how dubious some of the ‘trusted’ CAs are). If you add ssl.com, then now an attacker who can compromise either Let’s Encrypt or ssl.com can create a fake cert for your system. Your security is as strong as the weakest CA that is allowed to generate certificates for your domain.

                                                                              If you’re using ssl.com as fall-back for when Let’s Encrypt is unavailable and generate the CAA records only for the cert that you use, then all an attacker who has compromised ssl.com has to do is drop packets from your system to Let’s Encrypt and now you’ll fall back to the one that they’ve compromised (if they compromised Let’s Encrypt then they don’t need to do anything). The fail-over case is actually really hard to get right: you probably need to set the CAA record to allow both, wait for the length of the old record’s TTL, and then update it to allow only the new one.

                                                                               This matters a bit less if you’re setting up TLSA records as well (and your clients use DANE), but then the value of the CA is significantly reduced. Your DNS provider (which may be you, if you run your own authoritative server) and the owner of the SOA record for your domain are your trust anchors.
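
                                                                               If you want to check which CAs a domain currently allows, a quick query with dnspython works; the domain name below is just a placeholder:

                                                                               ```python
                                                                               import dns.resolver  # pip install dnspython (>= 2.0)

                                                                               # raises NXDOMAIN/NoAnswer if the domain publishes no CAA records
                                                                               for rdata in dns.resolver.resolve("yourdomain.example", "CAA"):
                                                                                   # tag is e.g. "issue" or "issuewild"; value names the CA allowed to issue
                                                                                   print(rdata.flags, rdata.tag, rdata.value)
                                                                               ```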

                                                                              1. 3

                                                                                There isn’t any reason. The author says they did it only because they can.

                                                                                1. 2

                                                                                  I think so. A monoculture is bad in this case. LE never wanted to be the stewards of ACME itself, instead just pushing the idea of automated certificates forward. Easiest way to prove it works is to do it, so they did. Getting more parties involved means the standard outlives the organization, and sysadmins everywhere continue to reap the benefits.

                                                                                  1. 2

                                                                                    To collect expiration notification emails from all the CAs! :D

                                                                                    1. 2

                                                                                      The article says “Just because I can and just because I’m interested”.

                                                                                    1. 8

                                                                                       I remember learning about other CAs that support ACME several months back from a Fediverse admin. I’m really glad there are alternatives. Mozilla did the right thing by making the entire process open. I feel like this is more important than ever.

                                                                                       Mozilla has had financial troubles, and although it’s unlikely they would lose funding for LetsEncrypt, they certainly could. Second, Mozilla has made a lot of questionable political decisions, and has made it clear they care a lot about politics internally within the non-profit. Having alternatives is essential for the day when Mozilla says, “We refuse to grant you a TLS certificate because of what’s hosted on your domain.”

                                                                                      1. 15

                                                                                         Mozilla helped bootstrap Let’s Encrypt with money, staff and expertise, but Let’s Encrypt has been a completely independent entity for a while now.

                                                                                        1. 6

                                                                                          Mozilla helped, but Linux Foundation did more in terms of staffing.

                                                                                          Source: Was hired by Linux Foundation to work on LE, back in 2016.

                                                                                        2. 9

                                                                                           Mozilla does not own Let’s Encrypt directly; it’s a non-profit.

                                                                                          The EFF is a sponsor, so denying someone a cert for political reasons will be a hard sell to them.