1. 40

    > One of the advantages of being a service is that bugs can go from reported to fixed in minutes to hours instead of months. The industry standard time allowed to deploy a fix for a bug like this is usually three months;

    Ahh, there’s the marketing blurb. Not mentioned: all the people who would not have been affected had they not used a centralized service.

    1. 6

      Not to mention the people affected who had no idea Cloudflare was even a factor in the services they were using.

      1. 4

        That’s true, although I don’t think it’s practical for most (any) people to evaluate the tech stacks of the sites they use. Who decides to buy a Fitbit based on whether it’s Rails or Django or AWS or Azure or whatever?

        The bigger concern is that even as a Cloudflare customer who didn’t use the bad features, you were still exposed to the damage. This wasn’t a bad Rails gem that you could have disabled. Not using every Cloudflare feature is not sufficient to protect you from flaws in every Cloudflare feature.

    2. 16

      Reading the review, it sounds like you can probably go through Google’s cache or the Internet Archive for the affected pages, and find random (private) HTTPS sessions in the public caches.

      > I’m finding private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We’re talking full https requests, client IP addresses, full responses, cookies, passwords, keys, data, everything.

      Unbelievable.

      1. 1

        I believe they waited to make this public until major search engines had purged this data from their caches. So you could have, but can’t anymore.

        1. 13

          Google is actively purging data, but at the time of publication there was still secret data readily discoverable. To say nothing of all the other “not major” search engines, which also have caches. Nobody knows by whom or where this data has been cached.

          1. 4

            Nope. Read Tavis’s summary in the Google report; there have also been reports on Twitter of data being found in search engine caches.

            1. 4

              All the other search engines and services that cache things, like Yandex, Baidu, and the NSA, are probably not so eager to purge their caches/loot.

          2. 5

            Not to be too much of a tinfoil hat or anything, but if the relevant 3-letter agencies knew about this before disclosure, it must have been an absolute gold-mine. Tinfoil aside, I’d like to see a list of affected sites, even if limited to a Top 100, so that users can make a judgement call as to whether they want to update their credentials.

            How confident are you that you don’t use any services that are backed by CloudFlare, were afflicted, and had sensitive data relevant to your usage exposed? I’m not.

              1. 3

                This list needs a lot more upvotes.

                The most frustrating part about this entire debacle is that the breadth and depth of (necessary) modern caching is beyond the control or even knowledge of the average Joe on the Internet. What I mean by that is that, as (for example) a customer of DigitalOcean, I’ve had my credentials and activities there compromised by a service I wasn’t even aware was a factor in their operation.

                I’ve reset my password there, but what other data is already out there from my sessions since the fall?

                And now we face the arduous task of resetting however many passwords we have on however many services. Some of which won’t even be affected.

                To be clear, I’m not whining with malice at Cloudflare; shit happens. I’m just reeling at the scale and reach of this.

              2. 5

                Does it really still count as tinfoil to think that 3-letter agencies are involved when there is clear evidence this is something they’re highly involved/interested in? I don’t think you’re being tinfoil as much as just making a good point.

                1. 4

                  Most TLAs are interested in specific targets. They may collect all the traffic for later analysis, but they’re still after targeted individuals. A random collection of dating-site messages and Fitbit credentials doesn’t help them; it’s probably too fragmentary to be useful.

                  How would you use this data? If it were easy to find your arch-nemesis in the dump, you could heap shit on them. But if all you’ve got is some Uber trip that a random dude in Kansas City took, I guess you could grief him, but to what end?

                  1. 6

                    > They may collect all the traffic for later analysis, but they’re still after targeted individuals.

                    Hmm, the idea that they only go after specific targets doesn’t sound right. The XKeyScore stuff showed them running queries against aggregated data, such as keywords entered into search engines.

                    1. 1

                      That’s a reasonable point. They’re also trying to find targets. Again, though, I’m not sure how much a leak like this helps them do that.

              3. 5

                So, it all boils down to a pointer-manipulation error of the kind we’ve been able to prevent since 1961. There are even open languages and tools for this. There are also secure parser generators. Just another example of my meme of companies using tools that generate security problems by default instead of those that stop them by default.
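
                For the curious, Cloudflare’s post-mortem said the root cause was exactly this: reaching the end of a buffer was checked with an equality test, and a pointer was able to step past the end. A minimal sketch of that failure mode (my own toy code, not theirs):

                ```c
                #include <stdio.h>

                /* Toy scanner illustrating the bug class: the end-of-buffer
                 * test uses == on an advancing pointer. If the pointer can
                 * ever step past pe (as reportedly happened with an
                 * unterminated attribute at the end of a buffer), the test
                 * never fires and the loop walks into adjacent heap memory. */
                static void scan(const char *p, const char *pe) {
                    for (;;) {
                        if (++p == pe)  /* buggy; the safe test is >= */
                            return;
                        putchar(*p);    /* past pe, this would emit whatever lives next on the heap */
                    }
                }

                int main(void) {
                    const char buf[] = "<img src=";
                    scan(buf, buf + sizeof(buf) - 1);  /* well-behaved input: p lands exactly on pe */
                    return 0;
                }
                ```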

                1. 4

                  Except that the whole reason this bug was exposed and became active is that they were migrating away from the bad tool to much better tooling. This is where operational reality counters any amount of supposed theory: you have to be able to get there from here and accept that sometimes the journey will be painful.

                  CF are on record as having switched much of their newer tooling to Golang, which avoids this issue entirely; so they’re obviously aware of the problem and are working to migrate away.

                  1. 2

                    “This is where operational reality counters any amount of supposed theory: you have to be able to get there from here and accept that sometimes the journey will be painful.”

                    Part of my comment was that secure parser generators exist. The journey toward secure parsing is less painful when using or improving on those.

                    “CF are on record as having switched much of their newer tooling to Golang, which avoids this issue entirely”

                    There’s a positive. Possibly the best choice when balancing simplicity, speed, and ecosystem.

                2. 3

                  It seems that using Cloudflare for SSL traffic was obviously an awful idea. Can someone correct me if my understanding is wrong?

                  From Cloudflare’s documentation https://www.cloudflare.com/ssl/keyless-ssl/ I deduce that

                  1. Traditionally, Cloudflare customers simply gave Cloudflare their key. Awful idea.

                  2. Then Cloudflare introduced “Keyless SSL”. The customer keeps their key private but happily signs anything that Cloudflare asks it to. Awful idea.

                  How is using Cloudflare for SSL traffic anything other than terribly negligent? If I see SSL traffic signed with abc123.com’s certificate, I assume that no one in the middle is intercepting that traffic before it gets to abc123, but with Cloudflare in the middle the certificate signature is simply a lie.

                  Awful.

                  Someone please tell me I’m wrong. This is just grim. Not (just) the bug. Cloudflare is grim.
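
                  To make point 2 concrete, the key server’s core job reduces to roughly the sketch below. The function name and wire details are hypothetical (the real Keyless SSL protocol at least runs over mutually authenticated TLS between the edge and the key server), and OpenSSL’s RSA_sign stands in for the actual signing code:

                  ```c
                  #include <openssl/rsa.h>
                  #include <openssl/objects.h>

                  /* Hypothetical key-server handler: the edge sends a 32-byte
                   * digest from a TLS handshake and gets it signed with the
                   * site's private key. Nothing here can verify what the digest
                   * actually commits to, so whoever can reach this endpoint
                   * holds a signing oracle for the site's identity. */
                  int sign_for_edge(RSA *site_key,
                                    const unsigned char digest[32],
                                    unsigned char *sig, unsigned int *sig_len) {
                      return RSA_sign(NID_sha256, digest, 32, sig, sig_len, site_key);
                  }
                  ```

                  So even though the key never leaves the customer, the edge can still present a valid handshake for the domain, which is the “lie” described above.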

                    1. 1

                      ed.: This was merged; the original article pointed to Tavis’s bug report, here: https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

                    2. 2

                      A friend of mine posted that it’s a good idea to log out of every site and change your password. See you all on the other side.

                      1. 1

                            Wow, Ragel looks sweet. Why haven’t I used Ragel? What is wrong with me, writing those things by hand?

                            1. 2

                              Hand-written parsers often have vulnerabilities in them. Using parser or protocol generators gives you better security and productivity. A few to look into are Hammer, its successor Nail, and Cap’n Proto. Tools such as CRYPTOL synthesize implementations from specs of algorithms (mainly crypto). There are even more in the formal-verification community, but the ones above are usable by the masses.
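
                              To make that concrete, here is roughly what Hammer usage looks like in C. The combinators (h_ch_range, h_sequence, h_parse, etc.) are from the project’s README; the toy grammar is mine:

                              ```c
                              #include <stdint.h>
                              #include <stdio.h>
                              #include <hammer/hammer.h>

                              int main(void) {
                                  /* Toy "key=value" grammar, lowercase ASCII on both
                                   * sides. The generated parser bounds-checks its input
                                   * itself, so truncated or malformed data is rejected
                                   * instead of over-read. */
                                  HParser *word = h_many1(h_ch_range('a', 'z'));
                                  HParser *kv   = h_sequence(word, h_ch('='), word, NULL);

                                  const uint8_t input[] = "mode=fast";
                                  HParseResult *r = h_parse(kv, input, sizeof(input) - 1);
                                  printf("%s\n", r ? "parsed" : "rejected");
                                  if (r) h_parse_result_free(r);
                                  return 0;
                              }
                              ```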