1. 38
  1.  

  2. 9

    Nice. I hope this pushes more sites to avoid public asset CDNs and become more autonomous.

    1. 1

      A nice side effect of this may be that people are forced to streamline the resources they use, because they can’t rely on the shared cache as a crutch.

      1. 3

        The fun thing is that the concept of a shared CDN cache never worked well anyway. There are too many versions of libraries, spread across too many CDNs, to get any decent cache hit ratio. Browser caches are relatively small and short-lived because of the amount of data browsers churn through. And a shared CDN puts the extra cost of DNS + TCP + TLS latency on the critical path, so it would need a really high cache hit ratio to make up for that loss.
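
        To put a rough number on that connection cost, here’s a small sketch (not from the parent comment) using the Resource Timing API; the CDN origin is a placeholder, and cross-origin entries only expose these timings when the server sends Timing-Allow-Origin:

        ```ts
        // Measure the DNS + TCP + TLS overhead a third-party CDN adds
        // before the first byte of a resource arrives.
        const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
        for (const e of entries) {
          if (!e.name.startsWith("https://cdn.example.com/")) continue; // placeholder origin
          const dns = e.domainLookupEnd - e.domainLookupStart;
          const connect = e.connectEnd - e.connectStart; // TCP handshake, TLS included
          const tls = e.secureConnectionStart > 0 ? e.connectEnd - e.secureConnectionStart : 0;
          // All of these read 0 for cross-origin resources without Timing-Allow-Origin.
          console.log(`${e.name}: dns=${dns}ms connect=${connect}ms (tls=${tls}ms)`);
        }
        ```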

    2. 3

      Why not allow it to be controlled by a response header (with a safe default when the header is missing)? I’ve always thought highly of the browser shared cache. In an ideal world, the shared cache turns the browser into a dynamically evolving foundation. I’d be sad to see it go away.

      1. 4

        Whose response? The CDN’s or the first party’s?

        It’s up to the first party to judge whether it would be safe to serve from the shared cache. The first-party website would need an in-page mechanism to make that statement, either as an attribute on each element (yuck) or as a site- or document-wide policy in a header.
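
        For illustration, the header variant might look something like this sketch; the Shared-Cache-Policy name and syntax are entirely made up, and no browser implements anything like it:

        ```ts
        // Hypothetical: a first party opting specific URLs into the shared
        // cache via a document-wide response header. "Shared-Cache-Policy"
        // is a made-up name used only for illustration.
        import { createServer } from "node:http";

        createServer((req, res) => {
          res.setHeader(
            "Shared-Cache-Policy",
            'allow="https://cdn.example.com/jquery-3.6.0.min.js"' // placeholder URL
          );
          res.setHeader("Content-Type", "text/html");
          res.end('<script src="https://cdn.example.com/jquery-3.6.0.min.js"></script>');
        }).listen(8080);
        ```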

        That might work. I’m not sure.

        But can you give an example where you’d certainly know that it’s OK to opt-in to timing side channels?

        1. 3

          I think the best example of where it’d be okay is commonly used libraries hosted on popular CDNs. “The user has been on another website that uses jQuery” isn’t such a bad information leak. (Though even then, an attacker could probably figure out exactly which versions of which popular libraries you have cached, and from which CDNs, and use that to build a fairly good picture of the set of popular websites you’re likely visiting.)
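
          For concreteness, this is the kind of probe a shared cache makes possible; the 20 ms threshold is an arbitrary assumption, and with today’s partitioned caches the probe reveals nothing cross-site, which is exactly why partitioning was introduced:

          ```ts
          // Sketch of the timing side channel: time a fetch of a well-known
          // CDN URL; a near-instant response suggests some other site already
          // pulled it into the (hypothetically shared) cache.
          async function probablyCached(url: string): Promise<boolean> {
            const start = performance.now();
            // "no-cors" allows the cross-origin fetch; "force-cache" prefers the cache.
            await fetch(url, { mode: "no-cors", cache: "force-cache" });
            return performance.now() - start < 20; // arbitrary threshold
          }

          probablyCached("https://code.jquery.com/jquery-3.6.0.min.js").then(console.log);
          ```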

          1. 7

            The challenge with this is: Who decides what resources should get that treatment? The first-party website has an incentive to allow its advertising partners to track the user. The CDN operator has even more incentive. Some websites, and some CDNs, might be trustworthy enough not to use the flag gratuitously… but if we allow them to self-designate as such, we’re right back where we started.

            1. 2

              It might work like the media permissions: a website may request shared-cache treatment for some resources, and the user may allow it (for this request alone, or for every following website/request). That being said, it would probably be too annoying to actually work.
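
              Purely as a sketch of that idea: no “shared-cache” permission exists, and navigator.permissions.query() would reject the name today, so everything below is hypothetical:

              ```ts
              // Hypothetical media-permissions-style gate for shared caching.
              // "shared-cache" is a made-up permission name; real browsers
              // would throw a TypeError for an unknown permission.
              async function maySharedCache(): Promise<boolean> {
                try {
                  const status = await navigator.permissions.query(
                    { name: "shared-cache" } as PermissionDescriptor
                  );
                  return status.state === "granted";
                } catch {
                  return false; // unknown permission: stay partitioned
                }
              }
              ```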

              1. 1

                Yeah, I think that’s at least a feasible approach, if not for this particular optimization, then maybe for other privacy/performance trade-offs.

              2. 2

                I don’t know how it would work in practice, and to be clear, I don’t think it’s a good idea. It’s just the only example I can think of where it could make sense in principle, whether or not it would work in practice.

                1. 1

                  Yes, I certainly think it’s a scenario worth thinking through. Thank you for that.

          2. 3

            Partitioning is done for privacy. It won’t help if the bad guys can set an I’m-not-a-bad-guy header. If you’re hoping they wouldn’t so brazenly and openly violate privacy, look at what Google did with the P3P header: they famously sent a nonsense P3P policy so that Internet Explorer would keep accepting their third-party cookies.