1. 56

  2. 30

    Note that all caching advantages are gone now.

    Browsers don’t reuse cached 3rd party resources any more. Caching is partitioned per top-level origin, and browsers will intentionally download redundant copies of 3rd party scripts. This prevents cross-site tracking via cached scripts.

    From a performance perspective, script CDNs are a pure negative now. You pay the cost of additional DNS+TCP+TLS connections, and you lose HTTP/2 prioritization against 1st-party resources.

    1. 5

      Although, truth be told, a lot of those downsides are alleviated by DNS prefetching techniques, TLS 0-RTT handshakes, and the vastly improved TCP stacks in mainstream CDN deployments nowadays.
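
      For example, resource hints can front-load most of that connection cost. A rough sketch (the CDN hostname and script path are placeholders):

      ```html
      <!-- Warm up DNS/TCP/TLS to the third-party host before the script is needed.
           dns-prefetch is the fallback for browsers without preconnect support. -->
      <link rel="preconnect" href="https://cdn.example.com">
      <link rel="dns-prefetch" href="https://cdn.example.com">
      <script src="https://cdn.example.com/lib/1.2.3/lib.min.js" defer></script>
      ```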

      1. 3

        So in the best case it can be improved from being worse than 1st party to still worse than 1st party.

        1. 1

          The point is being able to improve all parts of the “stack”. Going from worse than 1st party to still worse than 1st party doesn’t necessarily mean bad/unoptimized/slow.

      2. 5

        The whole “performance” thing was always complicated, with or without caching. It certainly can be faster, but there are a bunch of factors; back in 2012, when we were using a CDN at my job at the time, I found it wasn’t really all that much faster in practice. To quote some things I wrote about it back then:

        • What if my website specifically targets Dutch users and my server is in the Netherlands? Is it still faster for all my Dutch users?

        • CDN performance may not be consistent. One particular location may be blazing fast, and another may be very slow (and how do you know which locations are slow?)

        • A CDN also introduces performance overhead in the form of a DNS request, a new TCP connection, and possibly a new TLS negotiation; so a CDN has to not only be faster, it has to be fast enough to offset this.

        • Average load times are nice, but what about the worst possible load time? In my experience, this is often a lot worse with CDNs. Like with most things in life, a single ‘average’ number is pretty useless.

        CDN performance is pretty hard to measure. In fact, I can’t really find any good figures on the web that aren’t produced by a CDN provider (and thus not reliable). My personal experience is checkered, and while a CDN can most certainly improve the load performance, I would be careful in just assuming so.
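
        If you want your own numbers rather than a vendor’s, the Resource Timing API gives you real-user measurements. A sketch (the hostname check is a placeholder):

        ```js
        // Log DNS/connect/total timings for resources served from a given host.
        // Detailed cross-origin timings require a Timing-Allow-Origin header on
        // the responses; without it most of these fields report as 0.
        for (const entry of performance.getEntriesByType('resource')) {
          if (new URL(entry.name).hostname === 'cdn.example.com') {
            console.log(entry.name, {
              dns: entry.domainLookupEnd - entry.domainLookupStart,
              connect: entry.connectEnd - entry.connectStart,
              total: entry.duration,
            });
          }
        }
        ```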

        It is already very easy to serve static content with very high performance using tools such as Varnish. In many cases, this is about as fast (and certainly ‘fast enough’).

        And not all “caching” is the same; Netlify implements caching with ETags, so your browser still sends a request with If-None-Match. This is certainly better than no caching at all, but it’s not the same as directly serving a file from the local cache based on an expiry date, which is much faster, and the difference is noticeable in some cases (on the other hand, cache invalidation is harder with expiry-based caching, especially for something like /index.html).
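
        Roughly, the difference between the two modes looks like this (a sketch; paths and header values are made up):

        ```
        # ETag revalidation: a conditional request still goes out on every load
        GET /app.js HTTP/1.1
        If-None-Match: "5d41402a"

        HTTP/1.1 304 Not Modified          # no body, but still a round trip

        # Expiry-based: no request at all until max-age runs out
        HTTP/1.1 200 OK
        Cache-Control: public, max-age=31536000, immutable
        ETag: "5d41402a"
        ```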

      3. 14

        That’s the complaint I had too in ~2013. The then-chair of the W3C WebAppSec working group convinced me (and my collaborators) to create and specify Subresource Integrity. The lesson is: never complain about stuff :P

        On a more serious note, maybe we ought to reconsider CDNs and SRI now that browsers are all going to stop sharing caches and double-key every cache entry by its first-party origin.

        Firefox has a pref, “first party isolation”, that you can experiment with.
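
        For reference, using SRI is just a pair of attributes on the script tag. A sketch (the URL and hash are placeholders):

        ```html
        <!-- The browser hashes the fetched bytes and refuses to execute the
             script if the digest doesn't match the integrity value. -->
        <script src="https://cdn.example.com/lib/1.2.3/lib.min.js"
                integrity="sha384-BASE64_DIGEST_OF_THE_FILE"
                crossorigin="anonymous"></script>
        ```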

        1. 3

          I wonder what the perf hit here is. Has Mozilla collected any metrics? With HTTP/2 the cost of establishing additional connections to the origin is gone, but bytes still take time.

          I guess with SRI you need to wait until all the bytes are down before processing can start, whereas without it (e.g. same-origin scripts) parsing and evaluation can happen in parallel with the download…

          1. 2

            Can’t speak from an implementor’s perspective here. I’m not involved in these specific bits, but I believe some privacy & security measures are worth it, as long as the cost isn’t totally off the scale.

          2. 2

            now that browsers are all going to stop sharing cache

            This would also make adding dependencies no longer zero cost, because you can’t expect e.g. jquery to be cached?

          3. 5

            I think this article is missing the point of using bundlers (webpack, Parcel) and tree shaking. If you don’t rely on a CDN, you can optimize your bundle to include only the code you actually need.
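
            Something like this (illustrative only; the library and function are just examples):

            ```js
            // With a bundler, a named ES-module import lets tree shaking drop
            // everything you don't use, instead of shipping a whole library from a CDN.
            import { debounce } from 'lodash-es';

            window.addEventListener('resize', debounce(() => {
              console.log('resized');
            }, 250));
            ```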

            1. 3

              This is interesting ’cause it highlights larger problems, actually. Several of their issues – versioning, security, caching – would all be improved with a content-addressed system of some kind.

              1. 3

                Unfortunately hugely incompatible with Subresource Integrity :-(

                We tried to shoehorn it into the spec, but it would lead to some sad (or funny) CSP bypasses. I guess you’ll need another URL scheme for it to work out.

                1. 1

                  Subresource Integrity is a new feature to me, so I’m not sure how it would work here. It looks like it already tries to do something like that, if I’m interpreting that hash correctly. I’ll have to read up on it.

              2. 3

                I wonder how much of this can be ascribed to web developers being lazy, or for whatever other reason not wanting to host the JS code on their own servers? Because it should be obvious that, no matter what the downsides are, just adding a <script src="..."> to the head is pretty easy.

                1. 12

                  adding a <script src="..."> to the head

                  Tip: if you’re writing script elements without an async or defer attribute, add them to the bottom of body rather than to the head. When browsers parse HTML, as soon as they find a script element with neither of those attributes, they pause parsing the rest of the page in order to parse and execute that script. So if the script is in the head, that means users will have to wait longer to see the body.

                  Another disadvantage of scripts in the head is that they break if they refer to an element on the page without waiting for the DOMContentLoaded event. This problem can also be solved by adding the defer attribute.
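
                  i.e. something like this (sketch):

                  ```html
                  <!-- `defer` downloads in parallel with HTML parsing and runs the
                       script only after the document has been parsed, in source order,
                       so it can stay in <head> without blocking rendering or racing
                       the DOM. -->
                  <head>
                    <script src="/js/app.js" defer></script>
                  </head>
                  ```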

                  1. 4

                    Sorry, but how is that related to what I’m saying? :/

                    1. 7

                      You said web devs should add a script tag to the head. I found useful the reply detailing why that isn’t best practice.

                      1. 1

                        My point was that it’s easier to just reference a CDN (along the lines of what @pbsds also said) than to host it yourself, which was a point the article didn’t seem to comment on. Where specifically the script tag is added is unrelated, as far as I understand what @royokane wanted to say.

                        1. 3

                          My comment wasn’t meant to refute or support your main point. I just wanted to highlight a best practice that you, and perhaps other people, didn’t know of.

                  2. 3

                    I just don’t want to run npm or store the minified bundles in my repos

                    1. 2

                      Nothing wrong with being lazy :-) I think it all depends on what you’re using it for. When I create a “serious” website I almost always self-host everything; but sometimes I create simple static (sometimes temporary) websites that consist of just a single HTML page, and I don’t really see a problem with being lazy and using a CDN or Google fonts or whatnot in those cases.

                      But external JS on your payments page? Yeah, that’s just stupid. Years ago I found that a server admin tool did the same on their backend interface, where you could manage your entire server. This was 6 years ago, so hopefully it’s better now (I’m not 100% sure which one it was, only 90% sure, so I’d prefer to avoid naming and shaming it since I might name the wrong one).

                    2. 1

                      This article pretty much overlaps with my attitude towards 3rd-party loaded stuff.

                      But, if you really have to load something from a CDN, just put it on a subdomain of your org and set up a reverse proxy for it (see the sketch after the list below). For example, if your site is babecook.com, just add scripts.babecook.com or assets.babecook.com.

                      Not bbcdn.com, babeasetts.com or babecookusercontent.com. How the hell does that even add up in yearly domain / SSL cert costs?

                      This approach opens several possibilities:

                      • We could finally make a “first party only” policy work for real, and be able to browse sites when all other domains are denied.
                      • You can replace CDNs without any issues or code updates, and you can even store content on multiple CDN providers, switching between them round-robin for cost optimization.
                      • You would spend less on domains and SSL wildcards.
                      • If something goes wrong, you can simply start hosting the assets yourself, or move the infrastructure behind it to whatever you want, as long as the domain stays unchanged.
                      • This also gives a somewhat cleaner visual indication of what is being loaded, for example: scripts.babecook.com/js/jquery/1.2.3/jquery-1.3.4-min.js is much more understandable than bbcdn.net/5d41402abc4b2a76b9719d911017c592.js.
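
                      A minimal reverse-proxy sketch for the approach above (nginx; the upstream host and cert paths are placeholders):

                      ```nginx
                      server {
                          listen 443 ssl;
                          server_name scripts.babecook.com;

                          ssl_certificate     /etc/ssl/babecook-wildcard.crt;   # placeholder
                          ssl_certificate_key /etc/ssl/babecook-wildcard.key;   # placeholder

                          location / {
                              # Swap this upstream for another CDN, or for your own box,
                              # without touching a single URL in the site's markup.
                              proxy_pass https://upstream-cdn.example.net;
                              proxy_set_header Host upstream-cdn.example.net;
                          }
                      }
                      ```
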
                      1. 4

                        As a general security principle, you don’t want to let potentially active content controlled by someone else be served from your domain or any subdomain thereof. This is why so many sites use a different domain (or sometimes just a different TLD with the same “brand” name, like github.com versus github.io) for asset hosting and user-generated content.

                        1. 3

                          Not bbcdn.com, babeasetts.com or babecookusercontent.com. How the hell does that even add up in yearly domain / SSL cert costs?

                          They’re doing that because they want to avoid sending cookies for every tiny avatar pic. Being in a different origin is literally the whole purpose of doing it that way.

                          1. 3

                            It’s a security thing. You don’t want to host potentially active user content under a domain that has auth cookies.

                            1. 2

                              HTTP/2 has reversed a lot of such best practices. Cookie headers now cost almost nothing thanks to HPACK header compression, domain sharding is obsolete, and extra domains add round trips and interfere with HTTP/2 prioritization.

                            2. 1

                              How the hell does that even add up in yearly domain / SSL cert costs?

                              Probably about $10 per domain per year in total?