1. 54
    1. 57

      ironically, posting this here will have provoked it again from the various bots that post lobsters submissions on mastodon.

      That said, the number of complaints about this from technical sites does surprise me. Yes, it’s an annoyance, and solutions should totally be (and are being) explored, but even minimal caching or filtering can easily mitigate it. A surge of requests for a specific URL, unauthenticated and clearly identified in the request headers, is the best possible case. So I don’t think it’s unreasonable that this isn’t a top priority to fix immediately, and this article could’ve been “here’s the problem and here’s the config snippets we used to mitigate it”.
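      To make “minimal caching” concrete, here’s a rough sketch of the kind of thing I mean, written as a hypothetical Python/WSGI micro-cache rather than whatever stack itsfoss actually runs; the framework, the 60-second TTL, and the “GET without cookies” rule are all my own assumptions:

      ```python
      # Rough sketch only: a tiny in-process micro-cache as WSGI middleware.
      # The 60-second TTL and the "cache only cookie-less GETs" rule are
      # illustrative assumptions, not anything itsfoss is known to run.
      import time

      class MicroCache:
          def __init__(self, app, ttl=60):
              self.app = app
              self.ttl = ttl
              self.store = {}  # path -> (expires_at, status, headers, body)

          def __call__(self, environ, start_response):
              key = environ.get("PATH_INFO", "/")
              hit = self.store.get(key)
              if hit and hit[0] > time.time():
                  _, status, headers, body = hit
                  start_response(status, headers)
                  return [body]

              captured = {}
              def capture(status, headers, exc_info=None):
                  captured["status"], captured["headers"] = status, list(headers)
                  return start_response(status, headers, exc_info)

              body = b"".join(self.app(environ, capture))
              if environ.get("REQUEST_METHOD") == "GET" and "HTTP_COOKIE" not in environ:
                  self.store[key] = (time.time() + self.ttl, captured["status"],
                                     captured["headers"], body)
              return [body]
      ```

      Even something this naive turns a burst of identical preview fetches into roughly one backend hit per minute per URL.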

      1. 43

        For the record, I got this link from the official Lobsters bot on Mastodon. ;)

        1. 2

          I think it won’t - most mastodon servers have probably already generated the preview the first time this happened, so you’re seeing that one.

        2. 9

          The problem is probably the hosting that is available to some. If you run this from your home server, it won’t be nice to get this amount of traffic at once. The same goes if the hosting has a low monthly traffic limit. And if you do that via AWS, you may pay based on the amount of traffic. So Mastodon instances acting like a herd of bots, all individually fetching the preview, is obviously not nice. Especially if we do hope to have more Mastodon instances in the future.

          Then again, I totally agree that you should be able to deal with the traffic it produces right now. If that link lands in the top spot on HN or Twitter, the traffic will be much higher.

          1. 18

            right, the edge cases exist, and if we can avoid it we shouldn’t accidentally blow hobbyists with Raspberry Pis on their home internet, small non-technical businesses on crappy shared hosting, or people who just made a mistake in their setup out of the water, so any progress on this is welcome.

            But itsfoss.com and other technical bloggers are pretty clearly not in those categories, and it’s kind of weird to see posts like this presenting it as if it’s causing them downtime that they couldn’t easily fix by applying some best practices.

            1. 15

              If you run this from your home server, it won’t be nice to get this amount of traffic at once.

              Once you’re exposing web services to the public internet, this is really out of your control.

              The same goes if the hosting has a low monthly traffic limit.

              All the still-existing cheap PHP shared webhost providers don’t bother metering traffic if you’re not in the terabytes.

              And if you do that via AWS, you may pay based on the amount of traffic.

              Don’t choose AWS then. It’s not a cheap option! That said, rounding up to 115MB, if this happened every hour for a 31-day month (for a total of 83.56GB) to the most expensive egress destinations, you’re looking at $10.27 (rounding away from zero). Given that you’d be hoping that doing so would serve up more ad-paying impressions than bot traffic, it shouldn’t really be a problem.
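              For anyone who wants to check the arithmetic, the figures above come out of something like this (the per-GB price is just whatever rate reproduces the quoted $10.27, not an official AWS figure):

              ```python
              # Back-of-the-envelope check of the numbers above; the implied per-GB
              # rate is derived from the quoted $10.27, not taken from a price list.
              mb_per_burst = 115            # 114.7 MB rounded up
              bursts = 24 * 31              # one burst per hour for a 31-day month
              total_gb = mb_per_burst * bursts / 1024   # ~83.55 GB
              implied_rate = 10.27 / total_gb           # ~$0.123 per GB of egress
              print(f"{total_gb:.2f} GB at ~${implied_rate:.3f}/GB")
              ```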

              1. 3

                low monthly traffic limit.

                do the math and configure your webserver accordingly.

                1. 17

                  webserver

                  And maybe make it so that your site doesn’t transfer 3.3 megabytes of data (with adblocking enabled!) to display 25 kilobytes of HTML.

                  1. 10

                    Ironically, for the topic of the post it shouldn’t even matter as the crawler should just be looking for OpenGraph tags in the HTML, and then just whatever preview image they’ve put in there.

                  2. 4

                    You mean “slap cloudflare on it and be done”. Because that’s what usually happens.

                    1. 9

                      Funnily enough, the itsfoss operators in question here did what both of your comments mention - misconfigured their webservers and slapped cloudflare on it.

                      1. 3

                        no, I don’t mean offloading to a third party.

                  3. 2

                    I agree, but it occurred to me that you could make your own Mastodon DDoS bot network by signing up accounts on every node you can find and making them follow each other, such that you need only share a link to your target and all of the nodes would essentially attack it.

                    Even though everyone running a website ought to be thinking about DDoS prevention, it definitely seems like something Mastodon should patch, right? Also, DDoS prevention in many cases means signing up for a centralized service like Cloudflare, which seems to contradict the open, decentralized ethos of the Internet.

                    1. 6

                      Theoretically, yes, but I doubt it’s particularly effective; the fediverse is not that large. If you want to send a few thousand requests over a few minutes to a site, you can also just take a random VPS and run curl or wget in a loop, which is much easier than signing up to a pile of fediverse instances (and I suspect the long tail of instances is private/friends-only, so you cannot create accounts on that many of them). And the traffic Mastodon generates is easily identified and filtered. Hence I don’t think there is much potential for it to be “weaponized”, and if there were we’d be seeing that already.

                      I suspect most “DDoS protection” services wouldn’t even trigger on this amount of traffic.

                      As I’ve said repeatedly in this thread, I think it’s worth looking into improving this, but it’s not as big a deal as the article suggests, which seems really weird given who seems to be running this site.

                  4. 50

                    The Mastodon “everybody downloads the page at once” behavior is silly, and it will get worse as the network grows. But, at LWN at least, the thundering herd isn’t really a challenge to cope with. Some server performance tuning may well be in order.

                    1. 29

                      Agreed. I notice that this site sets max-age=0 in its cache-control header, and the Cloudflare-specific cf-cache-status header is set to “DYNAMIC”. An unauthenticated request to an article like this should be served from Cloudflare’s cache, and possibly even pre-rendered on the origin.

                      1. 27

                        Yeah, I’m having trouble wrapping my head around “Mastodon servers cause us 30 minutes of downtime per week” when combined with “we use Cloudflare as our CDN.” What?!

                        I suppose they could be having downtime for reasons unrelated to the large numbers of Mastodon requests. But either way, it seems like something is not configured properly here—maybe the CDN and the upstream server.

                        1. 2

                          This probably explains why:

                          https://developers.cloudflare.com/cache/how-to/edge-browser-cache-ttl/

                          For a blog site, you probably don’t want a 1-2 hour TTL, and I doubt most Cloudflare users are there for the “starts at $200/month” option. Of course, individual posts could have a 1-2 hour TTL and only the front page could be served dynamically, but then edits still take a while to propagate.

                          Me being silly. See below.

                          1. 15

                            There’s an option to respect origin server headers. Setting cache to 1 second would have been enough to avoid getting hammered (if their server can’t serve 1 page per second, they need to upgrade their Commodore 64).

                            1. 5

                              D’oh, totally forgot this.

                              But yeah, even a really short TTL should vastly mitigate the OP’s issue. I don’t think a blog has much need for a TTL under even 60 seconds.
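                              For illustration, a minimal sketch of the origin side, assuming a hypothetical Flask app (not Ghost, which is what the site actually runs); the only interesting line is the Cache-Control header, which Cloudflare can be told to respect:

                              ```python
                              # Minimal sketch, assuming a hypothetical Flask origin; the point is the
                              # Cache-Control header, which lets a CDN absorb preview bursts.
                              from flask import Flask, make_response

                              app = Flask(__name__)

                              @app.route("/blog/<slug>")
                              def article(slug):
                                  # Stand-in for the real CMS render; the body itself doesn't matter here.
                                  resp = make_response(f"<html><head><title>{slug}</title></head></html>")
                                  # A 60-second shared-cache TTL turns a Mastodon preview burst into at
                                  # most one origin hit per minute per URL.
                                  resp.headers["Cache-Control"] = "public, max-age=60, s-maxage=60"
                                  return resp
                              ```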

                              1. 5

                                Cloudflare isn’t one cache, it’s a bunch of caches that independently call back to your server, so a TTL of 1 is still many more requests per second than 1.

                                Still, not enough that they shouldn’t be able to serve them…

                                1. 1

                                  I think they offer “tiered cache” to avoid this problem. But maybe only on paid plans.

                                  Basically they nominate a single one of their datacenters to make origin requests and their other PoPs go via that datacenter.

                        2. 27

                          I have a hard time believing this is an actual problem.

                          Sure, I know the effect. If my content gets shared on Mastodon or if I do that myself, and I have a look at the server logs, I see plenty of mastodon bot accesses. But they never get anywhere close to a volume that would worry me. And no, I don’t use a CDN.

                          It’s also not clear how it should be “fixed”. You could have only the original mastodon server load the preview, but then essentially all other mastodon servers would trust that server. That seems counter to the idea of a decentralized social network where you primarily trust your own server.

                          If those requests cause a problem for you, very likely you have some fundamental performance issue within your website’s software stack.

                          FWIW, there’s another issue with previews that gives you even more traffic bursts: Some social media sites tend to directly use your opengraph images in their html preview (i.e. everyone seeing that post will load the image from your server). I haven’t had any performance problems with that either, likely because none of those social media sites are particularly big. A somewhat noteworthy affected site is post.news. (If I operated a site doing this, I would be worried about privacy implications. I doubt this can be legal in any place where privacy laws exist, as you’re essentially sending the IPs of your visitors to random third parties that you cannot control.)

                          1. 5

                            While I agree that this shouldn’t really be an issue for most sites to handle, I think it still makes sense to “fix” it.

                            I see two main solutions:

                            1. Embed a preview in the data federated from the originating instance. Pro: one request. Pro: you see what the original poster saw. Con: it may not accurately reflect the linked site.
                            2. Delay fetching to smooth out the traffic. I can imagine adding a heuristic like “how many followers does the posting account have” to scale it, but even that may be overkill. Something like a random delay of up to sqrt(followers) seconds seems to produce a relatively reasonable curve for spreading the traffic (rough sketch below).
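                            Here’s the rough sketch of option 2; the sqrt scaling and the ten-minute cap are numbers I made up for illustration, not anything Mastodon actually implements:

                            ```python
                            # Rough sketch of option 2; the sqrt scaling and the 10-minute cap are
                            # made-up illustration values, not Mastodon's actual behaviour.
                            import math
                            import random

                            def preview_fetch_delay(followers: int, cap_seconds: float = 600.0) -> float:
                                """Seconds a remote instance could wait before fetching the link preview."""
                                return random.uniform(0.0, min(math.sqrt(max(followers, 1)), cap_seconds))

                            # An account with 20,000 followers spreads its instances' fetches over ~141 s.
                            print(preview_fetch_delay(20_000))
                            ```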
                          2. 66

                            In total, 114.7 MB of data was requested from my site in just under five minutes

                            If a web server has a problem serving 114MB of data in five minutes… maybe improve the web server performance? 114MB is peanuts by modern standards. Expecting a centralized platform to work as a free-of-charge CDN doesn’t seem sustainable at all — how will the server cope if people start clicking on those links en masse?

                            1. 61

                              I find it hilarious the author is complaining about 114.7 MB when they have zero respect for the user’s bandwidth. Open up this page without an ad blocker, scroll through it, and take a look at the network tab:

                              5382 requests | 119 MB transferred | 336 MB resources

                              That was after a couple minutes. It keeps going up with no sign of stopping.

                              1. 17

                                I find it hilarious how few people actually read the article. The quote about 114.7 MB is from a different person who independently observed the effect on a different website.

                                1. 18

                                  I read it. I’m not talking about Chris Partridge, who may legitimately complain about 114.7 MB. I’m talking about the author of the It’s FOSS article Ankush Das, who is sharing the quote as an example of excessive bandwidth, and complaining (hypocritically) that Mastodon is DDoSing It’s FOSS as well.

                                  1. 5

                                    Right, I replied to you in a “things I also find hilarious” sense, not because I think you’re wrong, though I see how that’s unclear. The comments about server performance to me seem to be confusing which site is which.

                                    1. 5

                                      Ah I understand now, that’s fair.

                              2. 14

                                To be charitable to the author, that was a third party engineer’s test and not necessarily representative of the load itsfoss is seeing. The full quote:

                                Quoting Chris Partridge’s (a security engineer’s) older findings, he mentioned:

                                However, I got a bit of a nasty surprise when I looked into how much traffic this had consumed - a single roughly ~3KB POST to Mastodon caused servers to pull a bit of HTML and… fuck, an image. In total, 114.7 MB of data was requested from my site in just under five minutes - making for a traffic amplification of 36704:1.

                                1. 13

                                  The crucial point is that someone needs to serve that traffic anyway, if we assume that link previews are a good thing. Facebook, Ex-Twitter, etc. will query the server of itsfoss.com just like Fediverse instances do. Then they will serve that data to everyone who views the timeline, so the “traffic amplification” is still there; it’s just someone else who picks up the tab. It’s only less noticeable because there are just a handful of those platforms instead of thousands of instances. That is like saying “I want fewer social media platforms on the Internet” without saying it explicitly.

                                  From this perspective, why have a website at all? If you want to offload everything on centralized megacorp-owned platforms, just post to them directly.

                                  This is not to say that link previews cannot be more efficient: I suppose one problem is that a social media server fetches a complete page just to make a low-resolution version of the header image and extract some headings and first paragraphs. Web servers could generate their own previews and provide endpoints for querying just the previews.

                                  But still, if you can’t serve a page to thousands of Fediverse servers, you are already in trouble when thousands of users come to your page.

                                  1. 9

                                    Web servers could generate their own previews and provide endpoints for querying just the previews.

                                    That’s why we have the opengraph standard: https://www.opengraph.xyz

                                    The site itself can provide og: meta tags with content, and the social site should use those to create the preview. It only needs to fetch the raw HTML for whatever link - and to optimise more, only the `<head>` is really needed. You can stop parsing as soon as you hit `</head>`.
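                                    As a rough illustration of how little a preview fetcher actually needs (my own sketch, not how Mastodon’s card fetcher is implemented):

                                    ```python
                                    # My own rough sketch, not Mastodon's implementation: collect og: meta
                                    # tags and flag when </head> is reached so a streaming caller could
                                    # stop fetching the rest of the page.
                                    from html.parser import HTMLParser

                                    class OpenGraphParser(HTMLParser):
                                        def __init__(self):
                                            super().__init__()
                                            self.og = {}
                                            self.head_closed = False

                                        def handle_starttag(self, tag, attrs):
                                            a = dict(attrs)
                                            if tag == "meta" and (a.get("property") or "").startswith("og:"):
                                                self.og[a["property"]] = a.get("content", "")

                                        def handle_endtag(self, tag):
                                            if tag == "head":
                                                self.head_closed = True  # everything a preview needs is above here

                                    parser = OpenGraphParser()
                                    parser.feed('<html><head><meta property="og:title" content="Example">'
                                                '</head><body>huge body a preview fetcher never needs</body></html>')
                                    print(parser.og)  # {'og:title': 'Example'}
                                    ```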

                                    1. 2

                                      Worth noting your HTML tags here have been treated as HTML tags and sanitised out. You can escape them like \<this>, or put them in backticks to get inline code spans (i.e. `<this>`).

                                      1. 0

                                        Oh, thanks, I didn’t know about that standard.

                                        1. 7

                                          note though that that is what Mastodon is already using - the complaint is exactly about it fetching the page to look for those meta tags.

                                      2. 4

                                        I don’t think you can assume that link previews are a good thing. Also a service like Facebook or Ex-Twitter will generate the preview once and then use that preview for every other occurrence, which limits the amount of amplification that occurs. The Fediverse doesn’t have that mitigation.

                                        1. 5

                                          I don’t think you can assume that link previews are a good thing.

                                          You can from the perspective of a site operator though? Embeds with images, titles and intros drive more engagement.

                                          1. 8

                                            They’re nicer for me as a user, too. Seeing the real page title gives me more information, and in a much nicer form, than looking at a raw URL.

                                            1. 3

                                              I think that probably depends on the site operator. Probably it’s good for certain customers and sites. I don’t think it’s a universal rule though.

                                        2. 2

                                          Wouldn’t it be possible to use a lower-quality image when serving to Mastodon?

                                        3. 4

                                          That’s not the important part of that test. The important number from the test is this: a traffic amplification of 36704:1.

                                          1. 10

                                            This isn’t a traffic amplification attack. That term refers to attacks where the traffic goes to someone other than whoever made the request. In general, TCP-based protocols (such as HTTP) are immune to this issue.

                                            For example it looks like this:

                                            1. 10.0.0.1 sends a small DNS query to 10.1.0.0 with a spoofed source address of 10.0.0.2
                                            2. 10.1.0.0 sends large DNS response to 10.0.0.2.

                                            In this way 10.0.0.1 has amplified its attack traffic to attack 10.0.0.2.

                                            You could get a similar “amplification” to what is mentioned in this article by downloading an image with a small HTTP request. It is slightly worse because you get various hosts to help out. But it really isn’t that effective because it only works if you can post to accounts where people are following you or where people re-share your post.

                                            1. 7

                                              If it were a practical attack that number would matter. But then we’d also see it happening as an attack, and I’m not aware of “high-reach accounts on Mastodon post links to take sites down” being a thing?

                                              I think the concept of an amplification factor is not very relevant here. One could imagine “make a network of fake accounts following each other and spamming links” as a potential vector, but it’d also likely be caught and cleaned up fairly quickly, and such low traffic levels are trivial to reach in many other ways. An amplification factor is mostly interesting if you can scale it into large-scale attacks, and you can’t do that here.

                                          2. 21

                                            while I understand the surprise, if you don’t want to serve that content, then don’t. Reply with e.g. a 429 for those user agents. Mastodon is friendly and sets a sensible agent. Serve the readers you want to and 429 the others.
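                                            Something like this is all I mean; written here as a hypothetical WSGI filter, and the User-Agent substring is an assumption about how the preview fetcher identifies itself, so check your own access logs first:

                                            ```python
                                            # Sketch only: answer 429 to Mastodon's preview fetcher at the
                                            # application level. The "Mastodon" User-Agent substring is an
                                            # assumption; adjust it to whatever your logs actually show.
                                            def reject_preview_bots(app):
                                                def middleware(environ, start_response):
                                                    ua = environ.get("HTTP_USER_AGENT", "")
                                                    if "Mastodon" in ua:
                                                        start_response("429 Too Many Requests",
                                                                       [("Retry-After", "300")])
                                                        return [b"link preview traffic is rate limited\n"]
                                                    return app(environ, start_response)
                                                return middleware
                                            ```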

                                            Oh, and minimize and cache (or generate statically) your content in the first place.

                                            Reach is expensive after all. When posting on your server, you happen to see it, which I consider a good thing.

                                            1. 18

                                              This seems to be a website with mostly static contents, but the errors they complain about (504s) indicate that their backend can’t keep up with their reverse proxy.

                                              It’s probably a case of the backend being a pile of PHP that can serve about 5 rps, and misconfigured caching. If you end up with an article that goes viral you’ll have the load anyways, so better to stop making excuses and just fix your configuration.

                                              1. 1

                                                It seems to be a (huge!) pile of JS. Going by the RSS feed’s <generator> tag, the CMS is something called Ghost 5.81, which is probably this thing. I didn’t bother looking very far but it seems like some over-abstracted monstrosity (but then again, aren’t most CMSes?)

                                                1. 1

                                                  There are add-ons to turn Ghost’s output into a static site.

                                                  1. 1

                                                    That probably takes away a lot of convenience in the editorial workflow, though. Typically there are other severe limitations to bolting a static site generator onto a dynamic CMS.

                                                    1. 2

                                                      Do you have examples?

                                                      Normally I see an SSG as being a final publishing stage. Dink around all you like in the live preview, when you’re done, hit publish and the SSG pulls everything out into a deployable tree.

                                                      1. 1

                                                        Most of the trouble I am thinking of is related to the expectations of existing add-on modules and the like, which typically expect to be able to process requests dynamically.

                                                        Also, often it may not be possible to incrementally update static pages, so you’d have to do an entire conversion run of the entire site, which may take ages because everything’s so dynamic and slow (as that’s the reason you want to use a static site converter).

                                              2. 16

                                                While I personally disagree with the logic that trusting link previews from any other server in any context is a security problem (for instance, you could have a configurable or heuristic-based way to trust link previews from some other servers), I do understand why trusting link previews from all other servers isn’t something Gargron is willing to implement. This gets brought up periodically and I have to say that I think this is less a “DDoS” and more of a warning shot for what unwieldy, super heavy websites will look like to host in an era of re-decentralization. To use the example given:

                                                In total, 114.7 MB of data was requested from my site in just under five minutes

                                                That’s almost exactly 3 Mbps, the minimum upload speed the FCC defines as “broadband.” Most urban and suburban home connections in the USA could serve that load, let alone any competently configured datacenter or cloud setup. 3 Mbps is not a “DDoS”; if your web server collapses under that load, your web server is not fit for purpose.

                                                  1. 18

                                                    Using the term “traffic amplification” here makes it sound like it’s a targetable thing, like NTP or DNS attacks we’ve seen in the past, but that’s not true. We don’t call posting a link on Reddit that then gets hugged to death “traffic amplification”; if your server can’t handle it, and you don’t want to get a new server, make the users wait or use a global rate limit.

                                                1. 15

                                                  Did anyone visit this page without an adblocker? I did, and I could barely read the text for all the ads present on that page. I wonder if these are also pulled by Mastodon servers?

                                                  1. 26

                                                    Same, I’m never visiting this site again. The page was about 75% covered by ads. It’s so bad that this happened: “This ad used too many resources for your device, so Chrome removed it”. I don’t even have an ad blocker installed! Never seen that before.

                                                    1. 21

                                                      “This ad used too many resources for your device, so Chrome removed it”

                                                      What the hell! That’s hilarious! I had to look this up because I had no idea that this is a real thing. https://developer.chrome.com/blog/heavy-ad-interventions

                                                      1. 10

                                                        Disabled my adblocker to have a look-see, and was floored by:

                                                        You may click to consent to our and our 1424 partners’ processing as described above

                                                        I’ve seen as high as 800 before, but 1424!?

                                                      2. 8

                                                        I tried it with the adblocker disabled, and it ended up loading about 6 megabytes of stuff (in almost two minutes!) and then printed a big error into the console that ended with “ad-serving disabled in RU”.

                                                        I guess the natural adblocker of my IP range saves me again ;)

                                                        1. 4

                                                          no. They’ll try to find OpenGraph information, maybe fall back to something from the HTML, fetch a preview image if they can identify one, and that’s it; they’re not grabbing random images or executing scripts that load ads, etc.

                                                          1. 1

                                                            I have an ad blocker and I was surprised by the number of ads given how rarely they make it through!

                                                          2. 12

                                                            As others have pointed out, the answer here is caching, and it’s not that hard for a site operator.

                                                            But more than that, I like the site operator being able to choose their caching strategy for this data (cloudflare? sure! local varnish? ok! some kind of weird bespoke mmap implemented in bash? you do you!). That’s better than forcing all clients to go through a central service (like Facebook). That’s just shifting all control to the centralized operator for the sake of simplicity. If that’s the sort of site you want to run, maybe you shouldn’t run your own hosting infrastructure.

                                                            Now, thundering herds are bad in general and I would support some jitter in the mastodon link fetcher, and hell shared previews aren’t bad if you’re already passing metadata around the federation. All of that makes sense. But its absence does not, I think, warrant this kind of response. The pull quote at the bottom of the article is:

                                                            The decentralized social media idea should fix things on the web, and not break the traditional web experience.

                                                            This is the traditional web experience. You know, with web servers, serving web pages. Not just feeding content to a small number of large aggregators who spit it into a feed.

                                                            1. 7

                                                              I would support some jitter in the mastodon link fetcher

                                                              Mastodon already has random 60s delay: https://github.com/mastodon/mastodon/issues/4486#issuecomment-1433573505

                                                            2. 10

                                                              I’ve had this problem affect my blog, which runs on a $7/month Heroku dyno.

                                                              I addressed it with Cloudflare, but it’s easy to underestimate quite how much traffic Mastodon can produce for those link previews - this isn’t a trivial problem.

                                                              Here’s my issue from last time I dealt with this: https://github.com/simonw/simonwillisonblog/issues/415

                                                              1. 2

                                                                For comparison and a possible upper bound, I don’t see this on my $24/mo DreamCompute instance running PeerTube, and I have about the minimum popularity possible while still getting hits for these link previews. I also haven’t seen this on my $35/mo classic Dreamhost instance, which hosts a very popular webcomic that gets posted to Reddit regularly. Note though that I chose Dreamhost specifically because of cheap/free bandwidth on their hosting plans.

                                                                1. 2

                                                                  I have 20,000 Mastodon followers which is likely a big factor in this - I’ll get an incoming hit from every server with at least one of my followers on it, which is likely in the hundreds or low thousands.

                                                              2. 19

                                                                Next week: Please Don’t Share Our Links on Lobsters/HN

                                                                1. 8

                                                                  Is Mastodon the issue here, or is it the concept of link previews?

                                                                  1. 8

                                                                    The issue is they failed to configure proper cache-control headers in their CMS and failed to configure their CDN to cache HTML.

                                                                    1. 7

                                                                      The issue is that the vast majority of social platforms will make 1 request per post to calculate a link preview (if that - not sure offhand if they cache link previews between posts). Mastodon is likely to make dozens in a short timespan, as every instance generates its own link preview.

                                                                      1. 13

                                                                        But in what world is “dozens” a lot? They’re putting something on the internet for people to read. It has to be read. If you want fewer than “dozens” of people who are at least somewhat interested to read something, then why post it publicly at all?

                                                                        1. 2

                                                                          In the experiment referenced in the article it’s tens of thousands, not dozens.

                                                                          1. 6

                                                                            no, it’s not. the “amplification factor” given is not the number of requests. To get to tens of thousands you’d also need a post that spreads immediately to a vast majority of all existing Mastodon instances (probably some non-Mastodon fediverse software does the same thing, but the overall number is afaik still dominated by Mastodon)

                                                                            1. 2

                                                                              Ah, my bad. It looks like the “traffic amplification of 36704:1” came from 1147 requests, i.e. roughly 100 KB served per preview fetch.

                                                                    2. 8

                                                                      If you’re hosting Mastodon with popular users, putting an HTTP cache in front of it with a modest (e.g. 10 minutes) cache lifetime makes a huge difference in performance. The thundering herd is real.

                                                                      1. 5

                                                                        Users can also apply https://jort.link/ for this purpose, if their admins have not.

                                                                        1. 2

                                                                            I would rather not modify links and trust an extra service to work around something that should be handled by Mastodon itself. (Even if it just redirected the link preview fetch via this service)

                                                                          1. 1

                                                                            I did not mean to suggest that this was a replacement for link preview sharing (though there are problems with link preview sharing; it’s not as if they just don’t want to implement it for no reason). However, it is a useful workaround.

                                                                      2. 8

                                                                        This website, which basically serves static mostly-text content, is 2.4 MB

                                                                        1. 5

                                                                          This sounds like a caching and webserver performance issue

                                                                          1. 4

                                                                              Aw man, they brought back Slashdotting! :) Getting serious 90s vibes right now.

                                                                            1. 2

                                                                              So presumably this also applies to any other systems that use the Mastodon protocol and code as their backend.

                                                                              Like “Truth Social”.

                                                                              1. 3

                                                                                “Truth” social doesn’t federate so it is not affected.

                                                                              2. 2

                                                                                While I do agree that the problems this website is facing can probably be solved in the short-term, I think this might be a symptom of a larger problem, which could manifest itself in a few different ways. For example, if every server is generating its own link previews, then it’s possible the preview could change after being shared. This would basically cause a “split brain” scenario in which some of Mastodon’s users see one version of the preview, and the other group of users see a different version. Basically an accidental A/B test. :)

                                                                                  It sounds like you could exploit this by changing the content and metadata of your shared page after Mastodon users make it go viral, if you were fast enough. Could you create a page that gets shared by a lot of Mastodon users, then “bait-and-switch” to make it look like they shared a completely different thing?

                                                                                1. 1

                                                                                  Perhaps a little harsh, but to be honest, I stopped reading at this point:

                                                                                  [… the platform is not perfect.] Nothing is, unless you are arrogant enough to think of it that way.

                                                                                  The irony is sharp.

                                                                                  1. 1

                                                                                    OpenGraph metadata doesn’t change that often so it seems silly that all these instances need to independently request that data for their own previews.

                                                                                    What if the metadata from links was federated too? That way it’s treated the same way as content from posts is, shared among the network.

                                                                                    I’m probably massively oversimplifying it and I’m not really sure how federation works, but it seems like Mastodon could solve this by just federating the preview data too.

                                                                                    1. 8

                                                                                        Then someone has a bit of fun by making their instance publish fake preview data, and everyone cries about how Mastodon could be so irresponsible as to allow fake headlines to be injected into previews.

                                                                                      1. 1

                                                                                        ah true, well maybe someone smarter than me should figure this out - or maybe we don’t even need previews…?

                                                                                        1. 1

                                                                                          A lot of websites that show little embeds for external links use OpenGraph metadata, so Twitter, Discord, etc. are equally at risk.

                                                                                          1. 4

                                                                                            The proposal I responded to was to not use that, but trust what another instance says about it.

                                                                                            1. 2

                                                                                              Oh, my bad! Totally misread it.

                                                                                            2. 2

                                                                                              twitter is more at risk as they do stupid things on link previews

                                                                                        2. 1

                                                                                          Maybe HTML can add a page preview image similar to favicons. Let the server render the preview image once and share it.

                                                                                          In any case, it’s futile to blame Mastodon. Certainly more productive to trim down the site size.

                                                                                          1. 1

                                                                                                itsfoss.com mostly serves static pages (articles, etc…), so why wouldn’t a CDN solve their problem?

                                                                                            1. 2

                                                                                              […] why would a CDN solve their problem?

                                                                                              Assuming the emphasized text isn’t a typo, this can happen even when the content is static in nature. In itsfoss’s case, they use a CMS to write, host, and serve their content. As such, they need (and already have) caching in front of it to reduce the load of generating content dynamically on each request.

                                                                                                  There are plenty of legitimate reasons to run a site this way, and pretty much every widely read news site uses a CMS+CDN combo. Writers are largely non-technical, or at least not technical in the same sense that Lobsters are; CMSes optimize for their UX.

                                                                                              Many people have suggested a config error may be to blame. itsfoss denies this, but also treats some (methinks) rudimentary suggestions as novel.


                                                                                              For reference, see the comments in the linked article.

                                                                                              1. 1

                                                                                                why wouldn’t

                                                                                                      that is right, I noticed I had a typo; I meant “why wouldn’t” instead of “why would”.