Maybe I don’t fully understand the architecture here, but isn’t the 5000% instant scaling happening because Cloudflare’s servers sit in front of the IPFS gateway? The author seems to attribute the “spike” resistance to IPFS, when it looks to me like it’s due to the Cloudflare CDN. Put another way, serving the same content from a “regular” web server running nginx would have done the same thing.
That’s my understanding as well. In fact, even if all 6,000 page views occurred within a two-hour window, that’s still less than one page view per second. Great news for the author, but nothing challenging for a web server, especially with static content.
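As a back-of-envelope check (using the 6,000 views and the two-hour window from the comment above, the worst case; spread over a day it's even lower):

```python
# Sustained request rate if all 6,000 page views landed
# in a single two-hour window (numbers from the comment above).
page_views = 6000
window_seconds = 2 * 60 * 60  # two hours

rate = page_views / window_seconds
print(f"{rate:.2f} requests/second")  # ~0.83 req/s
```

Well under one request per second, so any static file server idles through it.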
IPFS does look cool though and I should probably try it someday.
It would seem like just putting this static website on GitHub Pages would be much simpler than the five-step process involving GitHub, Azure, and Cloudflare. Otherwise, the smallest public cloud VM running nginx should be able to handle 5,000 views a day, especially with Cloudflare sitting in front.
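For a sense of scale: even Python’s bundled HTTP server, with no tuning at all, serves static files at this rate without breaking a sweat. A sketch only (the `public` directory, the demo page, and the self-request are assumptions for illustration):

```python
# Minimal static-site server using only the standard library --
# a sketch to suggest how little horsepower ~5,000 views/day needs.
import pathlib
import threading
import urllib.request
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# Create a tiny demo site (assumption: your content lives in ./public).
pathlib.Path("public").mkdir(exist_ok=True)
pathlib.Path("public/index.html").write_text("<h1>hello</h1>")

# Serve ./public on any free port, in a background thread.
handler = partial(SimpleHTTPRequestHandler, directory="public")
server = ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One demo request to show it works.
port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read()
print(body.decode())  # <h1>hello</h1>
server.shutdown()
```

If even this handles the load comfortably, a proper nginx on the smallest VM certainly will.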
Or just put it in an S3 bucket and be done with it
I don’t think that’s really the point of the blog post, guys.
I really don’t understand the appeal of IPFS in its current state. When all devices and browsers can natively speak IPFS – sure. But right now? It’s just a kludge that stores a copy of your data on someone else’s server and then funnels requests through something like Cloudflare. It’s not really better; it’s more complex and prone to failure.
First of all, there’s a chicken-and-egg problem with your question – browsers etc. won’t support IPFS natively until there’s some critical mass of people using it. So by using it now, you’re helping to build that future. A.k.a. investing.
Other than that, I believe it could be a nice approach to achieving data redundancy, hopefully also for backups (especially on my own hardware – I could have a few servers, each with a copy). That said, I personally don’t use it yet either; I’ve tried it a few times, but still found it too kludgy.