Just to clarify for those reading, this isn’t new technology in any way.
It is, however, new to Next.js and, more generally, the React.js ecosystem, which is what’s exciting.
The barrier to entry for building a Next.js app and deploying it to a cloud provider like Vercel is (I would argue) much lower than building a server-rendered app, setting up a reverse proxy and cache, and then deploying to a server.
This “new approach” is far easier to scale and deploy.
Really appreciate the discussion though :)
Isn’t that just what any reverse proxy, like Varnish, does? Along with comprehensive cache headers & invalidation? Am I missing something?
The draw is that it works with single page applications.
Varnish would cache your empty template, then the browser would load in all the JS & render the site.
From what I understand of Next.js, your single-page app is essentially pre-rendered & cached server-side, so the client gets content immediately; then your SPA takes over from there.
Yeah, that’s a good way of putting it. I suppose with static generation we can deploy a warmed cache version of our site without the server having to do the initial render.
It’s the “little things” I suppose :)
In effect, yes.
Perhaps the “point of innovation” here, however, is that this new feature promotes a “static-first” approach to building Next.js apps. In effect, our entire app could just be some static HTML/CSS/JS files. The server (or serverless functions) only comes into play when a page needs to be regenerated (or its cache invalidated).
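For anyone who hasn’t seen it, the static-first flow is driven by `getStaticProps` returning a `revalidate` interval (Next.js calls this Incremental Static Regeneration). A minimal sketch of the shape of such a page, where `fetchPosts` is a hypothetical stand-in for whatever data source the page actually uses:

```javascript
// Stand-in for the page's real data source (hypothetical).
async function fetchPosts() {
  return [{ title: "Hello, world" }];
}

// In a real app this would be `export async function getStaticProps()`
// in a file under pages/; Next.js calls it at build time and again
// in the background when the cached HTML is considered stale.
async function getStaticProps() {
  const posts = await fetchPosts();
  return {
    props: { posts },
    // Allow the cached page to be regenerated in the background,
    // at most once every 60 seconds, after a request comes in.
    revalidate: 60,
  };
}
```

Until a regeneration finishes, visitors keep getting the previously cached HTML, which is what makes the whole app deployable as static files.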
Same result, but a much easier implementation. Varnish setup is a pain.
I’m not sure it’s exactly the same as a “pre-warmed cache”. Surely that would just be static generation like we’re all used to?
The magic is in the static-first approach, and simple implementation of on-demand server-side rendering.
(Obviously, we JS folks love hyping new releases!)
I think the distinction from a “pre-warmed cache” is “generate a page when it changes and serve that on all requests” vs. “generate a page on demand and save the generated version for future requests”. It’s a time/space tradeoff: you either cache pages that may never be requested, or the first person to request a page has to wait for it to be generated.
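The latter strategy is essentially generate-once, serve-many. A rough sketch of the idea in plain Node (with a hypothetical `renderPage` standing in for an expensive server-side render):

```javascript
// Cache of generated pages, keyed by URL path.
const cache = new Map();

// Stand-in for an expensive server-side render (hypothetical).
function renderPage(path) {
  return `<html><body>Rendered ${path}</body></html>`;
}

// First request for a path pays the render cost;
// later requests are served straight from the cache.
function getPage(path) {
  if (!cache.has(path)) {
    cache.set(path, renderPage(path));
  }
  return cache.get(path);
}

// Invalidation: drop the cached copy so the
// next request regenerates the page.
function invalidate(path) {
  cache.delete(path);
}
```

The “pre-warmed cache” version would instead loop over every known path and call `renderPage` up front, whether or not anyone ever visits those pages.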
isn’t the latter “dynamic programming”?
I think you are thinking of memoization.
I was, yeah. Thanks for that.
No. A lot of server-side stuff never bothers saving generated web pages, on the assumption that they will always be different each time they are fetched. For many types of sites it’s a pretty safe assumption, for example, if you assume that each time someone goes to lobste.rs the state of the front page has changed and needs to be regenerated regardless of who that person is.
Sounds like something we’ve been doing on the CHICKEN wiki for ages; qwiki (and svnwiki before it) will generate an HTML file for the wiki page you’re looking at. If it is edited, the file is deleted and will be auto-recreated on the next request that needs it. Salmonella, our testing infrastructure, has a plugin which does the same for HTML reports of test runs.
That sounds awesome. I’ve definitely heard a number of people suggest that this approach to static-first pages, with regeneration happening in the background, used to be quite common practice years ago. Seems the JS community is only just rediscovering it. Funny how things seem to come back around!
The gitit wiki software does the exact same thing.