Tl;dr: JS frameworks are generally good at what they’re designed to be good at, and trade other things off. The main popular JS frameworks each cover different use cases. Marko is designed for fast multi-page apps (MPAs) and is very good at that at the cost of basically everything else, especially things that go with popularity. There’s also an in-depth investigation of what makes MPAs fast or slow and how that plays out in React.
Totally.
I’m not sure I agree with every point in the article but the broad strokes here are obviously correct: if you’re building an extremely performance-sensitive MPA that caters to even the lowest-end Android devices, you would be wise to stay away from React. It’s just the wrong tool for that job. That may change with RSC, but no one should bet their business on that just yet.
That said, React is absolutely a great tool for the most performance sensitive desktop SPA.
Is it the right tool for an extremely performance sensitive mobile SPA? I don’t know if anyone knows the answer to that.
Is it the right tool for performance-sensitive SPAs compared with something like Svelte or hand-rolling your own MVC without a virtual DOM?
React’s main purpose, from what I’ve been able to figure out, is to keep teams from clobbering each other’s render loops.
Yes
That’s just not important.
I’m sort of a broken record on this site’s comments section but I’ll keep saying it:
Modern React has proven to be an amazing tool for creating one of the largest and most performance-sensitive SPAs on the planet, facebook.com. Despite the out-of-context quote from Dan Abramov in the article, Meta was very happy with React and did not regret rewriting the site using it. React did not fall short, even if it has room for improvement.
Whether you choose React, Svelte, or whatever, you almost certainly should NOT be doing it based on performance, and the performance of your app will be dictated far, far more by other, unrelated technical decisions you make than by which UI framework you chose.
I’ve never really been a “frontend guy” and it’s been a few years since I’ve worked with React at all, so I could be totally off-base.
But hasn’t one of the complaints with React historically been that it’s too easy to re-render parts of the page much more often than necessary? So, even if “correct” React is going to be performant enough for almost any web site, could it be a valid decision to choose a different framework so that it’s harder to get bad performance?
In my experience yes. React is not a “pit of success” framework and the typical team is going to end up with an app that is not performant.
The re-renders are just one of many problems caused by the component-tree design. Child components end up being impacted by all of their ancestor components in ways that a non-React app would not be.
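To make the re-render point concrete, here’s a minimal sketch (the component names and data are made up for illustration): by default, any state change in an ancestor re-renders the whole subtree beneath it, and you have to opt out explicitly with something like memo().

    import { useState, memo } from "react";

    // By default this re-renders whenever any ancestor re-renders,
    // even though its props never change.
    function ExpensiveList({ items }) {
      return (
        <ul>
          {items.map((item) => (
            <li key={item}>{item}</li>
          ))}
        </ul>
      );
    }

    // Opting out takes an explicit memo(): skip re-rendering when props are shallow-equal.
    const MemoizedList = memo(ExpensiveList);

    const ITEMS = ["alpha", "beta", "gamma"]; // stable reference so the shallow compare passes

    export default function SearchPage() {
      const [query, setQuery] = useState("");
      // Every keystroke re-renders SearchPage; without memo() the list would re-render too.
      return (
        <>
          <input value={query} onChange={(e) => setQuery(e.target.value)} />
          <MemoizedList items={ITEMS} />
        </>
      );
    }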
The components have to load and be executed top-down, but on most pages the most important thing is in the very middle. If you fetch data higher in the tree (e.g., a username in the top bar) you need to start the fetch but let rendering continue, or it will cause a waterfall. But even if you do that correctly, all of the JS still has to be parsed and executed. You can server-side render, but hydration is top-down, so the content is still not prioritized for interactivity. The way providers are used in React, coupled with the routers, means a ton of code that isn’t even needed for the current page can easily end up blocking the render.
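Here’s a rough sketch of that fetch point, with a made-up endpoint and components: the anti-pattern blocks the whole page on a top-bar detail, while the fix starts the request and keeps rendering the content that actually matters.

    import { useEffect, useState } from "react";

    // Placeholder components just so the sketch is self-contained.
    const Spinner = () => <p>Loading…</p>;
    const TopBar = ({ username }) => <header>{username}</header>;
    const MainContent = () => <main>The part of the page people came for</main>;

    // Anti-pattern: nothing renders until the username request finishes.
    function AppWaterfall() {
      const [user, setUser] = useState(null);
      useEffect(() => {
        fetch("/api/me").then((r) => r.json()).then(setUser);
      }, []);
      if (!user) return <Spinner />; // the whole page waits on a top-bar detail
      return (
        <>
          <TopBar username={user.name} />
          <MainContent />
        </>
      );
    }

    // Better: start the fetch, but let the important content render immediately.
    function App() {
      const [user, setUser] = useState(null);
      useEffect(() => {
        fetch("/api/me").then((r) => r.json()).then(setUser);
      }, []);
      return (
        <>
          <TopBar username={user ? user.name : "…"} />
          <MainContent /> {/* does not wait on the username */}
        </>
      );
    }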
Concurrent mode, Suspense, islands, signals, server components, etc. are all attempts to solve parts of these problems, but they are incredibly complex solutions to what seems like a core impedance mismatch.
Yes, it’s designed primarily around functional programming concepts, not optimal rendering strategies.
No, that wouldn’t be a valid decision, because render performance is just not likely to be a performance bottleneck for your app.
Again, react is good enough for one of the most complex web apps in the world used by hundreds of millions of people every day.
If you have a reason to think your app is more performance sensitive than facebook.com, or has an entirely different performance story that react doesn’t serve well (such as the article we’re commenting on), then maybe.
For everyone else: re-render performance will not be your bottleneck.
I think React is fine for a lot of situations, and there are considerations other than performance that matter to people, but also for a small SPA, the sheer size of React adds a performance penalty that you’re not going to make up.
For example, I have a small, stupid password picker SPA at https://randpwd.netlify.app. (Code at https://github.com/carlmjohnson/randpwd.) It’s a self-contained page with all the SVGs, CSS, and JS inlined, and it comes in at 243 KB uncompressed. Webpagetest.org says it loads in 1.5 seconds on 4G mobile.
You just cannot get that kind of performance with React. Which is fine, because that app is trivial, and a medium to large React app will make it up elsewhere, but I think a lot of people are making what are actually very small SPAs once you look past the framework, and losing a lot of performance because they don’t realize the tradeoff they’re making.
For starters, react is only adding 40 KB of gzipped download to your bundle. I don’t care what it is uncompressed. Even on 4G that isn’t a huge deal.
Second, if you’re worried about performance you obviously are using ssr, so react does not block delivering and rendering your app, 4g or not. It only blocks interactivity.
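As a rough sketch of what that looks like in practice (assuming an Express server and a bundler for the JSX; the file names are invented): the server sends fully rendered HTML, and the React bundle only comes into play afterwards to hydrate it and attach event handlers.

    // server.jsx: the user gets real markup on first paint, before any React JS runs.
    import express from "express";
    import { renderToString } from "react-dom/server";
    import App from "./App";

    const server = express();
    server.get("/", (req, res) => {
      const html = renderToString(<App />);
      res.send(`<!doctype html>
    <html>
      <body>
        <div id="root">${html}</div>
        <script src="/client.js" defer></script>
      </body>
    </html>`);
    });
    server.listen(3000);

    // client.jsx: hydration only adds interactivity to the markup that is already visible.
    import { hydrateRoot } from "react-dom/client";
    import App from "./App";

    hydrateRoot(document.getElementById("root"), <App />);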
If react is a non-trivial percent of your bundle and you’re trying to optimize the shit out of time-to-interactive for some reason, then yeah maybe it’s an issue. There are reasons I would sometimes not use react on mobile but payload size isn’t at the top of the list.
The page I’m talking about, which includes various word lists, is only 95 KB gzipped, so that would be a pretty heavy penalty by percentage. Also, 40 KB of JS is much “heavier” than 40 KB of JPEG, since it needs to be parsed and executed. The waterfall for the page shows 1.2s spent waiting on the download, and then 0.3s spent executing and rendering everything else. My guess, without doing the experiment, is that a Next.js equivalent would take at least 2s to render an equivalent page.
Well yeah. This is the argument. The argument is that React makes it hard to optimize the shit out of TTI because you start out in a hole relative to other solutions. :-) A lot of the time you don’t actually care about optimizing the shit out of things, but when you do, you do.
You say “waiting on the download”, but there’s no waiting. The user gets the SSR payload as fast as the browser can download and render it, right?
Not really? Have you optimized a site with webpagetest before? There are lots of factors that influence how quickly a site hits the various milestones like TTI, LCP, onload, etc. It’s a whole discipline, but the basic idea is to arrange the waterfall so that as few things as possible are blocked and as many as possible are happening concurrently, and there’s still a lot of room for experimenting with different trade-offs. Even with SSR, you can screw everything up if you have render-blocking JS or CSS or fonts in the wrong place.
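For what it’s worth, the usual culprits live in the document head. A hand-wavy sketch (file names invented) of the difference between blocking and non-blocking resources:

    <head>
      <!-- CSS is render-blocking by design; keep it small and inline the critical part if needed -->
      <link rel="stylesheet" href="/app.css">

      <!-- Fonts: preload so text isn't held up by late discovery in the CSS -->
      <link rel="preload" href="/font.woff2" as="font" type="font/woff2" crossorigin>

      <!-- JS: defer/async keeps scripts off the critical rendering path -->
      <script src="/bundle.js" defer></script>

      <!-- A bare <script src="/bundle.js"></script> here would block rendering until it downloads and executes -->
    </head>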
Sort of? I’ve been doing perf work for decades but mostly in enterprise, so yes on intricate low-level perf work but no to webpagetest.
Sure, but render-blocking resources have nothing to do with React. As in, React neither makes that problem easier nor harder.
We probably agree but don’t realize it. I’m just trying to make it so that someone following along in this comments section can understand the correct take-away here[^1], which is that React does NOT put you in a hole that’s hard to climb out of if you’re building an SPA, not even when catering to mobile users.
[^1]: because a lot of real harm is being done by write-ups that make it sound like using React is a terrible performance decision, as you can see from even just the other comments on this article.
I’m not sure I’d call Facebook super performance sensitive on the front-end, compared to, say, a WebGL game or a spreadsheet. At least, that was my impression as a production engineer there.
Sure, there’s definitely a class of application that is on a whole other plane of performance optimization that doesn’t even relate to DOM-based web APIs, like games, spreadsheets, code editors (e.g. Monaco Editor), design tools (e.g. Figma), etc. That’s a different kind of “performance sensitive.”
When I say “performance sensitive” I mean “there’s a lot of money on the line for performance and small gains can be measured and have high monetary reward, and dozens of brilliant engineers are being paid to worry about that.” I don’t mean the app itself is hard to make performant. facebook.com is actually VERY not sensitive to performance regressions in that sense: people want their dopamine squirt and are willing to put up with a decent bit to get it.