Tl;dr js frameworks are generally good at what they’re designed to be good at, and trade other things off. The main popular js frameworks each cover different use cases. Marko is designed for fast multi-page apps (MPAs) and is very good at that at the cost of basically everything else, especially things that go with popularity. There’s also an in-depth investigation of what makes MPAs fast or slow and how that plays out in React.
I’m not sure I agree with every point in the article but the broad strokes here are obviously correct: if you’re building an extremely performance sensitive MPA that caters to even the lowest-end Android devices you would be wise to stay away from using React. It’s just the wrong tool for that job. That may change with RSC but no one should bet their business on that just yet.
That said, React is absolutely a great tool for the most performance sensitive desktop SPA.
Is it the right tool for an extremely performance sensitive mobile SPA? I don’t know if anyone knows the answer to that.
Is it the right tool for performance sensitive SPAs
Yes
compared with something like svelte or hand rolling your own mvc without a shadow dom?
That’s just not important.
I’m sort of a broken record on this site’s comments section but I’ll keep saying it:
Modern React has proven to be an amazing tool for creating one of the largest and most performance sensitive SPAs on the planet, facebook.com. Despite the out-of-context quote from Dan Abramov in the article, Meta was very happy with React and did not regret rewriting the site using it. React did not fall short, even if it has areas of improvement.
Whether you choose React, Svelte, or whatever you almost certainly should NOT be doing it based on performance, and the performance of your app will be dictated by other unrelated technical decisions you make far, far more than which UI framework you chose.
I’ve never really been a “frontend guy” and it’s been a few years since I’ve worked with React at all, so I could be totally off-base.
But, hasn’t one of the complaints with React historically been that it’s too easy to re-render part of pages much more than necessary? So, even if “correct” React is going to be performant enough for almost any web site, could it be a valid decision to choose a different framework so that it’s harder to get bad performance?
In my experience yes. React is not a “pit of success” framework and the typical team is going to end up with an app that is not performant.
The re-renders are just one of many problems caused by the component tree design. Child components end up being impacted by all their ancestor components in ways that a non React app would not be.
The components have to load and be executed top down. But on most pages the most important thing is in the very middle. If you fetch data higher in the tree (e.g. a username in the top bar) you need to start the fetch but let rendering continue or it will cause a waterfall. But even if you do that correctly, all of the JS still has to be parsed and executed. You can server-side render, but hydration is top down so the content is still not prioritized for interactivity. The way providers are used in React coupled with the routers means a ton of code can easily end up blocking the render that isn’t even needed for the current page.
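(For concreteness: the usual mitigation for that last point is route-level code splitting, so code for other pages stays out of the blocking path. A rough sketch, with made-up component names:)

```tsx
// Hedged sketch: React.lazy + Suspense defer loading a page's chunk until
// that route actually renders. FeedPage/SettingsPage are illustrative names.
import { lazy, Suspense } from "react";

const FeedPage = lazy(() => import("./FeedPage"));
const SettingsPage = lazy(() => import("./SettingsPage"));

export function App({ route }: { route: string }) {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      {route === "/settings" ? <SettingsPage /> : <FeedPage />}
    </Suspense>
  );
}
```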
Concurrent mode, suspense, islands, signals, server components, etc. are all attempts to solve parts of these problems but they are incredibly complex solutions to what seems like a core impedance mismatch.
No, that wouldn’t be a valid decision, because render performance is just not likely to be a performance bottleneck for your app.
Again, react is good enough for one of the most complex web apps in the world used by hundreds of millions of people every day.
If you have a reason to think your app is more performance sensitive than facebook.com, or has an entirely different performance story that react doesn’t serve well (such as the article we’re commenting on), then maybe.
For everyone else: re-render performance will not be your bottleneck.
I think React is fine for a lot of situations, and there are considerations other than performance that matter to people, but also for a small SPA, the sheer size of React adds a performance penalty that you’re not going to make up.
For example, I have a small, stupid password picker SPA at https://randpwd.netlify.app. (Code at https://github.com/carlmjohnson/randpwd.) It’s a self-contained page with all the SVGs, CSS, and JS inlined, and it comes in at 243 KB uncompressed. Webpagetest.org says it loads in 1.5 seconds on 4G mobile.
You just cannot get that kind of performance with React. Which is fine, because that app is trivial, and a medium to large React app will make it up elsewhere, but I think a lot of people are making what are actually very small SPAs once you look past the framework, and losing a lot of performance because they don’t realize the tradeoff they’re making.
For starters, react is only adding 40kb of gzipped download to your bundle. I don’t care what it is uncompressed. Even on 4g that isn’t a huge deal.
Second, if you’re worried about performance you obviously are using ssr, so react does not block delivering and rendering your app, 4g or not. It only blocks interactivity.
If react is a non-trivial percent of your bundle and you’re trying to optimize the shit out of time-to-interactive for some reason, then yeah maybe it’s an issue. There are reasons I would sometimes not use react on mobile but payload size isn’t at the top of the list.
react is only adding 40kb of gzipped download to your bundle
The page I’m talking about, which includes various word lists, is only 95 KB gzipped, so that would be a pretty heavy penalty by percentage. Also 40 KB of JS is much “heavier” than 40 KB of JPEG, since it needs to be parsed and executed. The waterfall for the page shows 1.2s spent waiting on the download, and then 0.3s spent on executing and rendering everything else. My guess without doing the experiment is that a Next.js equivalent would take at least 2s to render an equivalent page.
you’re trying to optimize the shit out of time-to-interactive for some reason
Well yeah. This is the argument. The argument is that React makes it hard to optimize the shit out of TTI because you start out in a hole relative to other solutions. :-) A lot of the time you don’t actually care about optimizing the shit out of things, but when you do, you do.
Not really? Have you optimized a site with webpagetest before? There are lots of factors that influence how quickly a site makes it to various benchmarks like TTI, LCP, on load, etc. It’s a whole thing, but the basic thing is to arrange the waterfall so as few things as possible are blocked and as many as possible are happening concurrently, but there’s still a lot of room for experimenting to make different trade offs. Even with SSR, you can screw everything up if you have render blocking JS or CSS or fonts in the wrong place.
Have you optimized a site with webpagetest before?
Sort of? I’ve been doing perf work for decades but mostly in enterprise, so yes on intricate low-level perf work but no to webpagetest.
Even with SSR, you can screw everything up if you have render blocking JS or CSS or fonts in the wrong place.
Sure but that has nothing to do with React. As in, React neither makes that problem easier nor harder.
We probably agree but don’t realize it. I’m just trying to make it so that someone following along in this comments section can understand the correct take-away here[^1], which is that React does NOT put you in a hole that’s hard to climb out of if you’re building an SPA, not even when catering to mobile users.
[^1]: because a lot of real harm is being done by write-ups that make it sound like using React is a terrible performance decision, as you can see from even just the other comments on this article.
I’m not sure I’d call Facebook super performance sensitive on the front-end, compared to, say, a WebGL game or a spreadsheet. At least, that was my impression as a production engineer there.
Sure, there’s definitely a class of application that is on a whole other plane of performance optimization that doesn’t even relate to DOM-based web APIs, like games, spreadsheets, code editors (e.g. Monaco Editor), design tools (e.g. Figma), etc. That’s a different kind of “performance sensitive.”
When I say “performance sensitive” I mean “there’s a lot of money on the line for performance and small gains can be measured and have high monetary reward, and dozens of brilliant engineers are being paid to worry about that.” I don’t mean the app itself is hard to make performant. facebook.com is actually VERY not sensitive to performance regressions in that sense: people want their dopamine squirt and are willing to put up with a decent bit to get it.
I’ll admit that what annoys me the most about the Javascript Ecosystem isn’t any particular individual tool. It’s that:
There are so dang many of them.
They change so much that coming to a project after a 3 month break means you probably have several days of debugging tool issues before you can get started.
Nue doesn’t address problem #1. Does it address problem #2 enough to warrant violating problem #1?
Note: I’m not being critical of you working on this. Building your own tools is fun; I do it all the time. It’s more to inform whether I should be interested in taking a look at it or not.
Is 2 still true? It definitely has been a huge problem historically, but I feel like things have gotten better and I’m not fighting my tools quite as much anymore.
That said, Vue 2 to 3 was a huge PITA with a ton of churn. I had to drop vue-cli and switch to Vite just to get a decent build story as part of the upgrade path. Someone should tell the JS world that ideally the number of semver major version bumps is zero; in the JS ecosystem it’s instead expected that you break compatibility every year or two for some reason.
nvm or Volta has been used to specify Node versions for every project I’ve either started or joined in at least the last five years. Before that, I saw people lose days battling mysterious build errors only to find they weren’t using the magically correct version of Node. I wish a version manager and integration with the engines field (which really ought to be created by default for apps) was built into Node.
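For reference, this is roughly what pinning looks like in package.json today; the engines field is enforced by some package managers, and Volta reads its own field (the versions below are just examples):

```json
{
  "engines": { "node": ">=20" },
  "volta": { "node": "20.11.1" }
}
```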
Go has added that for the most recent version, and it makes a lot of sense to me. It’s sort of odd how it’s been outsourced to random other tools like NVM and whatnot.
The forward compat issues usually arise around modules with binary deps where the locked version doesn’t build against a newer node version, or more rarely, it doesn’t provide binary versions and the source install doesn’t work on newer versions of Xcode on Mac due to e.g. clang setting more warnings as errors.
#2 relates to project maturity. Nue is definitely not mature and a 3 month delay would probably break things (yet to see). React and Vue are obviously more stable.
I hear you. JavaScript in particular suffers from package fatigue. For the answers:
It’s impossible to address the #1 problem with a new project. This is a chicken-egg problem. Something needs to be done to fix the bloated, complex situation of the frontend developer. It has to be something very different, but not a new project. I don’t really know how to address #1 without starting from scratch.
Not possible to address problem #2 without violating #1. Chicken-egg speaking here as well.
As someone emotionally invested in the web, who learned a lot from and liked Crockford in my early education, and who thinks Svelte and TypeScript might be my BFFs, this doesn’t land. But technically maybe, because TypeScript has largely supplanted JS in developer polls, so I’m crossing my fingers for something like an optimized subset of TS, or some AI-rewrite-it-in-Rust/WASM future, or something else that doesn’t challenge my technology choices, which are working great for UX and DX.
This is a really silly conversation and I’m sad to see it keep coming up over and over again. The people building huge, performant web apps with hundreds of millions of users aren’t having this conversation.
The things that are slow about React and JavaScript applications are not bottlenecks in React or JavaScript.
I’ve done a lot of web performance work and it’s always the same shit on the UI side:
No preloading: no data is requested until the page loads. Maybe there’s some basic preloading but it’s not based on static analysis and it sucks.
No SSR: way too much JS is sent down. Most of it not needed for page load.
No data request batching: lots of UI loading triggers lots of data requests.
Data over-fetching is rampant. GraphQL is improperly used.
Waterfalls: UI fetches data which reveals more dependencies which reveals more dependencies.
The SPA is really lots of little SPAs stitched together. The page reloads too often setting perf back.
Lack of telemetry: it’s trivial to regress performance in ways no one notices if you’re not measuring it.
etc
React doesn’t cause (or even encourage) any of this. The ecosystem and engineering happening around React are often garbage (at least in open source) and that’s the only reason these problems exist.
Look at how facebook.com, rebuilt in 2020 to be a modern SPA, works:
As soon as the document HTTP GET request comes in for facebook.com, the backend knows all data that will be requested. No JS has to be parsed by the client before data processing and database requests start. All data is preemptively sent down.
when some user interaction results in more UI being loaded, UI with data requirements, that data can be sent down along with the JS, not in response to the JS loading. And there’s nothing stopping that fetch from starting preemptively, e.g. if a user mouses over a button the server can send down all the JS and data needed to render the UI when the user clicks the button.
most of the page is server-side rendered and hydrated on the client
GraphQL requests can always be batched. No matter what UI is going to show up, it only needs one query.
Relay + colocating fragments defends against data over-fetching
There are no waterfalls. All JS and data dependencies are known up front and batched together. There is no dynamic, runtime discovery of dependencies.
It’s a true SPA. The page doesn’t need to reload.
The telemetry means that if significant performance regressions happen anywhere, someone will know about it. Usually before the code ships to production.
React doesn’t get in the way of doing that kind of good engineering. In Meta’s case, React made all the good behavior above easier to accomplish. If you do UI work and you use React, don’t worry: you haven’t been bamboozled. If your SPA performance sucks it’s not because you’re using React or because you built an SPA.
One of Russell’s main claims is that React is being served to millions of users with low-powered devices for websites that are neither huge nor well-engineered. He talked about how Facebook has the resources to do it well, but most don’t, and there’s the snake oil. I think the case can get overstated, and that’s what this article addresses, but this sounds dismissive of a vast number of the world’s mobile users, and sounds like a palliative to the people building websites that uncritically and unnecessarily import hundreds of kbs of JS, which is very slow to parse on these devices.
If an engineering team, with any amount of resources, doesn’t care about performance, then no technology stack will help. The performance of their app will suck.
There are tools to drastically improve the performance of applications that no one uses. For example, Meta open sources Relay which is an absolute gem and no one uses it. Meta publishes specs and gives conference talks about GraphQL best practices. A lot of the engineering that went into the facebook.com rewrite has been open sourced and discussed publicly.
Snake oil is something that doesn’t work. The modern SPA tech stack is amazing and works. These people aren’t snake oil salesmen: they are philanthropists giving away gold that really works.
If you take most React SPAs and do the 5 most impactful things to improve performance, none of those things will require engineering resources your team doesn’t have and none of them will be switching away from React or SPAs. You just have to prioritize doing it. The economics of performance have to work. Etc. The tech stack isn’t the problem.
If an engineering team, with any amount of resources, doesn’t care about performance, then no technology stack will help. The performance of their app will suck.
Technology stacks can help here. They dictate what is easy and what is hard.
W.r.t. edge caching, you can rarely cache your db “at the edge” (fwiw I hate that term).
W.r.t. being bounded by the slowest response: you can stream JSON data down. At Meta a component that wants a list of things can request to have it streamed in. The http response is flushed as data is generated.
EDIT: I should also add that it’s important you don’t block JS delivery and parsing on data fetching. This is very app specific but if you want a quick primer on best practices google “react render-as-you-fetch.” The tldr is don’t block UI on data fetching and don’t block data fetching on UI.
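A hedged sketch of the difference, using React 19’s use() for brevity (older setups use a Suspense-aware cache instead; the fetch URL and names are illustrative):

```tsx
// Render-as-you-fetch: the request starts before any component renders, so JS
// parsing/execution and data fetching overlap instead of waterfalling.
import { Suspense, use } from "react";
import { createRoot } from "react-dom/client";

const userPromise = fetch("/api/me").then((r) => r.json()); // starts immediately

function Profile({ user }: { user: Promise<{ name: string }> }) {
  const me = use(user); // suspends until the already-in-flight fetch resolves
  return <h1>{me.name}</h1>;
}

createRoot(document.getElementById("root")!).render(
  <Suspense fallback={<p>Loading…</p>}>
    <Profile user={userPromise} />
  </Suspense>
);
```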
W.r.t. batching, for example you might have a modal dialog that is powered by a tree of 60 components and every single one wants to know something about the user. Your GraphQL fragments are colocated with each component and roll up into one query. Fetching and returning the user data once in the query response is much faster than returning the user data 60 times.
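Roughly what that colocation looks like with Relay (the field and component names here are made up; the Relay compiler merges every spread fragment into the one query for the surface):

```tsx
// Hedged sketch of a colocated fragment: the component declares exactly which
// user fields it needs, and those requirements roll up into the parent query.
import { graphql, useFragment } from "react-relay";

export function Avatar({ userRef }: { userRef: any }) {
  const user = useFragment(
    graphql`
      fragment Avatar_user on User {
        name
        profilePicture {
          url
        }
      }
    `,
    userRef
  );
  return <img src={user.profilePicture.url} alt={user.name} />;
}
```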
Additionally, batching requests allows you to roll them up into compiled and persisted queries. When a person hovers that button and the client queries the server, Relay only needs to send the id of the query to the backend. This helps with reducing bytes over the wire, the client doesn’t need to build a complex query, and your backend GraphQL server doesn’t have to respond to arbitrary queries which can help with ddos. Batching graphql has huge advantages.
We cache GraphQL requests at the edge all of the time. It saves a lot of time and money. It has nothing to do with a database.
you can stream JSON data down
If graphql clients can parse a partial JSON response and hydrate a component, then I agree this solves it. However, I am pretty sure clients like apollo client wait for the entire response before parsing. In this latter case, streaming doesn’t do much if one query is taking significantly longer than the other queries. Maybe you make sure your batched queries have relatively uniform response times. I have had to resolve many issues where this was not the case.
batching requests allows you to roll them up into compiled and persisted queries
You can compile and persist queries without batching too. If you do it with batching, you had better make sure your batches are consistent. I have seen architectures batch on demand which actually prevents the compile + persist benefits.
fwiw Relay actually also supports inverting things and you can defer a portion of the graphql response, making it asynchronously sent down later. Useful with React Suspense.
But we’re really in the weeds of specific use cases. The DEFAULT for anyone that cares about performance should be batched, persisted graphql queries (that start loading in the initial document GET) for anything rendering on page load. Full stop. It’s not an anti-pattern. Anything else is almost always much slower.
It’s funny because as I remember it, React was hyped precisely because it was faster than the alternatives. At the time, that might actually have been true: in frameworks like Marionette.js, Angular and EmberJS, large-scale updates would be very, very slow unless you could somehow manually aggregate the updates and calculate what really needed to be repainted. React promised to solve this problem with its virtual DOM, which meant only things that were actually changed would get repainted, automatically.
Unfortunately, even with React it’s easy to accidentally trigger lots and lots of updates, especially when you use abstractions to help manage state. But it did show the way forward with JSX templating, virtual DOM and the reactive style of programming. Also note that the alternatives like Svelte, Vue and Preact which are mentioned in the original article came out after React.
The original selling point of react was not having a disconnect between the state your code believed the DOM was in, and the state it actually was in. The performance angle was just bragging that they’d managed to get an existing paradigm (call it immediate rather than retained mode for simplicity) working at useable performance atop a retained-mode reality (the DOM) using their v-dom.
Coming from a Rails dev, this sounds like it could have been written about Rails 10 years ago.
Maybe the question is:
What is it exactly that make programmers feel more productive with these technologies and what about this tech makes their perf bad? Is it a fundamental trade off between the two or can the gap be bridged?
The way this is written it sounds like the ability to make lots of updates is a productivity gain but also the main cause of perf issues.
If that’s the case it sounds like the answer won’t be found by incremental updates but by a paradigm shift. Which then begs the question. What does that look like and what’s the least painful way to get there?
Is it a fundamental trade off between the two or can the gap be bridged?
Perhaps it’s because an inventor can only really focus on one, maybe two major breakthroughs at a time. React was focused on the virtual DOM and JSX templating, which were already paradigm shifts from the way things were done before. But React was also relatively low-level (not dealing with how to manage state), so I think the tools that were built on top to make it more productive/palatable are the things that made code so slow.
Only when tools start to be used at a different scale, and to make fundamentally different kinds of applications than the original team was working on, do you hit the limits. Even if a tool works perfectly to solve a particular problem a particular team was having, if other teams pick it up they will run into such things. Then the question is: is it worthwhile to try to improve the old tool, or build a new tool that integrates the worthy new ideas while focusing on a paradigm shift of its own?
Or maybe, just maybe, both are fine? I mean, to this day, people are still using Rails, and Rails has improved some of its worst parts while incorporating changes from other frameworks that came after. And there are newer frameworks that take things in entirely different directions, even dropping the approach Rails takes in some aspects.
React was focused on the virtual DOM and JSX templating, which were already paradigm shifts from the way things were done before. But React was also relatively low-level (not dealing with how to manage state), so I think the tools that were built on top to make it more productive/palatable are the things that made code so slow.
JSX and vDOM are precisely the things that make React slow. HTML parsing in browsers is insanely fast. Moving it to JS makes it orders of magnitude slower.
I quoted JSX as a paradigm shift relative to the insecurity of string templating (see also this recent post) and the clumsiness of the native APIs (or even jQuery) for building HTML elements programmatically from JS.
The vDOM did make things faster when lots of things were updated - in those cases parsing isn’t the bottleneck, but the repainting of updated elements is. That was the whole point of React.
I wish there were a popular nonprofit alternative I could recommended.
Why is that? SO has made the lives of so many programmers better, myself included. Surely it’s a good thing that the founders are able to profit by it?
It’s important enough that it shouldn’t be in the hands of a couple people. If it’s a company, it can be bought; if it can be bought, it can be destroyed. Look at freenode.
At least the data/content is licensed and made available in such a way to disincentivise any stewards of the site, current or future, from doing too terrible a job of it.
You’re commenting on a free site about a post someone wrote for free about maintaining software for free for nearly a decade and (my understanding of) your takeaway is that only extraction capitalism can possibly create sustained value.
Meanwhile musk is burning for-profit twitter to the ground and I can’t take my kids to Toys R Us anymore because hedge firms exist.
Every town in the Western world has churches which are non-profit entities. Whether you think they’re good or bad, they definitely exist and aren’t explicitly profit seeking. There are lots of ways to make a sustainable endeavor. The for profit corporation is one, but there are others.
I’m aware that it exists, and also that it would be the major exception anybody could mention. Stack Overflow is not going to run on the “nag everybody whose details we have for donations until the heat death of the universe” model.
Is your argument that Stackoverflow won’t or that it’s impossible and nobody should try? Because the OP said alternative, arguing against the latter, and the existence of Wikipedia stands to me as a beacon of what’s possible, inviting us to imagine and build better.
Ahhh, I love learning about graph algorithms. (A fun Discrete Math For Programmers class once suckered me into double-majoring in math, until I found myself in a mind-flaying hellscape of tensors and finite fields and had to retreat to CS.)
I am confused about the use of lazy algorithms here, because they don’t work when a dynamic value subscriber has side effects … and in an app the ultimate purpose of reactivity is to update things onscreen. If a reactive value is driving an HTML element, it has to be updated eagerly.
On the lazy algorithms, I think the integration with a UI library calls .set() (like in the lit integration) so the lazy graph gets driven in an eager context, but only the rendered parts of the state graph will be computed, and unused parts of the state graph remain lazily un-computed.
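To make the lazy/eager split concrete, here’s a hedged sketch using @preact/signals-core as a stand-in (not the library being discussed, but its computeds are also lazily evaluated):

```ts
import { signal, computed, effect } from "@preact/signals-core";

const items = signal([1, 2, 3]);

// Lazy: nothing runs here; `total` is just a node that depends on `items`.
const total = computed(() => items.value.reduce((a, b) => a + b, 0));

// Eager edge: the effect stands in for the UI integration. It pulls `total`,
// which is evaluated only now. A computed that nothing reads never runs.
effect(() => {
  console.log("total:", total.value);
});

items.value = [1, 2, 3, 4]; // marks `total` dirty and re-runs the effect
```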
We have the types Media and MediaElement, which both have id: string properties. These two types were often used together, and it was surprisingly very easy to mistakenly give the media element’s ID when the media’s ID was expected and vice versa.
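A hedged sketch of the usual TypeScript fix, branding the two ID types so the compiler catches a swap (the names mirror the quote but the shapes are invented):

```ts
// Both IDs are still plain strings at runtime, but the brands make them
// mutually unassignable at compile time.
type MediaId = string & { readonly __brand: "MediaId" };
type MediaElementId = string & { readonly __brand: "MediaElementId" };

interface Media { id: MediaId; title: string }
interface MediaElement { id: MediaElementId; mediaId: MediaId }

declare function getMedia(id: MediaId): Media;
declare const element: MediaElement;

getMedia(element.mediaId);  // ok
// getMedia(element.id);    // compile error: MediaElementId is not a MediaId
```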
This is the first time I’ve seen a lawyer argue that a license that differentiates between commercial and non-commercial use is a thing that you can do. The other advice I’ve read suggests that it’s too fraught with corner cases to be allowed. For example:
If I use a music player to listen to music on my headphones while I’m working, is that a commercial use?
If I put ads on my blog, is it a commercial use of the text editor that I use to write the entries or the CMS that I use to host it?
If a company uses the software for a purpose that is not connected to their commercial activities, is that a commercial use?
If I use the software for some free project and someone decides later to give me a donation to support my work, am I now violating the license?
Does a fee-paying university count as a commercial use? What if it’s also a registered charity?
Splitting the world into commercial and non-commercial activities is normally a problem that lawyers say is too hard.
In their experience there weren’t many conflicts over the definition. So I guess (like was recently said about engineering) “commercial use is like pornography, I know it when I see it” – and that’s good enough.
In all honesty, the whole software licensing idea is a bit annoying and not as useful as most people think.
I put the MIT license on code I push to GitHub because they force me to use a license. But in truth, I have no way to enforce it in most cases. Nor would I care about offenses in many cases.
I wish software authors would be less possessive about their code and put the focus on the code itself rather than overhead. I miss the days when one would post code online and whoever wanted to could do whatever they wanted with it without bringing the boring licensing discussions to attention. Attribution would naturally occur to an acceptable level given a good community with enough well-intentioned people.
I also don’t quite agree with the concept of paying for a copy of the software and not being able to do whatever one wants with it, within reasonable limits such as non-usurpation.
I understand it is a reality today and perhaps even the most adapted to today’s economy, but it is a practice that should be questioned. Is it really ethically correct? I don’t think so.
For me, licenses are not for the code publishers, but rather the code consumers.
If you publish code without a license, then in my jurisdiction it’s technically copyrighted by default. I’m legally not allowed to use it at all, and open myself up to legal liability if I do. After I make my millions, how do I know you won’t some day take me to court and demand a percentage of that? By putting a license on your code, you’re giving people peace of mind that you’re not gonna turn around and try to sue them later.
Agreed. At work a few years ago I copied-and-pasted a page of useful code from a gist I found on GitHub, including the comment identifying the author, and I added a comment saying where I got it from.
Before our next release, when we had to identify any new open source code we were using, I added a reference to that file. The legal department then became worried that there was no license associated with it. Someone ended up tracking down the author and asking him, and the author assured us he had no claim on it and put it in the public domain.
I wish software authors would be less possessive about their code and put the focus on the code itself rather than overhead.
Unfortunately, this attitude only leads to mass exploitation of developers and enrichment of corporate interests.
The world is full of assholes who will take advantage of the free work and good will of others and give nothing back.
The world is also full of useful idiots who will give over their stuff to the aforementioned assholes and then, years later after discovering that you can’t pay rent with Github stars or HN posts, cry and piss and moan about how they were ripped off.
You can’t “exploit” someone by taking [a copy of] what they’re giving away for free. Free means free.
If you create stuff and don’t charge money for it but have the expectation that people will give you money for it anyway or at least recompense you somehow… then you are either living in a small traditional village culture, or an anarchist commune. In both of those environments there is such a social contract*. If you’re not, you are indeed an idiot, unless you encumber your software with a license that forces such recompense.
I don’t believe most open source contributors who don’t use copyleft licenses are idiots. I believe they genuinely make their software available for free and don’t expect to see a dime directly from it.
In my case I do so to give back to the world, and because having people use and appreciate what I’ve made makes me feel good, and because it enhances my reputation as a skilled dude to whom my next employer should pay a handsome salary.
* I highly recommend Eric Frank Russell’s 1940s SF story “…And Then There Were None”, about a colony planet that adopts such a society, inspired by Gandhi, and what happens to a militaristic galactic empire starship that rediscovers the planet.
You can’t “exploit” someone by taking [a copy of] what they’re giving away for free.
I would argue that you absolutely can if you take something offered freely, make a profit at it, and do not somehow pay that back to the person who helped you out. It’s somewhat worse for many maintainers because there is active pressure, complaining, and hounding to extract still further value out of them.
I don’t believe most open source contributors who don’t use copyleft licenses are idiots. I believe they genuinely make their software available for free and don’t expect to see a dime directly from it.
I think there is for many of us a belief that we give away our software to help out other developers. I think of neat little hacks I’ve shared specifically so other devs don’t ever have to solve those same problems, because they sucked and because I have myself benefited from the work of other devs. This is I would argue an unspoken social compact that many of us have entered into. That would be the “not directly see a dime” you refer to, I think.
Unfortunately, it is obvious that as a class we are not recouping the amount of value we generate. It is even more painful because it’s a choice that a lot of developers–especially web developers, for cultural and historical reasons–sleepwalk through.
Consider Catto and Angry Birds, right? Dude wrote Box2D (without which you don’t really get Angry Birds as a physics game) and never saw (as reported anyways) a red cent of the $12B USD in revenue they booked in 2012. That’s insane, right? There’s no world in which that is just.
(One might argue “@friendlysock, ours is not a just world.” In which case, sure, take all you can and give nothing back, but fucking hell I’m not gonna pretend I don’t find it in equal measure sad and offensive.)
Our colleague I’m responding to is exactly that sort of person that a company, investor, or founder loves–yes, yes, please, don’t think too hard about licenses, just put your work in the public domain! Don’t worry your pretty little head about getting compensated for your work, and most certainly don’t worry about the other developers you put out of a job! Code wants to be free, after all, and don’t fret about what happens to development as a career when everything we need to write has either been written or can be spun whole-cloth by a handful of specialists with the aid of GPT descendants!
I suspect our colleague means well, and lord knows I wish I could just focus on solving neat problems with code, but we can ill afford to ignore certain realities about our industry.
I would argue that you absolutely can if you take something offered freely, make a profit at it, and do not somehow pay that back to the person who helped you out.
Nah, I’ve published MIT stuff, and my take is - go for it, commercialize the hell out of it, you don’t have to pay me anything.
The point of MIT is to raise the state of the art, to make the solution to a problem universal. That includes corporations. No reciprocity is required: the code being there to be used is the point of releasing it.
I would argue that you absolutely can [exploit someone] if you take something offered freely, make a profit at it, and do not somehow pay that back to the person who helped you out.
I assume the definition your link refers to is “to make use of selfishly or unethically,” because the others don’t fit. But if someone offers you a thing with explicit assurance that you can use it freely without encumbrance (except maybe a token like thanking them in a readme), and you do so, how is that exploitation?
Feudal lords exploited peasants because the peasants had no choice but to work the lord’s lands for subsistence, or leave and starve. That has nothing to do with open source developers. No one is forced or coerced into releasing code freely.
This is I would argue an unspoken social compact that many of us have entered into.
If that’s the social compact you want, then for gods’ sake choose a license that expresses it. Choose an Old Testament eye-for-an-eye license (GPL) not a New Testament “turn the other cheek” license (MIT et al).
That would be the “not directly see a dime” you refer to, I think. Unfortunately, it is obvious that as a class we are not recouping the amount of value we generate.
Dude, you and I are in the same class. I’m sure we have comparable skill sets. I went to work for The Man after school, and in exchange for writing what The Man wants all day, I make good $$$. I don’t know what you do exactly, but if you aren’t getting paid for code then I guess you’re either working at something you like better and coding as a hobby, or you aren’t tied to the capitalist treadmill at all and get to code whatever you choose all day; I don’t know. But you probably have your compensations.
I do know that it is super unlikely that there is a class of impoverished coders out there unable to find good paying jobs. Tech companies like the one I work for are desperate for talent. In the ten years I’ve been at this job I have witnessed how effin’ hard it is to find good programmers. Most of the ones the recruiters turn up are deeply mediocre, and we give up and hire the best of a mixed bunch. We have gone to great lengths like filing H1-b visas and dealing with six months or more of government bureaucracy hell, just to get one mostly-competent programmer from a distant country. In fact most of the people we hire are from outside the US, because seemingly all the local developers know nothing except React or whatever trendy web junk is popular these days … not the CS fundamentals we need.
In a crazy seller’s-market for coding skills like this, I refuse to listen to Marxist arguments about exploitation of the working classes. That is not the world I have seen in my 40 years in this industry.
I think you overestimate how common those “idiots” are (I disagree that the world is “full” of them as snej explains in the sibling comment), maybe due to the occasional cases that get a lot of attention, and I think you underestimate how a spirit of giving can benefit the commons, for genuine non-financialized benefit to the giver and others. Copyleft hasn’t solved the domination problem, and with AI-(re)written code being a likely dominant future force, I won’t be surprised to see license relevance decline. There’s other approaches to the world’s problems than licenses, and maybe in some cases restrictive licenses trap us in local minima.
I feel similarly, in terms of over-focusing on licenses, and I don’t care what the not-well-intentioned people do with most of the code I put online; not that I would never speak out but life’s too short and I’d rather focus on other ways to convey my values and have a positive impact. (this isn’t a statement against other people using copyleft or non-commercial, I still consider using them in some cases) Two licenses that might fit those goals better than MIT are the public domain Unlicense and WTFPL.
With the future looking like it’ll be full of AI-assisted code trained on every open codebase, we need solutions other than licenses more than ever. “Computer, generate me a program in Zig that passes the LLVM test suite in the style of Fabrice Bellard.”
Problem with some licenses like the Unlicense is that not all jurisdictions allow you to voluntarily place your work under public domain, so in such jurisdictions that license is void.
Jim Weirich (author of rake, rest in peace) used the MIT license for most of his work but a few smaller projects used this simple license:
You are granted permission to read, copy, modify, redistribute this software or derivatives of this software.
It’s important to grant at least some license, otherwise (as I understand it) in the US you do not have any rights to make copies of the work unless you are the copyright holder or are granted a license. There is a lot of old software in the world where the author has passed away or otherwise moved on, without ever granting an explicit license, leaving the software to sit unused until the copyright expires.
What happens if you copy paste a 20 line script from a blog and include it in the project of a product you make in the context of a private company of yours which doesn’t publish its code?
It’s not like the open source police will read all your source files and search line by line to try to find it out there on the web. If anything, most companies have a ton of low quality code that no one wants to look at.
I think you are making the point that a license does not in practice restrict someone from using your code under terms not granted by the license; I agree.
You wrote that you wished “software authors would be less possessive about their code and put the focus on the code itself rather than overhead”. I also agree with that sentiment, but I do not believe that implies publishing code “without bringing the boring licensing discussions to attention” (which I interpreted as “without a license”) is the best path to putting the focus on the code.
The most common thing that I see is a pair of products. Product Community Edition is MIT or BSD or AGPL or, occasionally, GPL, and comes with a git repo and a mailing list, and a refusal to take patches unless accompanied by an IP transfer. It’s always free.
Product Business Edition or Enterprise Edition is licensed on commercial terms and includes at least one major feature that businesses feel is a must-have checkbox item, and some amount of support.
I used to see a bunch of open source products where the main (usually sole) dev sold a phone app that went with the product, in order to raise some money. That seems less popular these days.
As you and I have discussed here before, it is quite reasonable to talk about Free Software licenses which are effectively non-commercial. The licenses I enumerated at that time are uniform in how they would answer your questions: yes, all of those things are allowed, but some might be unpalatable to employers. Pleasingly, on your third point, a company would be afraid to try to use Free Software provided under these licenses, even for purposes outside their charter of commerce.
I got something slightly different from reading the post; it’s not “you can differentiate between commercial and non-commercial” in a license; it’s “if you want to differentiate between commercial and non commercial then don’t dual-license using the MIT license because that creates ambiguity”.
Just to be pedantic, it doesn’t create ambiguity. MIT pretty much lets anyone use it, where your intention was probably not that. Therefore, the issue isn’t ambiguity, it’s redundancy.
I don’t see why one couldn’t write a software license that differentiates between commercial and non-commercial use, using whatever criteria the license writer wants for edge cases. That will probably end up not being a free software license - a license that attempts to judge what kinds of uses of software count as “commercial” and legally forbid them limits user freedom to use that software in a way incompatible with the notion of free software - and this will affect free software advocates’ willingness to use software licensed under such terms. But there are plenty of non-free software licenses in this world, what’s one more?
we need new regulation to align them toward creating software that better serves our society
As deflating as it might be, I agree with this, with perhaps a more abstract interpretation of “regulation” than many readers. We may jump to thinking about state control and other top-down reactive power, but that’s only one form, and I’m curious about the other possibilities.
For example we can imagine democratic organizations that are sufficiently self-regulating, especially if the demos is expanded to include users, as in platform cooperatives. In this scenario, regulation itself can be more decentralized with thoroughly aligned incentives and ongoing democratic negotiation and accountability, which, as the author says, is politics – not always energizing to would-be creators of these systems! But happily there are plenty of good-hearted wonks and negotiators among us.
I have yet to encounter any GUI API (web or otherwise) that does digital typography decently. Does anyone know of one that, for example, supports the notion of a baseline grid? See also: font size is useless; let’s fix it.
Because digital typography for the masses is a bad idea. The whole concept of showing a paper page on a screen as a canvas (no pun intended) and using typographic elements as your artist’s brush is intricate per se.
I think the average Joe would be better served with something along the lines of markdown if only it was what they first had exposure to. WYSIWYG editors have this aura of being simple and direct but their complexity explodes in your face after less than a handful of elements.
I’m using “API” as an umbrella term. Over the years I’ve played with a variety of tools, sometimes called “APIs” or “SDKs” or “toolkits” or, in the case of the web, an amalgam of standards… which include APIs. Whatever you call them, I’m thinking of tools developers use to build software applications with graphical user interfaces (GUI). Here are some examples of what I mean:
HyperCard + HyperTalk
Swing
Qt
BabylonJS GUI
And, of course, native web APIs
There are others that I’m curious about but am less familiar with (SwiftUI comes to mind). I’m genuinely curious to know if any of them give developers the means to lay out text using principles that have been established in the graphic design world for almost a hundred years now by luminaries such as Robert Bringhurst or Josef Müller-Brockmann. All of the tools I’ve used seem to treat typography as an afterthought.
I think that’s overly pessimistic. The specific problem here is trying to embed one document layout system in another. Few apps need to customize the specifics of e.g. text layout to nearly the same extent as Google Docs.
And though I empathize with the idea that it needn’t be this way, I haven’t found many better systems for application distribution than the web. Though maybe I really do just need to sell my soul to Qt.
Your argument about “I haven’t found many better systems for application distribution than the web” is somewhat defeated by the very nature of web browsers.
Google distributes an application to multiple platforms with regular automated updates. It’s called Chrome. It’s a POS memory hogging privacy abusing whore of satan, but that’s not really related to it being native or not - Google manages to push those qualities into browser based ‘apps’ too.
But instead of knocking a few layers off the stack and starting again from something akin to the webrender part of what would have been Servo, they’re just re-inventing a lower layer on top of the tower of poop that is the DOM.
Linus Torvalds is convinced Rust will take over the Linux Kernel.
Not exactly. From the source article:
However, he said that for things “not very central to the kernel itself”, like drivers, the kernel team is looking at “having interfaces to do those, for example, in Rust… I’m convinced it’s going to happen. It might not be Rust. But it is going to happen that we will have different models for writing these kinds of things, and C won’t be the only one.”
Neat. One thing to note about the <details> tag is that all of its contents will be rendered on the DOM, even if not displayed - which is what you’d expect. But often for various reasons, an implementation will want the contents lazily rendered, only mounted when the details are expanded, and the builtin tag doesn’t offer this.
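A hedged sketch of one way to approximate that lazy behavior with the built-in element, by waiting for the native toggle event (the selector and renderBody are made up):

```ts
const details = document.querySelector<HTMLDetailsElement>("#heavy-section")!;

details.addEventListener("toggle", () => {
  // Mount the expensive content only the first time the element is opened.
  if (details.open && !details.dataset.mounted) {
    details.dataset.mounted = "true";
    renderBody(details);
  }
});

function renderBody(host: HTMLDetailsElement): void {
  const body = document.createElement("div");
  body.textContent = "…expensive content rendered on first open…";
  host.appendChild(body);
}
```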
There’s actually a really great benefit here that isn’t talked about as much as I think it should be: search. Lazy rendering of DOM renders the built-in search of the browser kinda useless. The contents of <dialog> being searchable means I can search through menus and more without requiring additional JS search.
Good point, plain text should normally be eagerly rendered because of this - lazy loading is good for heavier resources like images, and sometimes components for behavioral reasons.
I still feel like I’m stuck in mid 201X regarding my frontend skills. To be honest I just don’t want to invest the amount of time it’ll take to get fluent in a “modern” stack, so some basic vue/react with bootstrap it is. And the moment you’ve started doing something for frontend, it’ll already feel like you’re outdated and behind (svelte)..
Dude, don’t worry about it, that stack is 100% fine, and not outdated. Getting sucked into FOMO about JS frameworks and tooling is a total trap. If you’re not a full-time frontend engineer, use whatever gets the job done.
Once you feel like learning something new would be a fun way to spend two weekends, go for it.
Tailwind is awesome for example, but there’s not that much to it. It’s just some nice atomic utility classes, but that means you build all the component styling yourself (buttons, cards, …) instead of using the ready-made bootstrap abstraction.
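For a concrete sense of the tradeoff, a hedged sketch of a button composed from Tailwind utilities instead of a ready-made Bootstrap .btn (the class names are standard Tailwind; the styling choices are arbitrary):

```tsx
// Every visual decision lives in the class list rather than in a prebuilt
// component class like `btn btn-primary`.
export function SaveButton({ onClick }: { onClick: () => void }) {
  return (
    <button
      onClick={onClick}
      className="px-4 py-2 rounded-lg bg-blue-600 font-semibold text-white hover:bg-blue-700"
    >
      Save changes
    </button>
  );
}
```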
I agree, it somehow hits exactly the right level of abstraction. For me it nudges me into some possibilities I would never have tried with bootstrap or pure CSS.
For example last year I had some really opinionated vision for a travel planner UI that would have been completely boring and bad with just prefabricated components: https://gllvr.com. I’m sure my implementation is still kind of rubbish for a lot of use cases, but I couldn’t even imagine doing it with bootstrap/bulma, etc.
I’m sure it has more to do with the way my background has warped my mind than with anything inherent to either approach, but I found it easier to build buttons/cards/etc with these utilities than I did to get the ready-made ones to look/work the way I wanted them to.
I would have found it devilishly hard to get that striped component in your planner (where you click on the left side to type in where you’ll be sleeping or click on the right side to copy the previous night’s location) to be anything like what you made in bootstrap. I do suspect there are people out there who wouldn’t find it so, though.
Vue, React, Angular, Svelte, and most frontend frameworks since React and Angular 2, are modern UI component frameworks. Once you understand components deeply, the learning curve for any of the others is much shorter. Svelte may be well designed and easier for beginners than most, but part of why people report picking it up in an afternoon is that they already understand components.
The details differ between frameworks, especially with the direction React has gone with hooks, but despite the proliferation of frameworks in recent years, there’s been a substantial convergence on components since the days of Backbone, Angular 1, jQuery, Knockout, and all of the other not-quite-component frameworks. They may have been fewer in number back then, but the fundamentals varied widely across tools. The situation today is much more approachable despite the memes.
I find react to be quite horrible if you want to use or do stuff that doesn’t exist for it as a lib. (Also don’t get me started on the amount of packages and reported CVEs in a hello world..)
Really? I generally don’t use any React-specific libraries, and React itself I’m sure has few or no dependencies (I use Preact most of the time, so I’m not sure of the state of React). Are you talking about create-react-app? I’ve never used it myself, it seems totally unnecessary.
I’ve been using bootstrap for years, and I loved it but some things just didn’t feel quite right.
I’ve recently switched to tailwindcss and it has made me so happy. Doing anything is just better and feels more fun. Also you don’t end up with loads of custom CSS.
If you switch away from bootstrap I can almost guarantee your life will be better :)
That post, plus about 20 minutes with this tutorial persuaded me that I was interested in giving tailwind a real try.
I found that having one workspace with two browsers tiled next to each other, one with my site and one with the tailwind docs, and a second with the code open, made it really fast and enjoyable to try things out. The search on the tailwind documentation is especially good, and the live updates that come with svelte running in dev mode are very snappy.
It’s actually pretty high on my list to dig in and see just how those live updates work. There are a couple of spots in my own code where I could use a similar mechanism to very good effect, assuming it’s not leaning on some heavy mechanism that gets shaken out in the production builds.
I was stuck with jinja + a little jquery for my front end. So state of the art 2008? It was starting to slow my ability to explore some ideas in a way I wanted to share. I don’t think I’d have been motivated to spend 30 hours exploring this for a couple of weeks if I had a good grasp of vue and bootstrap.
The feedback from changing something on the server in dev mode to seeing it in the client is so much faster than it was when I was writing jinja templates and sending them in response to form submissions. That’s one of the major improvements of moving for me, and I think vue/react/bootstrap would’ve gotten me that also.
This stack just lined up with my mental model a little better, which meant a lot as I was trying to learn a pile of “new” all at once. Tailwind’s utility structure combined with the way styles are scoped by default in svelte made it easier for me to think about my UI layout than it ever has been for web stuff.
Has anyone used Svelte and can give a small comparison between this and the other popular frameworks right now? (Vue and React I guess?)
I’m making a small web interface and I think it could use some… interactivity; I tried with React because that seems like the most popular and the best thing to put in my CV but it’s been confusing so far.
Svelte code usually needs fewer characters to accomplish the same thing and its output is typically smaller and faster. (not always, and Vue may win some benchmarks nowadays) On the more subjective side of things, I find there’s less abstraction and a simpler mental model for working with components and the DOM. There’s a lot of power and flexibility behind its apparently simple syntax and features - animations in particular are nice coming from React - but it makes some tradeoffs to prefer statically analyzable constructs over unfettered runtime flexibility. (e.g. things like mapping over and introspecting props and slots are unofficial hacky APIs at the moment) In practice I haven’t been hampered but YMMV. Some open issues might fill in these gaps.
It’s closer than most frameworks to working directly with plain JS/HTML/CSS, and it’s sometimes called an unframework because of how it compiles its components to simple JS with imported helpers, not too different from optimal hand-written vanilla JS, but with modern declarative components. I wrote this fairly comprehensive overview with React and Vue in mind a few months ago.
I agree React and Vue are both better for your CV. Svelte might stand out for some people.
The official tutorial walks you through a lot of the basics. An hour or two with it and the examples should give you a good taste for it.
Oh my goodness. This is not Javascript, it’s Typescript!! With a .js extension!!
Wow, that’s a new level of wtf. I wouldn’t chalk that particular error up to Node - mislabeling source files is pretty much a “developer is gravely mistaken” error.
The more I think about it, I wonder if the github author first took a node project skeleton then dumped or rewrote a section in typescript and didn’t update the README.
A mislabeled typescript file put erroneously into a stale project skeleton is hardly a fair experience, except for “how a developer with a stale readme can cause pain”.
My guess is that it was Flow rather than TypeScript. They have very similar syntax for type declarations, but Flow was mostly written with a .js extension, whereas TypeScript will issue a compile error if types are embedded into a .js file using the typical syntax. (You can use TS with types in a .js file, but only in comments.)
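For anyone who hasn’t seen it, a small example of what a Flow-annotated .js file typically looks like (the annotations sit behind the @flow pragma and are stripped by Babel at build time):

```js
// @flow
type User = { id: string, name: string };

export function greet(user: User): string {
  return `Hello, ${user.name}`;
}
```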
Why do the Flow developers think it’s okay to appropriate the extension of a different, closely-related file type?!
Because the ambition was to have Flow as a superset of JavaScript, which would allow teams to gradually introduce static typing to their existing codebase.
I’m not going to pass judgment on whether or not I think this is a good idea. However, this was devised by some of the highest-paid programmers in the world (since they work[ed] at Facebook), which I think validates the points about the JavaScript ecosystem that Lea Verou was making in her article, and the points that I’ve made elsewhere in this thread.
You’re right to be horrified, but it’s not the developer’s fault, I’d bet. The reason it has a .js extension (if it’s TS and not Flow) is likely the difficulty they had in configuring their build toolchain, which is a constant thorn in the side of all of us JS devs, with everything constantly shifting underneath us. And almost every single npm package in existence is filled to the brim with content that has no business being in the built+published artefact. npm the tool makes publishing the correct content incredibly difficult, if you’re doing any sort of build-time tooling at all.
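(For what it’s worth, the usual mitigation is an explicit allow-list in package.json; a hedged sketch, assuming dist/ is the build output:)

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": ["dist"]
}
```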
So AFAICT this is the tradeoff the author consciously rejects, and the one that Svelte consciously chooses:
Choose writing in a JS framework that is compiled ahead of time, over writing in a JS framework that is interpreted at runtime.
The disadvantages of this tradeoff that weigh on the author’s mind:
If your language is compiled, debugging is harder because the compiled code that is run does not resemble the source code you have to fix.
They also make the point that it’s confusing that Svelte code is Javascript, but it needs to be compiled to run it, which may change its behaviour. (To me that’s not that different from frontend framework code that is valid JS/HTML, but needs the framework runtime to run it, which may change its behaviour.)
If in the future more front-end systems compile to Javascript instead of writing in it, it becomes harder to glue them together.
I think it’s interesting to look at how Elm solved these, because like Svelte, it is compiled ahead of time to small and fast JavaScript that doesn’t resemble the source code.
Elm’s solution to ‘you have to choose between debugging the runtime JS or the source code’ is to go all-in on making it easy to debug the source code. In Elm’s case, it is an ML-family language with a type system that guarantees zero runtime errors (but won’t save you from domain mistakes, obv.), and with compilation error messages that are so helpful that they have inspired many other languages.
Svelte, presumably, wants to remain Javascript, so a lot of error prevention becomes harder. They mentioned they want to add Typescript support. Or they could add source maps that relate compiled JS to the original Svelte code? Also, debugging compiled projects is a very old craft, it only really gets onerous if the problem is low-level or compilation is slow.
I also note that Svelte compilation has a ‘dev’ flag that produces named functions, and also extra code that performs runtime checks and provides debugging info.
Elm’s solution to the interoperation problem: an Elm module can expose ports (blog post, docs) that external JS can send messages into, or that JS can subscribe to. So the ports form an Elm module’s public API.
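On the JS side, talking to an Elm module through its ports looks roughly like this (the port names are made up; init and app.ports.<name>.send/subscribe are the general shape of the API):

    const app = Elm.Main.init({ node: document.getElementById("root") });

    // JS -> Elm: push a message into a port the Elm module declared
    app.ports.messageReceiver.send({ user: "alice" });

    // Elm -> JS: listen for messages the Elm module sends out
    app.ports.notifyJs.subscribe((payload) => {
      console.log("Elm says:", payload);
    });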
That still leaves the interop problem of styling the created components. If it’s you writing the Svelte, you can let Svelte do the styling (if Svelte is the whole system), or specify the right class names on the created DOM (if you’re writing the Svelte component as a subsystem). But if you’re reusing somebody else’s Svelte component, I’m not sure how easy it is to pass in the class names you’d like the component to use. Perhaps ‘support for caller-specified class names’ is even an open problem / blind spot in frontend frameworks in general?
In one sense, all of the hard-fought web knowledge that people may subconsciously pride themselves on knowing now becomes a stumbling block. Technologies like Svelte treat the underlying web runtime as something to be papered over and ignored. Much like compiled C, being able to debug the generated code is a needed skill, but the 1-to-1 correspondence between source code and generated code is not a guarantee, and it can be disconcerting to let go of that.
I’m all for it. We largely ignore x86/x64 by using higher level languages and our code is better for it, even if slightly inefficient.
Web devs love to talk of developer experience and progress in tooling. Something something…Cambrian explosion? ;)
I think the author’s problem isn’t so much with it being compiled, but the fact that the source code looks like JS, but your assumptions don’t hold because there’s a lot happening to that JS so the end result isn’t anything like what you typed.
Reminds me very much of the Duck Test https://en.m.wikipedia.org/wiki/Duck_test. Svelte walks like JS and talks like JS but isn’t JS. This is typically seen as a positive for those who judge their tools, at least partly, based on familiarity.
Yes, I agree. Elm is a language that has its own semantics which are adhered to by its compiler. But Svelte takes the semantics of an existing language (JS) and changes them.
I have that concern about Svelte too, though it’s not strong enough to change the fact that I’m still a fan and excited to see how Svelte evolves.
In practice I’ve found debugging Svelte to be mostly trivial and sometimes difficult. Dev tools will help close the gap but they’re not mature.
For styling, style encapsulation is what I’ve seen the devs recommend, but nothing stops you from passing classes as props to components that accept them. (I do that a lot because I like utility CSS libraries like Tailwind) The biggest open RFC right now is about passing CSS custom properties (CSS vars) to components. https://github.com/sveltejs/rfcs/pull/13
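A minimal sketch of the class-as-prop pattern (component and class names are made up; “class” is a reserved word in JS, hence the alias):

    <!-- Button.svelte -->
    <script>
      let className = "";
      export { className as class };
    </script>

    <button class="btn {className}">
      <slot />
    </button>

    <!-- usage: <Button class="bg-blue-500 text-white">Save</Button> -->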
I think the tradeoff here isn’t about source mapping and the sort, but instead that if you take it as given that you’re going to compile your language, then you might as well throw more language safety features in (a la Elm).
That might be true, but the other sacrifice is familiarity. Svelte can be learned by a frontend dev very quickly and without much “relearning” fear. Instead, you get the cognitive dissonance problem of it being almost what you expect but, then, not quite.
if you take it as given that you’re going to compile your language, then you might as well throw more language safety features in (a la Elm).
There’s a big leap from Svelte to Elm beyond just compiling the language. Elm has tremendous benefits, definitely, but it gives up seamless interop with the DOM, mutable web APIs, JS libraries, and future web standards. Elm has JS interop but only asynchronously through its ports. Purity and soundness are great but at what cost? (that’s a rhetorical holy war question, not worth discussing here IMO!)
I think TypeScript has been gaining so much adoption largely because it makes pragmatic compromises everywhere, not just because people are resistant to learning Elm/Reason/PureScript/Haskell/etc, and when support lands in Svelte I’ll be less shy about recommending it to people.
Yeah, I think for most people in most cases, familiarity and ease are the bigger win. I’m not arguing one should use Elm, just laying out that continuum.
By seamless interop do you mean synchronous? I started trying out Svelte yesterday and found the dom interop to not be seamless as I was confused by the difference between <input value={val} /> and <input on:value={val} />
I think from memory that’s how you get an interactive input.
I meant seamless but that’s overstating it. (except for mutable web APIs and JS libraries - interop there is generally seamless because of how Svelte extends JS) Anything that isn’t plain HTML/CSS/JS is going to have seams. Svelte minimizes them to a degree that some other frameworks don’t, like React and especially Elm. Vue is on similar footing as Svelte.
The nice thing about Svelte’s seams is they often reduce the complexity and verbosity of interacting with the DOM. In your example, it sounds like you want:
<input bind:value={val} />
(or simply <input bind:value /> if that’s the name)
At the same time, Svelte gives you the flexibility to optionally use a “controlled input” like React:
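Something like the following, reusing the same val (a minimal sketch): you set value yourself and update it in the input handler, instead of letting bind: wire the two together.

    <input value={val} on:input={(e) => (val = e.target.value)} />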
Author here, just wanted to make a note. This isn’t written to hype a battle in the holy war. Frontend frameworks are a positive sum game! Svelte has no monopoly on the compiler paradigm either. Just like I think React is worth learning for the mental model it imparts, where UI is a (pure) function of state, I think the frontend framework-as-compiler paradigm is worth understanding. We’re going to see a lot more of it because the tradeoffs are fantastic, to where it’ll be a boring talking point before we know it.
Thanks for this. It’s refreshing to hear a grounded perspective when it comes to frontend technologies. Now I should actually read the original article…
Tl;dr js frameworks are generally good at what they’re designed to be good at, and trade other things off. The main popular js frameworks each cover different usecases. Marko is designed for fast multi page apps (MPAs) and is very good at that at the cost of basically everything else, especially things that go with popularity. There’s also an in depth investigation of what makes MPAs fast or slow and how that plays out in react.
Totally.
I’m not sure I agree with every point in the article but the broad strokes here are obviously correct: if you’re building an extremely performance sensitive MPA that caters to even the lowest-end Android devices you would be wise to stay away from using React. It’s just the wrong tool for that job. That may change with RSC but no one should bet their business on that just yet.
That said, React is absolutely a great tool for the most performance sensitive desktop SPA.
Is it the right tool for an extremely performance sensitive mobile SPA? I don’t know if anyone knows the answer to that.
Is it the right tool for performance sensitive SPAs compared with something like svelte or hand rolling your own mvc without a shadow dom?
React’s main purpose, from what I’ve been able to figure out, is to keep teams from clobbering each other’s render loops.
Yes
That’s just not important.
I’m sort of a broken record on this site’s comments section but I’ll keep saying it:
Modern React has proven to be an amazing tool for creating one of the largest and most performance sensitive SPAs on the planet, facebook.com. Despite the out-of-context quote from Dan Abramov in the article, Meta was very happy with React and did not regret rewriting the site using it. React did not fall short, even if it has areas of improvement.
Whether you choose React, Svelte, or whatever you almost certainly should NOT be doing it based on performance, and the performance of your app will be dictated by other unrelated technical decisions you make far, far more than which UI framework you chose.
I’ve never really been a “frontend guy” and it’s been a few years since I’ve worked with React at all, so I could be totally off-base.
But, hasn’t one of the complaints with React historically been that it’s too easy to re-render part of pages much more than necessary? So, even if “correct” React is going to be performant enough for almost any web site, could it be a valid decision to choose a different framework so that it’s harder to get bad performance?
In my experience yes. React is not a “pit of success” framework and the typical team is going to end up with an app that is not performant.
The re-renders are just one of many problems caused by the component tree design. Child components end up being impacted by all their ancestor components in ways that a non React app would not be.
The components have to load and be executed top down. But on most pages the most important thing is in the very middle. If you fetch data higher in the tree (e.g. a username in the top bar) you need to start the fetch but let rendering continue or it will cause a waterfall. But even if you do that correctly, all of the JS still has to be parsed and executed. You can server side render, but hydration is top down so the content is still not prioritized for interactivity. The way providers are used in React coupled with the routers means a ton of code can easily end up blocking the render that isn’t even needed for the current page.
Concurrent mode, suspense, islands, signals, server components, etc. are all attempts to solve parts of these problems but they are incredibly complex solutions to what seems like a core impedance mismatch.
Yes, it’s designed primarily around functional programming concepts not optimal rendering strategies.
No, that wouldn’t be a valid decision, because render performance is just not likely to be a performance bottleneck for your app.
Again, react is good enough for one of the most complex web apps in the world used by hundreds of millions of people every day.
If you have a reason to think your app is more performance sensitive than facebook.com, or has an entirely different performance story that react doesn’t serve well (such as the article we’re commenting on), then maybe.
For everyone else: re-render performance will not be your bottleneck.
I think React is fine for a lot of situations, and there are considerations other than performance that matter to people, but also for a small SPA, the sheer size of React adds a performance penalty that you’re not going to make up.
For example, I have a small, stupid password picker SPA at https://randpwd.netlify.app. (Code at https://github.com/carlmjohnson/randpwd.) It’s a self-contained page with all the SVGs, CSS, and JS inlined, and it comes in at 243 KB uncompressed. Webpagetest.org says it loads in 1.5 seconds on 4G mobile.
You just cannot get that kind of performance with React. Which is fine, because that app is trivial, and a medium to large React app will make it up elsewhere, but I think a lot of people are making what are actually very small SPAs once you look past the framework, and losing a lot of performance because they don’t realize the tradeoff they’re making.
For starters, react is only adding 40kb of gzipped download to your bundle. I don’t care what it is uncompressed. Even on 4g that isn’t a huge deal.
Second, if you’re worried about performance you obviously are using ssr, so react does not block delivering and rendering your app, 4g or not. It only blocks interactivity.
If react is a non-trivial percent of your bundle and you’re trying to optimize the shit out of time-to-interactive for some reason, then yeah maybe it’s an issue. There are reasons I would sometimes not use react on mobile but payload size isn’t at the top of the list.
The page I’m talking about, which includes various word lists, is only 95 KB gzipped, so that would be a pretty heavy penalty by percentage. Also 40 KB of JS is much “heavier” than 40 KB of JPEG, since it needs to be parsed and executed. The waterfall for the page shows 1.2s spent waiting on the download, and then .3s spent on executing and rendering everything else. My guess without doing the experiment is that a Next.js equivalent would take at least 2s to render an equivalent page.
Well yeah. This is the argument. The argument is that React makes it hard to optimize the shit out of TTI because you start out in a hole relative to other solutions. :-) A lot of time you don’t actually care about optimizing the shit of things, but when you do, you do.
You say “wait on download” but there’s no waiting. The user gets the ssr payload as fast as it can download and render it, right?
Not really? Have you optimized a site with webpagetest before? There are lots of factors that influence how quickly a site makes it to various benchmarks like TTI, LCP, on load, etc. It’s a whole thing, but the basic thing is to arrange the waterfall so as few things as possible are blocked and as many as possible are happening concurrently, but there’s still a lot of room for experimenting to make different trade offs. Even with SSR, you can screw everything up if you have render blocking JS or CSS or fonts in the wrong place.
Sort of? I’ve been doing perf work for decades but mostly in enterprise, so yes on intricate low-level perf work but no to webpagetest.
Sure but that has nothing to do with React. As in, React neither makes that problem easier nor harder.
We probably agree but don’t realize it. I’m just trying to make it so that someone following along in this comments section can understand the correct take-away here[^1], which is that React does NOT put you in a hole that’s hard to climb out of if you’re building an SPA, not even when catering to mobile users.
[^1]: because a lot of real harm is being done by write-ups that make it sound like using React is a terrible performance decision, as you can see from even just the other comments on this article.
I’m not sure I’d call Facebook super performance sensitive on the front-end, compared to, say, a WebGL game or a spreadsheet. At least, that was my impression as a production engineer there.
Sure, there’s definitely a class of application that is on a whole other plane of performance optimization that doesn’t even relate to DOM-based web APIs, like games, spreadsheets, code editors (e.g. Monaco Editor), design tools (e.g. Figma), etc. That’s a different kind of “performance sensitive.”
When I say “performance sensitive” I mean “there’s a lot of money on the line for performance and small gains can be measured and have high monetary reward, and dozens of brilliant engineers are being paid to worry about that.” I don’t mean the app itself is hard to make performant. facebook.com is actually VERY not sensitive to performance regressions in that sense: people want their dopamine squirt and are willing to put up with a decent bit to get it.
I’ll admit that what annoys me the most about the Javascript Ecosystem isn’t any particular individual tool. It’s that:
Nue doesn’t address problem #1. Does it address problem #2 enough to warrant violating problem #1?
Note: I’m not being critical of you working on this. Building your own tools is fun; I do it all the time. It’s more to inform whether I should be interested in taking a look at it or not.
Is 2 still true? It definitely has been a huge problem historically, but I feel like things have gotten better and I’m not fighting my tools quite as much anymore.
That said, Vue 2 to 3 was a huge PITA with a ton of churn. I had to drop vue-cli and switch to Vite just to get a decent build story as part of the upgrade path. I do feel like people should tell the JS world that ideally the number of times to bump the semver major version is zero, whereas in the JS ecosystem it’s expected that you break compatibility once every year or two for some reason.
Lock files help a lot but you still have problems when Node makes a breaking change in itself unless you lock your Node version too.
nvm or Volta has been used to specify Node versions for every project I’ve either started or joined in at least the last five years. Before that, I saw people lose days battling mysterious build errors only to find they weren’t using the magically correct version of Node. I wish a version manager and integration with the engines field (which really ought to be created by default for apps) were built into Node.
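A minimal example of what that pinning looks like today with the existing tools (versions are arbitrary; note that npm only warns on an engines mismatch unless engine-strict is set, while Volta reads its own key and actually switches the Node version):

    {
      "engines": { "node": ">=18 <21" },
      "volta": { "node": "20.10.0" }
    }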
Go has added that for the most recent version, and it makes a lot of sense to me. It’s sort of odd how it’s been outsourced to random other tools like NVM and whatnot.
The forward compat issues usually arise around modules with binary deps where the locked version doesn’t build against a newer node version, or more rarely, it doesn’t provide binary versions and the source install doesn’t work on newer versions of Xcode on Mac due to e.g. clang setting more warnings as errors.
Nue is committed to semver, but as stated above, more mature projects have fewer issues than brand new ones like Nue.
#2 relates to project maturity. Nue is definitely not mature and a 3-month delay would probably break things (yet to be seen). React and Vue are obviously more stable.
Easy, just focus on the backend and let GPT4 write Vanilla JS frontend code. This method solves #2.
This is true.
Sometimes I wish we could go back to the old days where JS was run in the browser just to add the interactivity.
But nowadays there are so many JS frameworks that it becomes a question of which one to focus on, and each framework is trying to solve the shortcomings of the others.
For example, you take React, then NextJS came along, which tried to solve routing, add SSR, etc.
Why can’t we just settle on one framework that does the job well?
I hear you. JavaScript in particular suffers from package fatigue. For the answers:
It’s impossible to address the #1 problem with a new project. This is a chicken-egg problem. Something needs to be done to fix the bloated, complex situation of the frontend developer. It has to be something very different, but not a new project. I don’t really know how to address #1 without starting from scratch.
Not possible to address problem #2 without violating #1. Chicken-egg speaking here as well.
As usual this was all a misunderstanding: https://github.com/ziglang/zig/issues/16270#issuecomment-1615388680
and a/the conclusion: https://github.com/ziglang/zig/issues/16270#issuecomment-1616115039
As someone emotionally invested in the web, who learned a lot from and liked Crockford in my early education, and who thinks Svelte and TypeScript might be my BFFs, this doesn’t land. But technically maybe, because TypeScript has largely supplanted JS in dev polls, so I’m crossing my fingers for something like an optimized subset of TS or some AI-rewrite-it-in-Rust/WASM future or something else that doesn’t challenge my technology choices, which are working great for UX and DX.
This is a really silly conversation and I’m sad to see it keep coming up over and over again. The people building huge, performant web apps with hundreds of millions of users aren’t having this conversation.
The things that are slow about React and JavaScript applications are not bottlenecks in React or JavaScript.
I’ve done a lot of web performance work and it’s always the same shit on the UI side:
React doesn’t cause (or even encourage) any of this. The ecosystem and engineering happening around React are often garbage (at least in open source) and that’s the only reason these problems exist.
Look at how facebook.com, rebuilt in 2020 to be a modern SPA, works:
React doesn’t get in the way of doing that kind of good engineering. In Meta’s case, React made all the good behavior above easier to accomplish. If you do UI work and you use React, don’t worry: you haven’t been bamboozled. If your SPA performance sucks it’s not because you’re using React or because you built a SPA.
One of Russell’s main claims is that React is being served to millions of users with low-powered devices for websites that are neither huge nor well-engineered. He talked about how Facebook has the resources to do it well, but most don’t, and there’s the snake oil. I think the case can get overstated, and that’s what this article addresses, but this sounds dismissive of a vast number of the world’s mobile users, and sounds like a palliative to the people building websites that uncritically and unnecessarily import hundreds of kbs of JS, which is very slow to parse on these devices.
Okay, let me make some concrete assertions:
If an engineering team, with any amount of resources, doesn’t care about performance, then no technology stack will help. The performance of their app will suck.
There are tools to drastically improve the performance of applications that no one uses. For example, Meta open sources Relay which is an absolute gem and no one uses it. Meta publishes specs and gives conference talks about GraphQL best practices. A lot of the engineering that went into the facebook.com rewrite has been open sourced and discussed publicly.
Snake oil is something that doesn’t work. The modern SPA tech stack is amazing and works. These people aren’t snake oil salesmen: they are philanthropists giving away gold that really works.
If you take most React SPAs and do the 5 most impactful things to improve performance, none of those things will require engineering resources your team doesn’t have and none of them will be switching away from React or SPAs. You just have to prioritize doing it. The economics of performance have to work. Etc. The tech stack isn’t the problem.
Ranting on a phone is hard.
Technology stacks can help here. They dictate what is easy and what is hard.
I like your other points - they are practical.
Batching is usually an anti-pattern because:
Batching was needed during the HTTP/1 years. With the adoption of H3, one socket can handle multiple requests.
None of that is true for GraphQL api data.
W.r.t. edge caching, you can rarely cache your db “at the edge” (fwiw I hate that term).
W.r.t. being bounded by the slowest response: you can stream JSON data down. At Meta a component that wants a list of things can request to have it streamed in. The http response is flushed as data is generated. EDIT: I should also add that it’s important you don’t block JS delivery and parsing on data fetching. This is very app specific but if you want a quick primer on best practices google “react render-as-you-fetch.” The tldr is don’t block UI on data fetching and don’t block data fetching on UI.
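A rough sketch of that render-as-you-fetch shape (hypothetical endpoint and names; use needs a recent React, older code wraps the promise in a Suspense-aware resource instead):

    import { Suspense, use } from "react";

    // Kick the request off immediately, outside of rendering, so the network
    // round-trip overlaps with code download/parse and rendering.
    const userPromise = fetch("/api/me").then((r) => r.json());

    function UserName({ promise }) {
      const user = use(promise); // suspends this subtree until the data resolves
      return <span>{user.name}</span>;
    }

    export function TopBar() {
      return (
        <Suspense fallback={<span>…</span>}>
          <UserName promise={userPromise} />
        </Suspense>
      );
    }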
W.r.t. batching, for example you might have a modal dialog that is powered by a tree of 60 components and every single one wants to know something about the user. Your GraphQL fragments are colocated with each component and roll up into one query. Fetching and returning the user data once in the query response is much faster than returning the user data 60 times.
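Roughly what that colocation looks like in Relay-style code (all names here are made up):

    import { graphql } from "react-relay";

    // Each component in the dialog declares just the user fields it needs...
    const avatarFragment = graphql`
      fragment Avatar_user on User {
        name
        profilePicture { uri }
      }
    `;

    // ...and the fragments roll up into the single query the dialog fires, so
    // the user is fetched once instead of once per component.
    const dialogQuery = graphql`
      query DialogQuery($id: ID!) {
        user(id: $id) {
          ...Avatar_user
          # ...the other components' fragments spread here too
        }
      }
    `;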
Additionally, batching requests allows you to roll them up into compiled and persisted queries. When a person hovers that button and the client queries the server, Relay only needs to send the id of the query to the backend. This helps with reducing bytes over the wire, the client doesn’t need to build a complex query, and your backend GraphQL server doesn’t have to respond to arbitrary queries which can help with ddos. Batching graphql has huge advantages.
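On the wire, a persisted query request ends up looking something like this (field names vary by setup; this is a made-up example rather than any particular server’s exact format):

    fetch("/graphql", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        doc_id: "DialogQuery:3f2a9c", // id of a query registered at build time
        variables: { id: "1234" },    // no query text goes over the wire
      }),
    });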
We cache GraphQL requests at the edge all of the time. It saves a lot of time and money. It has nothing to do with a database.
If graphql clients can parse a partial JSON response and hydrate a component, then I agree this solves it. However, I am pretty sure clients like apollo client wait for the entire response before parsing. In this latter case, streaming doesn’t do much if one query is taking significantly longer than the other queries. Maybe you make sure your batched queries have relatively uniform response times. I have had to resolve many issues where this was not the case.
You can compile and persist queries without batching too. If you do it with batching, you had better make sure your batches are consistent. I have seen architectures batch on demand which actually prevents the compile + persist benefits.
fwiw Relay actually also supports inverting things and you can defer a portion of the graphql response, making it asynchronously sent down later. Useful with React Suspense.
But we’re really in the weeds of specific use cases. The DEFAULT for anyone that cares about performance should be batched, persisted graphql queries (that start loading in the initial document GET) for anything rendering on page load. Full stop. It’s not an anti-pattern. Anything else is almost always much slower.
It’s funny because as I remember it, React was hyped precisely because it was faster than the alternatives. At the time, that might actually have been true - in frameworks like Marionette.js, Angular and EmberJS, large-scale updates would be very very slow unless you could somehow manually aggregate the updates and calculate what really needed to be re-painted. React promised to solve this problem with its virtual DOM, which meant only things that were actually changed would get repainted, automatically.
Unfortunately, even with React it’s easy to accidentally trigger lots and lots of updates, especially when you use abstractions to help manage state. But it did show the way forward with JSX templating, virtual DOM and the reactive style of programming. Also note that the alternatives like Svelte, Vue and Preact which are mentioned in the original article came out after React.
The original selling point of react was not having a disconnect between the state your code believed the DOM was in, and the state it actually was in. The performance angle was just bragging that they’d managed to get an existing paradigm (call it immediate rather than retained mode for simplicity) working at useable performance atop a retained-mode reality (the DOM) using their v-dom.
Coming from a Rails dev, this sounds like it could have been written about Rails 10 years ago.
Maybe the question is:
What is it exactly that makes programmers feel more productive with these technologies, and what about this tech makes their perf bad? Is it a fundamental trade off between the two or can the gap be bridged?
The way this is written it sounds like the ability to make lots of updates is a productivity gain but also the main cause of perf issues.
If that’s the case it sounds like the answer won’t be found by incremental updates but by a paradigm shift. Which then begs the question: what does that look like, and what’s the least painful way to get there?
Perhaps it’s because an inventor can only really focus on one, maybe two major breakthroughs at a time. React was focused on the virtual DOM and JSX templating, which were already paradigm shifts from the way things were done before. But React was also relatively low-level (not dealing with how to manage state), so I think the tools that were built on top to make it more productive/palatable are the things that made code so slow.
Only when tools start to be used at a different scale, and to make fundamentally different kinds of applications than the original team was working on, do you hit the limits. Even if a tool works perfectly to solve a particular problem a particular team was having, if other teams pick it up they will run into such things. Then the question is - is it worthwhile to try to improve the old tool, or build a new tool that integrates the worthy new ideas while focusing on a paradigm shift of its own?
Or maybe, just maybe, both are fine? I mean, to this day, people are still using Rails, and Rails has improved some of its worst parts while incorporating changes from other frameworks that came after. And there are newer frameworks that take things in entirely different directions, even dropping the approach Rails takes in some aspects.
JSX and vDOM are precisely the things that make React slow. HTML parsing in browsers is insanely fast. Moving it to JS makes it orders of magnitude slower.
I quoted JSX as a paradigm shift relative to the insecurity of string templating (see also this recent post) and the clumsiness of the native APIs (or even jQuery) for building HTML elements programmatically from JS.
The vDOM did make things faster when lots of things were updated - in those cases parsing isn’t the bottleneck, but the repainting of updated elements is. That was the whole point of React.
Why is that? SO has made the lives of so many programmers better, myself included. Surely it’s a good thing that the founders are able to profit by it?
It’s important enough that it shouldn’t be in the hands of a couple people. If it’s a company, it can be bought; if it can be bought, it can be destroyed. Look at freenode.
At least the data/content is licensed and made available in such a way to disincentivise any stewards of the site, current or future, from doing too terrible a job of it.
This problem is not limited to for-profit companies.
https://www.nonprofitissues.com/to-the-point/can-executive-director-stop-hostile-takeover
If something is important it absolutely should be in the hands of people who can profit from it, because otherwise it will be going away.
You’re commenting on a free site about a post someone wrote for free about maintaining software for free for nearly a decade and (my understanding of) your takeaway is that only extraction capitalism can possibly create sustained value.
Meanwhile musk is burning for-profit twitter to the ground and I can’t take my kids to Toys R Us anymore because hedge firms exist.
I think we need more nuance and understanding.
This site exists free of charge and advertising solely through obscurity. Pretending otherwise is disingenuous to say the least.
Every town in the Western world has churches which are non-profit entities. Whether you think they’re good or bad, they definitely exist and aren’t explicitly profit seeking. There are lots of ways to make a sustainable endeavor. The for profit corporation is one, but there are others.
I think you missed my forest for my tree.
Wikipedia? A seemingly close cousin to Stackoverflow.
I’m aware that it exists, and also that it would be the major exception anybody could mention. Stack Overflow is not going to run on the “nag everybody whose details we have for donations until the heat death of the universe” model.
Is your argument that Stackoverflow won’t or that it’s impossible and nobody should try? Because the OP said alternative, arguing against the latter, and the existence of Wikipedia stands to me as a beacon of what’s possible, inviting us to imagine and build better.
Ahhh, I love learning about graph algorithms. (A fun Discrete Math For Programmers class once suckered me into double-majoring in math, until I found myself in a mind-flaying hellscape of tensors and finite fields and had to retreat to CS.)
I am confused about the use of lazy algorithms here, because they don’t work when a dynamic value subscriber has side effects … and in an app the ultimate purpose of reactivity is to update things onscreen. If a reactive value is driving an HTML element, it has to be updated eagerly.
On the lazy algorithms, I think the integration with a UI library calls .set() (like in the lit integration), so the lazy graph gets driven in an eager context, but only the rendered parts of the state graph will be computed, and unused parts of the state graph remain lazily un-computed.
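A toy sketch of that “lazy graph, eager edge” idea (not any particular library’s API - all names here are made up):

    let globalVersion = 0;
    const uiEffects = new Set();

    function signal(initial) {
      let value = initial;
      return {
        get: () => value,
        set(next) {
          value = next;
          globalVersion++;                 // mark the graph dirty...
          uiEffects.forEach((fn) => fn()); // ...and eagerly re-run the UI-facing effects
        },
      };
    }

    function computed(fn) {
      let cached;
      let seenVersion = -1;
      return {
        get() {
          if (seenVersion !== globalVersion) { // recompute lazily, only when read
            cached = fn();
            seenVersion = globalVersion;
          }
          return cached;
        },
      };
    }

    function effect(fn) { uiEffects.add(fn); fn(); }

    const count = signal(1);
    const doubled = computed(() => count.get() * 2);
    const expensive = computed(() => { console.log("heavy work"); return count.get() ** 10; });

    effect(() => console.log("render:", doubled.get())); // "render: 2" - eager at the edge
    count.set(3); // logs "render: 6"; `expensive` is never read, so it never runs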
Anecdotal evidence against “Why Duck Typing Is Safe”: http://www.jerf.org/iri/post/2954
An id property is not really duck typing since it is implementing an identity of some kind. Just not a useful identity in this case.
This is the first time I’ve seen a lawyer argue that a license that differentiates between commercial and non-commercial use is a thing that you can do. The other advice I’ve read suggests that it’s too fraught with corner cases to be allowed. For example:
Splitting the world into commercial and non-commercial activities is normally a problem that lawyers say is too hard.
Well, Creative Commons did it back in the day by adding the NC variants.. in an intentionally flexible way:
In their experience there weren’t many conflicts over the definition. So I guess (like was recently said about engineering) “commercial use is like pornography, I know it when I see it” – and that’s good enough.
In all honesty, the whole software licensing idea is a bit annoying and not as useful as most people think.
I put the MIT license on code I push to github because they force me to use a license. But in good truth, I have no way to enforce it in most cases. Nor would I care about offenses in many cases.
I wish software authors would be less possessive about their code and put the focus on the code itself rather than overhead. I miss the days when one would post code online and whoever wanted would do whatever they wanted with it without bringing the boring licensing discussions to attention. Attribution would naturally occur to an acceptable level given a good community with enough well-intended people.
I also don’t quite agree with the concept of paying for a copy of the software and not being able to do whatever one wants with it, within reasonable limits such as non-usurpation. I understand it is a reality today and perhaps even the most adapted to today’s economy, but it is a practice that should be questionable. Is it really ethically correct? I don’t think so.
For me, licenses are not for the code publishers, but rather the code consumers.
If you publish code without a license, then in my jurisdiction it’s technically copyrighted by default. I’m legally not allowed to use it at all, and open myself up to legal liability if I do. After I make my millions, how do I know you won’t some day take me to court and demand a percentage of that? By putting a license on your code, you’re giving people peace of mind that you’re not gonna turn around and try to sue them later.
Agreed. At work a few years ago I copied-and-pasted a page of useful code from a gist I found on GitHub, including the comment identifying the author, and I added a comment saying where I got it from.
Before our next release, when we had to identify any new open source code we were using, I added a reference to that file. The legal department then became worried that there was no license associated with it. Someone ended up tracking down the author and asking him, and he assured him he had no claim on it and put it in the public domain.
Unfortunately, this attitude only leads to mass exploitation of developers and enrichment of corporate interests.
The world is full of assholes who will take advantage of the free work and good will of others and give nothing back.
The world is also full of useful idiots who will give over their stuff to the aforementioned assholes and then, years later after discovering that you can’t pay rent with Github stars or HN posts, cry and piss and moan about how they were ripped off.
So, yeah, licenses are important.
You can’t “exploit” someone by taking [a copy of] what they’re giving away for free. Free means free.
If you create stuff and don’t charge money for it but have the expectation that people will give you money for it anyway, or at least recompense you somehow… then you are either living in a small traditional village culture, or an anarchist commune. In both of those environments there is such a social contract*. If you’re not, you are indeed an idiot, unless you encumber your software with a license that forces such recompense.
I don’t believe most open source contributors who don’t use copyleft licenses are idiots. I believe they genuinely make their software available for free and don’t expect to see a dime directly from it.
In my case I do so to give back to the world, and because having people use and appreciate what I’ve made makes me feel good, and because it enhances my reputation as a skilled dude to whom my next employer should pay a handsome salary.
* I highly recommend Eric Frank Russells’s 1940s SF story “…And Then There Were None”, about a colony planet that adopts such a society, inspired by Gandhi, and what happens to a militaristic galactic empire starship that rediscovers the planet.
I would argue that you absolutely can if you take something offered freely, make a profit at it, and do not somehow pay that back to the person who helped you out. It’s somewhat worse for many maintainers because there is active pressure, complaining, and hounding to extract still further value out of them.
Not idiots–useful idiots. It’s a different thing.
I think there is for many of us a belief that we give away our software to help out other developers. I think of neat little hacks I’ve shared specifically so other devs don’t ever have to solve those same problems, because they sucked and because I have myself benefited from the work of other devs. This is I would argue an unspoken social compact that many of us have entered into. That would be the “not directly see a dime” you refer to, I think.
Unfortunately, it is obvious that as a class we are not recouping the amount of value we generate. It is even more painful because it’s a choice that a lot of developers–especially web developers, for cultural and historical reasons–sleepwalk through.
Consider Catto and Angry birds, right? Dude wrote Box2D (without which you don’t really get Angry Birds as a physics game) and never saw (as reported anyways) a red cent of the 12BUSD in revenue they booked in 2012. That’s insane, right? There’s no world in which that is just.
(One might argue “@friendlysock, ours is not a just world.” In which case, sure, take all you can and give nothing back, but fucking hell I’m not gonna pretend I don’t find it in equal measure sad and offensive.)
Our colleague I’m responding to is exactly that sort of person that a company, investor, or founder loves–yes, yes, please, don’t think too hard about licenses, just put your work in the public domain! Don’t worry your pretty little head about getting compensated for your work, and most certainly don’t worry about the other developers you put out of a job! Code wants to be free, after all, and don’t fret about what happens to development as a career when everything we need to write has either been written or can be spun whole-cloth by a handful of specialists with the aid of GPT descendants!
I suspect our colleague means well, and lord knows I wish I could just focus on solving neat problems with code, but we can ill afford to ignore certain realities about our industry.
Nah, I’ve published MIT stuff, and my take is - go for it, commercialize the hell out of it, you don’t have to pay me anything.
The point of MIT is to raise the state of the art, to make the solution to a problem universal. That includes corporations. No reciprocity is required: the code being there to be used is the point of releasing it.
I assume the definition your link refers to is “to make use of selfishly or unethically,” because the others don’t fit. But if someone offers you a thing with explicit assurance that you can use it freely without encumbrance (except maybe a token like thanking them in a readme), and you do so, how is that exploitation?
Feudal lords exploited peasants because the peasants had no choice but to work the lord’s lands for subsistence, or leave and starve. That has nothing to do with open source developers. No one is forced or coerced into releasing code freely.
If that’s the social compact you want, then for gods’ sake choose a license that expresses it. Choose an Old Testament eye-for-an-eye license (GPL) not a New Testament “turn the other cheek” license (MIT et al).
Dude, you and I are in the same class. I’m sure we have comparable skill sets. I went to work for The Man after school, and in exchange for writing what The Man wants all day, I make good $$$. I don’t know what you do exactly, but if you aren’t getting paid for code then I guess you’re either working at something you like better and coding as a hobby, or you aren’t tied to the capitalist treadmill at all and get to code whatever you choose all day; I don’t know. But you probably have your compensations.
I do know that it is super unlikely that there is a class of impoverished coders out there unable to find good paying jobs. Tech companies like the one I work for are desperate for talent. In the ten years I’ve been at this job I have witnessed how effin’ hard it is to find good programmers. Most of the ones the recruiters turn up are deeply mediocre, and we give up and hire the best of a mixed bunch. We have gone to great lengths like filing H1-b visas and dealing with six months or more of government bureaucracy hell, just to get one mostly-competent programmer from a distant country. In fact most of the people we hire are from outside the US, because seemingly all the local developers know nothing except React or whatever trendy web junk is popular these days … not the CS fundamentals we need.
In a crazy seller’s-market for coding skills like this, I refuse to listen to Marxist arguments about exploitation of the working classes. That is not the world I have seen in my 40 years in this industry.
Well, that’s exactly the point of the article. If you don’t want to be exploited, don’t use MIT, but instead use this or this license.
I think you overestimate how common those “idiots” are (I disagree that the world is “full” of them as snej explains in the sibling comment), maybe due to the occasional cases that get a lot of attention, and I think you underestimate how a spirit of giving can benefit the commons, for genuine non-financialized benefit to the giver and others. Copyleft hasn’t solved the domination problem, and with AI-(re)written code being a likely dominant future force, I won’t be surprised to see license relevance decline. There’s other approaches to the world’s problems than licenses, and maybe in some cases restrictive licenses trap us in local minima.
I feel similarly, in terms of over-focusing on licenses, and I don’t care what the not-well-intentioned people do with most of the code I put online; not that I would never speak out but life’s too short and I’d rather focus on other ways to convey my values and have a positive impact. (this isn’t a statement against other people using copyleft or non-commercial, I still consider using them in some cases) Two licenses that might fit those goals better than MIT are the public domain Unlicense and WTFPL.
With the future looking like it’ll be full of AI-assisted code trained on every open codebase, we need solutions other than licenses more than ever. “Computer, generate me a program in Zig that passes the LLVM test suite in the style of Fabrice Bellard.”
Also the Zero-clause BSD License (0BSD).
Problem with some licenses like the Unlicense is that not all jurisdictions allow you to voluntarily place your work under public domain, so in such jurisdictions that license is void.
Thanks for pointing that out, do you know of the best alternative? The Unlicense Wikipedia page says the FSF recommends CC0 instead.
From Wikipedia on CC0:
The Unlicense also intended to do exactly that. The “Anyone is free…” and the “AS IS” paragraphs are the fallback.
While the FSF recommends the CC0 for non-software content, they do not recommend it for software. The OSI has similar concerns.
Jim Weirich (author of rake, rest in peace) used the MIT license for most of his work but a few smaller projects used this simple license:
It’s important to grant at least some license, otherwise (as I understand it) in the US you do not have any rights to make copies of the work unless you are the copyright holder or are granted a license. There is a lot of old software in the world where the author has passed away or otherwise moved on, without ever granting an explicit license, leaving the software to sit unused until the copyright expires.
(I am not a lawyer and this is not legal advice)
What happens if you copy paste a 20 line script from a blog and include it in the project of a product you make in the context of a private company of yours which doesn’t publish its code?
It’s not like the open source police will read all your source files and search line by line to try to find it out there on the web. If anything, most companies have a ton of low quality code that no one wants to look at.
I think you are making the point that a license does not in practice restrict someone from using your code under terms not granted by the license; I agree.
You wrote that you wished “software authors would be less possessive about their code and put the focus on the code itself rather than overhead”. I also agree with that sentiment, but I do not believe that implies publishing code “without bringing the boring licensing discussions to attention” (which I interpreted as “without a license”) is the best path to putting the focus on the code.
The most common thing that I see is a pair of products. Product Community Edition is MIT or BSD or AGPL or, occasionally, GPL, and comes with a git repo and a mailing list, and a refusal to take patches unless accompanied by an IP transfer. It’s always free.
Product Business Edition or Enterprise Edition is licensed on commercial terms and includes at least one major feature that businesses feel is a must-have checkbox item, and some amount of support.
I used to see a bunch of open source products where the main (usually sole) dev sold a phone app that went with the product, in order to raise some money. That seems less popular these days.
As you and I have discussed here before, it is quite reasonable to talk about Free Software licenses which are effectively non-commercial. The licenses I enumerated at that time are uniform in how they would answer your questions: yes, all of those things are allowed, but some might be unpalatable to employers. Pleasingly, on your third point, a company would be afraid to try to use Free Software provided under these licenses, even for purposes outside their charter of commerce.
I got something slightly different from reading the post; it’s not “you can differentiate between commercial and non-commercial” in a license; it’s “if you want to differentiate between commercial and non commercial then don’t dual-license using the MIT license because that creates ambiguity”.
Just to be pedantic, it doesn’t create ambiguity. MIT pretty much lets anyone use it, where your intention was probably not that. Therefore, the issue isn’t ambiguity, it’s redundancy.
I don’t see why one couldn’t write a software license that differentiates between commercial and non-commercial use, using whatever criteria the license writer wants for edge cases. That will probably end up not being a free software license - a license that attempts to judge what kinds of uses of software count as “commercial” and legally forbid them limits user freedom to use that software in a way incompatible with the notion of free software - and this will affect free software advocates’ willingness to use software licensed under such terms. But there are plenty of non-free software licenses in this world, what’s one more?
The piece ends with:
As deflating as it might be, I agree with this, with perhaps a more abstract interpretation of “regulation” than many readers. We may jump to thinking about state control and other top-down reactive power, but that’s only one form, and I’m curious about the other possibilities.
For example we can imagine democratic organizations that are sufficiently self-regulating, especially if the demos is expanded to include users, as in platform cooperatives. In this scenario, regulation itself can be more decentralized with thoroughly aligned incentives and ongoing democratic negotiation and accountability, which, as the author says, is politics – not always energizing to would-be creators of these systems! But happily there are plenty of good-hearted wonks and negotiators among us.
It’s almost like Google is acknowledging that a browser isn’t the best place for non-trivial applications.
I have yet to encounter any GUI API (web or otherwise) that does digital typography decently. Does anyone know of one that, for example, supports the notion of a baseline grid? See also: font size is useless; let’s fix it.
Because digital typography for the masses is a bad idea. The whole concept of showing a paper page on a screen as a canvas (no pun intended) and using typographic elements as your artist’s brush is intricate per se.
I think the average Joe would be better served with something along the lines of markdown, if only it was what they first had exposure to. WYSIWYG editors have this aura of being simple and direct but their complexity explodes in your face after less than a handful of elements.
what exactly do you mean by GUI API?
I’m using “API” as an umbrella term. Over the years I’ve played with a variety of tools, sometimes called “APIs” or “SDKs” or “toolkits” or, in the case of the web, an amalgam of standards… which include APIs. Whatever you call them, I’m thinking of tools developers use to build software applications with graphical user interfaces (GUI). Here are some examples of what I mean:
There are others that I’m curious about but am less familiar with (SwiftUI comes to mind). I’m genuinely curious to know if any of them give developers the means to lay out text using principles that have been established in the graphic design world for almost a hundred years now by luminaries such as Robert Bringhurst or Josef Müller-Brockmann. All of the tools I’ve used seem to treat typography as an afterthought.
I think that’s overly pessimistic. The specific problem here is trying to embed one document layout system in another. Few apps need to customize the specifics of e.g. text layout to nearly the same extent as Google Docs.
And though I empathize with the idea that it needn’t be this way, I haven’t found many better systems for application distribution than the web. Though maybe I really do just need to sell my soul to QT.
Your argument about “I haven’t found many better systems for application distribution than the web” is somewhat defeated by the very nature of web browsers.
Google distributes an application to multiple platforms with regular automated updates. It’s called Chrome. It’s a POS memory hogging privacy abusing whore of satan, but that’s not really related to it being native or not - Google manages to push those qualities into browser based ‘apps’ too.
But instead of knocking a few layers off the stack and starting again from something akin to the webrender part of what would have been Servo, they’re just re-inventing a lower layer on top of the tower of poop that is the DOM.
Not exactly. From the source article:
Yeah, that’s weird. Was the whole article a big joke?
I thought there were a few strange claims and editorial choices in it, but overall it seems like a good summary.
Neat. One thing to note about the <details> tag is that all of its contents will be rendered in the DOM, even if not displayed - which is what you’d expect. But often, for various reasons, an implementation will want the contents lazily rendered, only mounted when the details are expanded, and the built-in tag doesn’t offer this.
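A sketch of the lazy-mount workaround (the selector and the content builder are made up):

    const renderHeavyContent = () => "<ul><!-- hundreds of rows… --></ul>"; // stand-in
    const details = document.querySelector("details.lazy");

    details.addEventListener("toggle", () => {
      if (details.open && !details.dataset.loaded) {
        details.dataset.loaded = "true"; // only build the contents on first expand
        details.querySelector(".body").innerHTML = renderHeavyContent();
      }
    });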
There’s actually a really great benefit here that isn’t talked about as much as I think it should be: search. Lazy rendering of the DOM renders the built-in search of the browser kinda useless. The contents of <dialog> being searchable means I can search through menus and more without requiring additional JS search.
Good point, plain text should normally be eagerly rendered because of this - lazy loading is good for heavier resources like images, and sometimes components for behavioral reasons.
I still feel like I’m stuck in mid 201X regarding my frontend skills. To be honest I just don’t want to invest the amount of time it’ll take to get fluent in a “modern” stack, so some basic vue/react with bootstrap it is. And the moment you’ve started doing something for frontend, it’ll already feel like you’re outdated and behind (svelte)..
Dude, don’t worry about it, that stack is 100% fine, and not outdated. Getting sucked into FOMO about JS frameworks and tooling is a total trap. If you’re not a full-time frontend engineer, use whatever gets the job done.
Once you feel like learning something new would be a fun way to spend two weekends, go for it.
Tailwind is awesome for example, but there’s not that much to it. It’s just some nice atomic utility classes, but that means you build all the component styling yourself (buttons, cards, …) instead of using the ready-made bootstrap abstraction.
Wow didn’t fully read your comment and just now noticed we both mentioned tailwind! I’m so addicted to it!
I agree, it somehow hits exactly the right level of abstraction. For me it nudges me into some possibilities I would never have tried with bootstrap or pure CSS.
For example last year I had some really opinionated vision for a travel planner UI that would have been completely boring and bad with just prefabricated components: https://gllvr.com. I’m sure my implementation is still kind of rubbish for a lot of use cases, but I couldn’t even imagine doing it with bootstrap/bulma, etc.
I really like the UI on that travel planner.
I’m sure it has more to do with the way my background has warped my mind than with anything inherent to either approach, but I found it easier to build buttons/cards/etc with these utilities than I did to get the ready-made ones to look/work the way I wanted them to.
I would have found it devilishly hard to get that striped component in your planner (where you click on the left side to type in where you’ll be sleeping or click on the right side to copy the previous night’s location) to be anything like what you made in bootstrap. I do suspect there are people out there who wouldn’t find it so, though.
That’s a great UI!
I agree, tailwind makes me more likely to experiment and try new things too.
With bootstrap you’re too often locked in to how a certain component works, and it’s really hard to change the way components behave.
It has given me a second wind with frontend stuff, and I’m actually enjoying making websites again!
Vue, React, Angular, Svelte, and most frontend frameworks since React and Angular 2, are modern UI component frameworks. Once you understand components deeply, the learning curve for any of the others is much shorter. Svelte may be well designed and easier for beginners than most, but part of why people report picking it up in an afternoon is that they already understand components.
The details differ between frameworks, especially with the direction React has gone with hooks, but despite the proliferation of frameworks in recent years, there’s been a substantial convergence on components since the days of Backbone, Angular 1, jQuery, Knockout, and all of the other not-quite-component frameworks. They may have been fewer in number back then, but the fundamentals varied widely across tools. The situation today is much more approachable despite the memes.
I find React quite horrible if you want to use or build something that doesn’t already exist as a library for it. (Also don’t get me started on the number of packages and reported CVEs in a hello world..)
Really? I generally don’t use any React-specific libraries, and React itself has few or no dependencies as far as I know (I use Preact most of the time, so I’m not sure of React’s current state). Are you talking about create-react-app? I’ve never used it myself; it seems totally unnecessary.
I’ve been using bootstrap for years, and I loved it but some things just didn’t feel quite right.
I’ve recently switched to tailwindcss and it has made me so happy. Doing anything is just better and feels more fun. Also you don’t end up with loads of custom CSS.
If you switch away from bootstrap I can almost guarantee your life will be better :)
This is the post that finally changed my mind:
https://adamwathan.me/css-utility-classes-and-separation-of-concerns/
EDIT: tailwind is really easy to learn if you’re worried about that. Also, the documentation is amazing
That post, plus about 20 minutes with this tutorial persuaded me that I was interested in giving tailwind a real try.
I found that having one workspace with two browsers tiled next to each other, one with my site and one with the tailwind docs, and a second with the code open, made it really fast and enjoyable to try things out. The search on the tailwind documentation is especially good, and the live updates that come with svelte running in dev mode are very snappy.
It’s actually pretty high on my list to dig in and see just how those live updates work. There are a couple of spots in my own code where I could use a similar mechanism to very good effect, assuming it’s not leaning on some heavy mechanism that gets shaken out in the production builds.
I was stuck with jinja + a little jquery for my front end. So state of the art 2008? It was starting to slow my ability to explore some ideas in a way I wanted to share. I don’t think I’d have been motivated to spend 30 hours exploring this for a couple of weeks if I had a good grasp of vue and bootstrap.
The feedback from changing something on the server in dev mode to seeing it in the client is so much faster than it was when I was writing jinja templates and sending them in response to form submissions. That’s one of the major improvements of moving for me, and I think vue/react/bootstrap would’ve gotten me that also.
This stack just lined up with my mental model a little better, which meant a lot as I was trying to learn a pile of “new” all at once. Tailwind’s utility structure combined with the way styles are scoped by default in svelte made it easier for me to think about my UI layout than it ever has been for web stuff.
Has anyone used Svelte and can give a small comparison between this and the other popular frameworks right now? (Vue and React I guess?)
I’m making a small web interface and I think it could use some… interactivity; I tried with React because that seems like the most popular and the best thing to put in my CV but it’s been confusing so far.
Svelte code usually needs fewer characters to accomplish the same thing and its output is typically smaller and faster. (not always, and Vue may win some benchmarks nowadays) On the more subjective side of things, I find there’s less abstraction and a simpler mental model for working with components and the DOM. There’s a lot of power and flexibility behind its apparently simple syntax and features - animations in particular are nice coming from React - but it makes some tradeoffs to prefer statically analyzable constructs over unfettered runtime flexibility. (e.g. things like mapping over and introspecting props and slots are unofficial hacky APIs at the moment) In practice I haven’t been hampered but YMMV. Some open issues might fill in these gaps.
It’s closer than most frameworks to working directly with plain JS/HTML/CSS, and it’s sometimes called an unframework because of how it compiles its components to simple JS with imported helpers, not too different from optimal hand-written vanilla JS, but with modern declarative components. I wrote this fairly comprehensive overview with React and Vue in mind a few months ago.
I agree React and Vue are both better for your CV. Svelte might stand out for some people.
The official tutorial walks you through a lot of the basics. An hour or two with it and the examples should give you a good taste for it.
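If you want a quick taste of the syntax before committing, a counter component is roughly this (standard Svelte 3, written from memory, so treat it as a sketch):

    <script>
      let count = 0; // plain assignable state, no hooks or setState
    </script>

    <button on:click={() => count += 1}>
      Clicked {count} {count === 1 ? 'time' : 'times'}
    </button>

Reassigning count is all the compiler needs to see to wire up the DOM update.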
Wow, that’s a new level of wtf. I wouldn’t chalk that particular error up to Node - mislabeling source files is pretty much a “developer is gravely mistaken” error.
The more I think about it, I wonder if the github author first took a node project skeleton then dumped or rewrote a section in typescript and didn’t update the README.
A mislabeled typescript file put erroneously into a stale project skeleton is hardly a fair experience, except for “how a developer with a stale readme can cause pain”.
My guess is that it was Flow rather than TypeScript. They have very similar syntax for type declarations, but Flow was mostly written with a .js extension, whereas TypeScript will issue a compile error if types are embedded into a .js file using the typical syntax. (You can use TS with types in a .js file, but only in comments.)
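Roughly the difference (a made-up function, just to show the two styles side by side):

    // add.js with Flow: annotations live directly in the .js file and are
    // stripped out by the Flow/Babel toolchain.
    // @flow
    function add(a: number, b: number): number {
      return a + b;
    }

    // add.js checked by TypeScript: the inline syntax above is a compile error
    // in a .js file, so the types have to go in JSDoc comments instead.
    /**
     * @param {number} a
     * @param {number} b
     * @returns {number}
     */
    function add(a, b) {
      return a + b;
    }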
That still merits a WTF from me. Why do the Flow developers think it’s okay to appropriate the extension of a different, closely-related file type?!
Because the ambition was to have Flow as a superset of JavaScript, which would allow teams to gradually introduce static typing to their existing codebase.
I’m not going to pass judgment on whether or not I think this is a good idea. However, this was devised by some of the highest-paid programmers in the world (since they work[ed] at Facebook), which I think validates the points about the JavaScript ecosystem that Lea Verou was making in her article, and the points that I’ve made elsewhere in this thread.
You’re right to be horrified, but it’s not the developer’s fault, I’d bet. The reason it has a
.js
extension (if it’s TS and not Flow) is likely the difficulty they had in configuring their build toolchain, which is a constant thorn in the side of all of us JS devs, with everything constantly shifting underneath us. And almost every single npm package in existence is filled to the brim with content that has no business being in the built + published artefact. npm the tool makes publishing the correct content incredibly difficult if you’re doing any sort of build-time tooling at all.
So AFAICT this is the tradeoff the author consciously rejects, and the one that Svelte consciously chooses:
The disadvantages of this tradeoff that weigh on the author’s mind:
I think it’s interesting to look at how Elm solved these, because like Svelte, it is compiled ahead of time to small and fast JavaScript that doesn’t resemble the source code.
Elm’s solution to ‘you have to choose between debugging the runtime JS or the source code’ is to go all-in on making it easy to debug the source code. In Elm’s case, it is an ML-family language with a type system that guarantees zero runtime errors (but won’t save you from domain mistakes, obv.), and with compilation error messages that are so helpful that they have inspired many other languages.
Svelte, presumably, wants to remain JavaScript, so a lot of error prevention becomes harder. They mentioned they want to add TypeScript support. Or they could add source maps that relate the compiled JS to the original Svelte code? Also, debugging compiled projects is a very old craft; it only really gets onerous if the problem is low-level or compilation is slow. I also note that Svelte compilation has a ‘dev’ flag that produces named functions, plus extra code that performs runtime checks and provides debugging info.
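For what it’s worth, driving the compiler directly looks roughly like this (a sketch against the Svelte 3 compiler API; the component source is made up):

    const { compile } = require('svelte/compiler');

    const source = `
      <script>let name = 'world';</script>
      <h1>Hello {name}!</h1>
    `;

    const { js, css } = compile(source, {
      filename: 'Hello.svelte',
      dev: true, // named functions, extra runtime checks, better debugging info
    });

    console.log(js.code); // the compiled component
    console.log(js.map);  // source map pointing back at Hello.svelte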
Elm’s solution to the interoperation problem: an Elm module can expose ports (blog post, docs) that external JS can send messages into, or that JS can subscribe to. So the ports form an Elm module’s public API.
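From the JS side that looks roughly like this (the port names here are hypothetical):

    // Elm side (hypothetical):
    //   port toElm   : (String -> msg) -> Sub msg   -- JS -> Elm
    //   port fromElm : String -> Cmd msg            -- Elm -> JS
    const app = Elm.Main.init({ node: document.getElementById('app') });

    app.ports.toElm.send('hello from JS');      // push a message into Elm
    app.ports.fromElm.subscribe((msg) => {      // listen for messages from Elm
      console.log('Elm says:', msg);
    });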
That still leaves the interop problem of styling the created components. If it’s you writing the Svelte, you can let Svelte do the styling (if Svelte is the whole system), or specify the right class names on the created DOM (if you’re writing the Svelte component as a subsystem). But if you’re reusing somebody else’s Svelte component, I’m not sure how easy it is to pass in the class names you’d like the component to use. Perhaps ‘support for caller-specified class names’ is even an open problem / blind spot in frontend frameworks in general?
Good summary.
In one sense, all of the hard-fought web knowledge that people may subconsciously pride themselves on knowing now becomes a stumbling block. Technologies like Svelte treat the underlying web runtime as something to be papered over and ignored. Much like with compiled C, being able to debug the generated code is still a needed skill, but a 1-to-1 correspondence between source code and generated code is no longer guaranteed, and it can be disconcerting to let go of that.
I’m all for it. We largely ignore x86/x64 by using higher level languages and our code is better for it, even if slightly inefficient.
Web devs love to talk of developer experience and progress in tooling. Something something…Cambrian explosion? ;)
I think the author’s problem isn’t so much with it being compiled, but the fact that the source code looks like JS, but your assumptions don’t hold because there’s a lot happening to that JS so the end result isn’t anything like what you typed.
Reminds me very much of the Duck Test https://en.m.wikipedia.org/wiki/Duck_test. Svelte walks like JS and talks like JS but isn’t JS. This is typically seen as a positive for those who judge their tools, at least partly, based on familiarity.
Yes, I agree. Elm is a language that has its own semantics which are adhered to by its compiler. But Svelte takes the semantics of an existing language (JS) and changes them.
I have that concern about Svelte too, though it’s not strong enough to change the fact that I’m still a fan and excited to see how Svelte evolves.
That makes the article make more sense. That would be difficult to reckon with.
Svelte creates JS and CSS source maps on compilation - https://svelte.dev/docs#svelte_compile
There’s also the
@debug
helper for templates - https://svelte.dev/docs#debug
In practice I’ve found debugging Svelte to be mostly trivial and sometimes difficult. Dev tools will help close the gap but they’re not mature.
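Used inside a component, @debug is just (a trivial sketch):

    <script>
      export let user;
    </script>

    <!-- logs `user` and, with devtools open, pauses the debugger whenever it changes -->
    {@debug user}

    <h1>Hello {user.name}</h1>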
For styling, style encapsulation is what I’ve seen the devs recommend, but nothing stops you from passing classes as props to components that accept them. (I do that a lot because I like utility CSS libraries like Tailwind) The biggest open RFC right now is about passing CSS custom properties (CSS vars) to components. https://github.com/sveltejs/rfcs/pull/13
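A rough sketch of that classes-as-props pattern (hypothetical component, Tailwind-ish classes):

    <!-- Button.svelte: caller-supplied classes come in as an ordinary prop -->
    <script>
      export let extraClass = '';
    </script>

    <button class="px-3 py-1 rounded {extraClass}">
      <slot />
    </button>

    <!-- consumer (after importing Button) -->
    <Button extraClass="bg-red-600 text-white">Delete</Button>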
I think the tradeoff here isn’t really about source mapping and the like, but rather that if you take it as given that you’re going to compile your language, you might as well throw more language safety features in (a la Elm).
That might be true, but the other sacrifice is familiarity. Svelte can be learned by a frontend dev very quickly and without much “relearning” fear. Instead, you get the cognitive dissonance problem of it being almost what you expect but, then, not quite.
There’s a big leap from Svelte to Elm beyond just compiling the language. Elm has tremendous benefits, definitely, but it gives up seamless interop with the DOM, mutable web APIs, JS libraries, and future web standards. Elm has JS interop but only asynchronously through its ports. Purity and soundness are great but at what cost? (that’s a rhetorical holy war question, not worth discussing here IMO!)
I think TypeScript has been gaining so much adoption largely because it makes pragmatic compromises everywhere, not just because people are resistant to learning Elm/Reason/PureScript/Haskell/etc, and when support lands in Svelte I’ll be less shy about recommending it to people.
Yeah, I think for most people in most cases, familiarity and ease are the bigger win. I’m not arguing one should use Elm, just laying out that continuum.
Thanks for emphasizing that point. I think I underestimate the impact of familiarity and ease for many people.
By seamless interop do you mean synchronous? I started trying out Svelte yesterday and found the dom interop to not be seamless as I was confused by the difference between
<input value={val} />
and
<input on:value={val} />
I think from memory that’s how you get an interactive input.
I meant seamless but that’s overstating it. (except for mutable web APIs and JS libraries - interop there is generally seamless because of how Svelte extends JS) Anything that isn’t plain HTML/CSS/JS is going to have seams. Svelte minimizes them to a degree that some other frameworks don’t, like React and especially Elm. Vue is on a similar footing to Svelte.
The nice thing about Svelte’s seams is they often reduce the complexity and verbosity of interacting with the DOM. In your example, it sounds like you want:
<input bind:value={val} />
(or simply
<input bind:value />
if that’s the name)
At the same time, Svelte gives you the flexibility to optionally use a “controlled input” like in React:
<input value={val} on:input={updateValue} />
The equivalent in plain HTML/JS is not as pleasant. Elm abstracts away the DOM element and events.
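For comparison, keeping state and the DOM in sync by hand is something like this (a rough sketch, hypothetical id):

    <input id="name" />
    <script>
      let val = '';
      const input = document.getElementById('name');

      input.addEventListener('input', (event) => {
        val = event.target.value;   // DOM -> state
      });

      function setVal(next) {
        val = next;
        input.value = next;         // state -> DOM, has to be done by hand
      }
    </script>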
Author here, just wanted to make a note. This isn’t written to hype a battle in the holy war. Frontend frameworks are a positive sum game! Svelte has no monopoly on the compiler paradigm either. Just like I think React is worth learning for the mental model it imparts, where UI is a (pure) function of state, I think the frontend framework-as-compiler paradigm is worth understanding. We’re going to see a lot more of it because the tradeoffs are fantastic, to where it’ll be a boring talking point before we know it.
Thanks for this. It’s refreshing to hear a grounded perspective when it comes to frontend technologies. Now I should actually read the original article…