The lesson here sounds more like “bad protocols will make your client/server system slow and clumsy”, not “move all of your system’s code to the server.” The OP even acknowledges that GraphQL would have helped a lot. (Or alternatively something like CouchDB’s map/reduce query API.)
I don’t really get the desire to avoid doing work on the client side. Your system includes a lot of generally-quite-fast CPUs provided for free by users, and the number of these scales 1:1 with the number of users. Why not offload work onto them from your limited and costly servers? Obviously you’re already using them for rendering, but you can move a lot of app logic there too.
I’m guessing that the importance of network protocol/API design has been underappreciated by web devs. REST is great architecturally but if you use it as a cookie-cutter approach it’s non-optimal for app use. GraphQL seems a big improvement.
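To make that concrete, here’s a rough sketch of the kind of single-round-trip query GraphQL enables, where the client asks for exactly the fields it needs instead of stitching together several REST calls. (The endpoint, schema, and field names are made up for illustration.)

```javascript
// Hypothetical schema: one request returns a post plus its author and
// recent comments, instead of /posts/42, /users/7, /posts/42/comments, ...
const query = `
  query PostPage($id: ID!) {
    post(id: $id) {
      title
      body
      author { name }
      comments(first: 20) { body }
    }
  }
`;

async function loadPostPage(id) {
  const res = await fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data } = await res.json();
  return data.post; // only the fields the client asked for
}
```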
Your system includes a lot of generally-quite-fast CPUs provided for free by users
Yes, and if every site I’m visiting assumes that, then pretty quickly, I no longer have quite-fast CPUs to provide for free, as my laptop is slowly turning to slag due to the heat.
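Um, no. How many pages are you rendering simultaneously?
I usually have over 100 tabs open at any one time, so a lot.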
If your browser actually keeps all those tabs live and running, and those pages are using CPU cycles while idling in the background and the browser doesn’t throttle them, I can’t help you… ¯\_(ツ)_/¯
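(Me, I use Safari.)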
Yes, but assuming three monitors you likely have three or four windows open. That’s four active tabs; Chrome puts the rest of them to sleep.
And even if you only use apps like the one from the article, and not the well-developed ones like the comment above suggests, it’s maybe five of them at the same time. And you’re probably not clicking frantically all over them at once.
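All I know is that when my computer slows to a crawl the fix that usually works is to go through and close a bunch of Firefox tabs and windows.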
There is often one specific tab which for some reason is doing background work and ends up eating a lot of resources. When I find that one tab and close it my system goes back to normal. Like @zladuric says, browsers these days don’t let inactive tabs munch resources.
I don’t really get the desire to avoid doing work on the client side.
My understanding is that it’s the desire to avoid some work entirely. If you chop up the processing so that the client can do part of it, that carries its own overhead. How do you feel about this list?
Building a page server-side:
Server: Receive page request
Server: Query db
Server: Render template
Server: Send page
Client: Receive page, render HTML
Building a page client-side:
Server: Receive page request
Server: Send page (assuming JS is in-page. If it isn’t, add ‘client requests & server sends the JS’ to this list.)
Client: Receive page, render HTML (skeleton), interpret JS
Client: Request data
Server: Receive data request, query db
Server: Serialize data (usu. to JSON)
Server: Send data
Client: Receive data, deserialize data
Client: Build HTML
Client: Render HTML (content)
Compare the paper Scalability! But at what COST?, which found that the overhead of many parallel processing systems gave them a high COST (“Configuration that Outperforms a Single Thread”).
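As a rough sketch of the two flows, in Express-flavored pseudocode (the app, db, and renderTemplate helpers are invented for illustration):

```javascript
// --- Server-side rendering: one request, HTML comes back ready to display ---
app.get("/items", async (req, res) => {
  const items = await db.query("SELECT * FROM items"); // query db
  res.send(renderTemplate("items.html", { items }));   // render template, send page
});

// --- Client-side rendering: ship a skeleton + JS, then fetch data separately ---
app.get("/api/items", async (req, res) => {
  const items = await db.query("SELECT * FROM items"); // query db
  res.json(items);                                     // serialize to JSON, send data
});

// In the browser, after the skeleton page and its JS have loaded:
async function renderItems() {
  const items = await (await fetch("/api/items")).json(); // request + deserialize data
  document.querySelector("#list").innerHTML =
    items.map((item) => `<li>${item.name}</li>`).join(""); // build + render HTML
}
```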
That’s an accurate list… for the first load! One attraction of doing a lot more client-side is that after the first load, the server has the same list of actions for everything you might want to do, while the client side looks more like:
fetch some data
deserialize it
do an in-place rerender, often much smaller than a full page load
(Edit: on rereading your post, your summary actually covers all requests, but it misses how the request, the response, and the client-side rerender can be much smaller this way. But credit where due!)
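For example, a follow-up interaction might only move a few bytes and touch one element rather than re-rendering a whole page. A minimal sketch (the endpoint and element are made up):

```javascript
// After the initial load, updating one widget needs only a tiny JSON
// response and an in-place DOM update, not a new page.
async function refreshUnreadCount() {
  const res = await fetch("/api/unread-count");           // fetch some data
  const { count } = await res.json();                     // deserialize it
  document.querySelector("#unread").textContent = count;  // in-place rerender
}
```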
That’s not even getting at how much easier it is to do slick transitions or to maintain application state correctly across page transitions. Client side JS state management takes a lot of crap and people claim solutions like these are simpler but… in practice many of the sites which use them have very annoying client side state weirdness because it’s actually hard to keep things in sync unless you do the full page reload. (Looking at you, GitHub.)
When I’m browsing on mobile devices I rarely spend enough time on any single site for the performance benefits of a heavy initial load to kick in.
Most of my visits are one page long - so I often end up loading heavy SPAs when a lighter single page, optimized to load fast from an uncached, blank state, would have served me much better.
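I would acknowledge that this is possible.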
But that’s almost exactly what the top comment said. People use the framework of the day for a blog. Not flattening it, or remixing it, or whatever.
SPAs that I use are things like Twitter, where the tab is likely always there.
(And on desktop I have those CPU cores.)
It’s like saying, I only ride on trains to work, and they’re always crowded, so trains are bad. Don’t use trains if your work is 10 minutes away.
But as I said, I acknowledge that people are building apps where they should be building sites. And we suffer as a result.
What still irks me the most are sites with a ton of JavaScript. So the page is server-rendered, but it still ships a bunch of client-side JavaScript that’s unused, or that’s just there to load images or ads or something.
You’re ignoring a bunch of constant factors. The amount of rendering to create a small change on the page is vastly smaller than that to render a whole new page. The most optimal approach is to send only the necessary data over the network to create an incremental change. That’s how native client/server apps work.
In theory yes, but if in practice the “most optimal approach” requires megabytes of JS code to be transmitted, parsed, and executed through 4 levels of interpreters, culminating in JIT compiling the code to native machine code, all the while performing millions of checks to make sure that this complex system doesn’t result in a weird machine that can take over your computer, then maybe sending a “whole new page” consisting of 200 kb of static HTML upon submitting a form would be more optimal.
In theory yes, but if in practice the “most optimal approach” requires megabytes of JS code to be transmitted, parsed, and executed through 4 levels of interpreters, culminating in JIT compiling the code to native machine code, all the while performing millions of checks to make sure that this complex system doesn’t result in a weird machine that can take over your computer
This is hyperbole. Sending a “whole new page” of 200 kb of static HTML has your userspace program block on the kernel as bytes are written into a socket buffer, the kernel hands those bytes to the NIC, the NIC puts them on the wire as packets, and on the other end the app waits until the OS notifies it that there’s data to read, and on and on. I can write a description like this for anything running on a non-embedded computer made in the last decade.
Going into detail for dramatic effect doesn’t engage with the original argument, nor does it elucidate the situation. Client-side rendering makes you pay a one-time cost (more CPU time and potentially more network bandwidth up front) in exchange for lower incremental CPU and bandwidth afterwards. That’s all. Making the tradeoff wisely is what matters. If I’m loading a huge Reddit or HN thread, for example, it might make more sense to load some JS on the page and have it adaptively load comments as I scroll or request more content. I’ve fetched large threads on these sites from their APIs before, and they can get as large as 3-4 MB when rendered as a static HTML page. Grab four of these threads and you’re looking at 12-16 MB. If I pay a bit more on page load, I can end up transferring a lot less data overall through adaptive content fetching.
If, on the other hand, I’m viewing a small thread with a few comments, then there’s no point paying that cost. Weighing this tradeoff is key. On a mostly-text blog where you’re generating kilobytes of content, client-side rendering is probably silly and adds more complexity, CPU, and bandwidth for little gain. If I’m viewing a Jupyter-style notebook with many plots, it probably makes more sense for me to choose which pieces of content I fetch, rather than pulling down multiple MB up front. Most cases will probably fall between these two.
Exploring the tradeoffs in this space (full React-style SPA, HTMX, full SSR) can help you come to a clean solution for your use case.
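A sketch of what that adaptive fetching might look like (the endpoint, page size, and element IDs are invented):

```javascript
// Load comments in pages as the user scrolls, instead of shipping the
// entire thread as one huge HTML document up front.
let cursor = null;

async function loadMoreComments() {
  const url = "/api/comments?limit=50" + (cursor ? `&after=${cursor}` : "");
  const { comments, nextCursor } = await (await fetch(url)).json();
  const list = document.querySelector("#comments");
  for (const c of comments) {
    const li = document.createElement("li");
    li.textContent = c.body;
    list.appendChild(li);
  }
  cursor = nextCursor;
}

// Fetch the next page whenever a sentinel near the bottom scrolls into view.
new IntersectionObserver((entries) => {
  if (entries.some((e) => e.isIntersecting)) loadMoreComments();
}).observe(document.querySelector("#load-more-sentinel"));
```

I was talking about the additional overhead required to achieve “sending only the necessary data over the network”.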
I don’t really get the desire to avoid doing work on the client side.
My impression is that it is largely (1) to avoid the JavaScript ecosystem and/or* (2) to avoid splitting app logic in half/duplicating app logic. Ultimately, your validation needs to exist on the server too, because you can’t trust clients. As a rule of thumb, SSR then makes more sense when you have lower interactivity and not much more logic than validation. CSR makes sense when you have high interactivity and substantial app logic beyond validation.
But I’m a thoroughly backend guy so take everything that I say with a grain of salt.
Edit: added a /or. Thought about making the change right after I posted the comment, but was lazy.
(2) avoid splitting app logic in half/duplicating app logic.
This is really the core issue.
For a small team, a SPA increases the amount of work because you have a backend with whatever models and then the frontend has to connect to that backend and redo the models in a way that makes sense to it. GraphQL is an attempt to cut down on how much work this is, but it’s always going to be some amount of work compared to just creating a context dictionary in your controller that you pass to the HTML renderer.
However, for a team that is big enough to have separate frontend and backend teams, using a SPA decreases the amount of communication necessary between the frontend and backend teams (especially if using GraphQL), so even though there’s more work overall, it can be done at a higher throughput since there’s less stalling during cross team communication.
There’s a problem with MPAs: they end up duplicating logic whenever something can be done either on the frontend or the backend (say you’ve got some element that can either be loaded upfront or dynamically, and you need templates to cover both scenarios). If the site is mostly static (a “page”) then the duplication cost might be fairly low, but if the page is mostly dynamic (an “app”), the duplication cost can be huge. The next generation of MPAs tries to solve the duplication problem by using websockets to send rendered partials over the wire as HTML, but this has the problem that you have to talk to the server to do anything, and that round trip isn’t free.
The next generation of JS frameworks are trying to reduce the amount of duplication necessary to write code that works on either the backend or the frontend, but I’m not sure they’ve cracked the nut yet.
For a small team, a SPA increases the amount of work because you have a backend with whatever models and then the frontend has to connect to that backend and redo the models in a way that makes sense to it
Whether this is true depends on whether the web app is *a* client for your service or *the* client for your service. The big advantage of the split architecture is that it gives you a UI-agnostic web service where your web app is a single front end for that service.
If you never anticipate needing to provide any non-web clients to your service then this abstraction has a cost but little benefit. If you are a small team with short timelines that doesn’t need other clients for the service yet then it is cost now for benefit later, where the cost may end up being larger than the cost of refactoring to add abstractions later once the design is more stable.
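If you have an app and a website as a small team, lol, why do you hate yourself?
The second client might not be an app; it may be some other service that is consuming your API.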
(2) avoid splitting app logic in half/duplicating app logic.
The other thing is to avoid duplicating application state. I’m also thoroughly a backend guy, but I’m led to understand that the difficulty of maintaining client-side application state was what led to the huge proliferation of SPA frameworks. But maintaining server-side application state is easy, and if you’re doing a pure server-side app, you expose state to the client through hypertext (HATEOAS). What these low-JS frameworks do is let you keep that principle — that the server state is always delivered to the client as hypertext — while providing more interactivity than a traditional server-side app.
(I agree that there are use-cases where a more thoroughly client-side implementation is needed, like games or graphics editors, or what have you.)
Well, there’s a difference between controller-level validation and model-level validation. One is about not fucking up by sending invalid data, the other is about not fucking up by receiving invalid data. Both are important.
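Spot on.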
this turns out to be tens (sometimes hundreds!) of requests because the general API is very normalized (yes we were discussing GraphQL at this point)
There’s nothing about REST I’ve ever heard of that says that resources have to be represented as separate, highly normalized SQL records, just as GraphQL is not uniquely qualified to stitch together multiple database records into the same JSON objects. GraphQL is great at other things like allowing clients to cherry-pick a single query that returns a lot of data, but even that requires that the resolver be optimized so that it doesn’t have to join or query tables for data that wasn’t requested.
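As a sketch of what that resolver optimization can look like, assuming a graphql-js-style resolver (the schema and the data-access helpers on ctx.db are invented; fragments are ignored for brevity):

```javascript
// Only fetch/join the author when the client actually selected it.
const resolvers = {
  Query: {
    post: async (_parent, { id }, ctx, info) => {
      // Which sub-fields did the query select on `post`?
      const requested = new Set(
        info.fieldNodes[0].selectionSet.selections
          .filter((sel) => sel.kind === "Field")
          .map((sel) => sel.name.value)
      );
      const post = await ctx.db.getPost(id); // always needed
      if (requested.has("author")) {
        post.author = await ctx.db.getUser(post.authorId); // only if asked for
      }
      return post;
    },
  },
};
```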
The conclusion, which can be summed up as, “Shell art is over,” is an overgeneralized aesthetic statement that doesn’t follow from the premises. Even if the trade-offs between design choices were weighed fully (which they weren’t), a fundamentally flawed implementation of one makes it a straw man argument.
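The Twitter app used to lag like hell on my old Thinkpad T450. At the very least, it’d kick my fan into overdrive.
Yay for badly written apps :-p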
Safari will notice when a page in the background is hogging the CPU, and either throttle or pause it after a while. It puts up a modal dialog on the tab telling you and letting you resume it. Hopefully it sends an email to the developer too (ha!)
It would spin up on load, not in the background, because loading all that JS and initializing the page is what would cause the CPU usage. And then after I closed the page, 20 seconds later someone else would send me another Twitter link and I’d get to hear the jet engine again.
In the olden days we used to have something called Progressive Enhancement, which essentially meant you would render HTML on the server side and then add some JavaScript on the client to spice it up a bit; this gave us things like autocompletion in search bars, infinite scroll etc.
This was fine for applications that required a modicum of interactivity, but as people started to build things that resembled desktop applications - so called “Single Page Applications” - they started to run into trouble. The DOM is not a GUI toolkit and it showed; consolidating application state and DOM state without expensive re-rendering was a hassle. When React first came out it was a boon for teams struggling to build SPAs; being able to build your layout declaratively while maintaining responsiveness was great - no need to keep track of DOM state, just let React do its magic.
React was so great that all of a sudden everything was supposed to be a React app, even web sites which were literally not Single Page Applications. However, React was not a one-size-fits-all solution, so pretty soon you ran into problems with huge bundle sizes and slow load times. Eventually people realized that some of this load time could be amortized by rendering React components on the server - you could leverage data locality and even serve some pages completely from cache!
And so concepts like “hydration” and React server-side rendering frameworks were born, which essentially meant you would use React to render HTML on the server side and then add some JavaScript on the client to spice it up a bit.
It was around this time that I started experiencing “JS fatigue”; don’t get me wrong, if I’m building a SPA I’ll reach for React any day of the week, but if I’m building a mostly static website I’m sticking with server-side rendering and a sprinkle of JavaScript.
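Htmx, to me, seems like a simpler version of hotwire/turbolinks/etc, so I am interested in giving it a try soon.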
Ya - it’s the successor to intercooler.js, which predates Hotwire/Turbolinks/etc.
I loathe doing most frontend work but when something I’m doing calls for dynamism, htmx (and formerly intercooler) are what I reach for for simple stuff.
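it’s very nice, you should try it if you can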
There are many similar libraries in the low-JS category (https://github.com/taowen/awesome-html). But avoiding JavaScript is the wrong attitude; JavaScript is not the problem. Htmx just reinvents a DSL to do the job of JavaScript.
Htmx just reinvents a DSL to do the job of JavaScript
That’s not quite the way to think about it, IMO. It’s more like there are a lot of really common interaction patterns that browsers don’t natively implement, and which have to be implemented in JavaScript. And the way browser development has actually gone, browsers don’t implement these things, but provide more and more power for building JavaScript applications. HTMX is more like a polyfill for an alternate universe where browsers let you do all of the most common AJAX patterns declaratively, as part of HTML.
It’s like date pickers. Until relatively recently, browsers did not implement a native date picker control, and they had to be implemented in JavaScript. There must be thousands of jQuery date/time picker implementations. Today, there are HTML 5 input types for date, time, and date/time, and most browsers implement them natively. You can throw away your JavaScript datepicker implementations now. From my point of view, HTMX is like this: what if HTML had absorbed the most common and useful AJAX patterns, so you could now throw away your JavaScript implementations of those patterns?
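To illustrate the kind of pattern meant here, this is roughly the hand-written JavaScript that htmx lets you replace with a couple of HTML attributes (the URL and element IDs are made up):

```javascript
// The ubiquitous hand-rolled AJAX pattern: click something, fetch an HTML
// fragment, swap it into a target. In htmx this is roughly
// hx-get="/fragment" hx-target="#result" on the element itself.
document.querySelector("#load-button").addEventListener("click", async () => {
  const res = await fetch("/fragment");
  document.querySelector("#result").innerHTML = await res.text();
});
```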
One reason I prefer Alpine.js to Stimulus is that Stimulus was written by people who don’t like JavaScript, and Alpine was not. Stimulus goes out of its way to not use JS conventions, whereas Alpine is just a convenient way to write JS inline but still have it work with componentized layouts.
I’ve been happy with Alpine.js.