Note that when you are using htmx, on the server side you respond with HTML, not JSON. This keeps you firmly within the original web programming model, using Hypertext As The Engine Of Application State without even needing to really understand that concept.
I don’t like the lag between clicking and something happening. Whether it’s an old-school form submit/response cycle or newfangled AJAX, there needs to be immediate feedback on the user’s action.
This (at least judging from the examples and the introduction) makes a mess of progressive enhancement and accessibility. We do not want other elements to become interactive; on the contrary, we want to erase from collective memory the fact that we ever used divs and anchors as buttons in the first place. There’s not a single keyboard event listener in the source code…
At least the “quick start” example on the site’s front page uses a <button>. I don’t think this is incompatible with a11y, but the examples certainly don’t do a good job of promoting good habits there.

Furthermore:
If you’re returning HTML, I’m assuming you’ll be returning the whole page (i.e. the way no-JS websites usually work), and to truly make the JS optional, you’d want to:
That being said, I’m sure the HTML-attribute-driven approach has merits (also of interest, from the bookmark archive: https://mavo.io/).
There’s no specific need to return the whole page.
Detect the request as coming via XHR, and omit the parts you don’t need.
Yes, you could have the library add a header you can pick up on the server, and respond more succinctly. I was thinking about the minimal amount of markup / extra work that still maintains compatibility with no-JS.
(Also, I guess I was a bit in disbelief that the meaning of that quoted paragraph was that you’re supposed to respond with HTML fragments, but the other examples make clear that this is in fact the intention)
X-Requested-With has been a de facto standard for JS libraries that wrap XHR for years, and a number of backend frameworks already support it (i.e. rendering only the view specific to an action, without any global layout or header+footer templating): https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Requested-With
This approach works very well for maintaining no-JS compatibility: the header is only set by JS, so a no-JS request is just a regular request, and the full-page render happens as usual.
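As a concrete sketch of the pattern described above (the function and layout strings here are hypothetical stand-ins, not any particular framework’s API — any backend with access to request headers would look similar), a handler can branch on X-Requested-With:

```python
def render_page(request_headers: dict, fragment_html: str) -> str:
    """Return only the fragment for XHR requests, or the fragment
    wrapped in the full-page layout for plain (no-JS) requests.

    `request_headers` is a plain dict of HTTP request headers;
    the layout strings stand in for real templates.
    """
    is_xhr = request_headers.get("X-Requested-With") == "XMLHttpRequest"
    if is_xhr:
        # The JS layer will swap this fragment into the DOM.
        return fragment_html
    # No-JS request: render the full page as usual.
    return (
        "<html><body><header>…</header>"
        + fragment_html
        + "<footer>…</footer></body></html>"
    )
```

Because the header is only ever added by the JS layer, the same URL serves both audiences: a no-JS form submission simply takes the full-page branch.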
I asked the author on HN about no-JS support, and they said: when possible.
Huh, cool! Nice to see that intercooler is being worked on still.
I’m not sure if this approach seems novel or new to some people, but this is how we did “AJAX” apps, over a decade ago.
How do you compare the relative merits of the AJAX and SPA eras?
I think, like most things (including AJAX, back in the day), “SPA” is a buzzword first and foremost, and is overused, as all ‘hot topic’ technologies are.
The saving grace of the “AJAX era”, as you call it, was that there was still a reasonable percentage of developers who advocated for and understood the purpose of graceful degradation/progressive enhancement, and those things are (and were then) pretty simple to incorporate when the JS layer is just making smaller requests to the backend.
If your ‘primary’ target is a fat-JS client making JSON/etc API calls to the backend and doing its own rendering, supporting non-JS browser clients is essentially creating double the work.
Seems pretty clear web dev has gone downhill in the last decade, thanks for this retrospective.
What are the benefits of this compared to what’s “hot” nowadays? Would this help with perceived speed/latency/memory usage of web pages or web apps? Or is this a moot comparison?