1. 35
  1. 17

    Am I the only one that found this somewhat weird to read because it frames what many (definitely I) consider the most sensible default way to build web applications as a novel experiment?

    Relatively few user needs actually require the interactions that single page apps offer, and SPAs are, in my experience, far more costly to build and, especially, to maintain.

    1. 6

      Why is no JavaScript a sensible default? 99% of users have JavaScript enabled, including on mobile.

      Sure, if you don’t need JS don’t use it. But the purist way of saying “I will NOT use JS, I will find difficult workarounds for things that would be easy in JS.” is ridiculous. It’s like designing a car with tank treads because you don’t like tires.

      1. 4

        I avoided it where possible for security, efficiency, and portability. Sandboxing a renderer doing HTML and CSS is so much easier than a Turing-complete language running malicious or inefficient code. All this extra complexity in web browsers has also reduced the market to mostly two or three of them. The simpler stuff can be handled by the budget browsers, which increases the diversity of codebases and the number of odd platforms that can use the site/service. Finally, making the stuff simpler increases efficiency, as you’re handling both fewer capabilities and fewer combinations of them. Easier to optimize.

        So, the above are all the reasons I opposed the rise of JavaScript in favor of old-school DHTML where possible. Also, there were alternatives like Juice (Oberon) that were better. Worse is Better won out again, though. Now I limit the stuff mainly for security, predictability, and performance on cheap hardware.

        1. 3

          the above are all the reasons I opposed the rise of JavaScript in favor of old-school DHTML where possible

          Huh. Please correct me if I’m wrong, but my impression was that “DHTML” was a term created by Microsoft to describe websites that used HTML markup with a scripting language (like JavaScript or VBScript) to manipulate the DOM. They used it to market the capabilities of Internet Explorer.

          1. 2

            I ran into it on sites that either used JavaScript mainly to enhance, but not replace, the presentation layer or used CSS tricks. I know nothing of the term itself past that.

            1. 2

              Ah, okay. I think I misunderstood what you meant by “the rise of JavaScript” - not its mere usage, but its increasing responsibilities in contemporary web development.

          2. 2

            Isn’t CSS Turing complete by now? :)
            It is disappointing that if you make a browser from scratch as a hobby, you have to add a JavaScript engine to be able to use the three or four most popular social media sites.
            Soon adding an SSL library will be a requirement for most sites. For serious browsers none of this is a real issue; it’s just kind of sad that a useful browser has a much larger minimum complexity nowadays.

            1. 6

              For serious browsers none of this is a real issue; it’s just kind of sad that a useful browser has a much larger minimum complexity nowadays.

              That’s a big part of the reason I pushed for the simpler standards. Too much money and complexity going into stuff always ends up as an oligopoly. Those usually get corrupted by money at some point. So, the simpler browser engines would be easier to code up. Secure, extensible, cool browsers on the language and platform combo of one’s choosing would be possible. Much diversity and competition would show up, like in the old days. This didn’t happen.

              An example was the Lobo browser that was done in Java. Browsers were getting hit by memory-safety bugs all the time. One in Java dodges that while benefiting from its growing ecosystem. It supported a lot, too, but was missing some key features as the complexity went up over time. (sighs) Heck, even a browser company with a new, safe language is currently building a prototype for learning instead of production. Even the domain experts can’t do a real one with small teams in reasonable time at this complexity level. That’s saying something.

          3. 3

            It is profoundly easier to write useful acceptance tests for a static HTML versus a page with any amount of JavaScript. The former is a simple text-based protocol and the latter is a fractal API with a complex runtime and local state. That’s why no JS is a sensible default.

            That doesn’t mean finding difficult workarounds to avoid JS at all costs. It means being clear about the downsides and coming up with the simplest strategy to mitigate them for your application.
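
            To make the testing point concrete, here’s a rough sketch of the kind of acceptance test I have in mind, assuming a Node 18+ environment with its built-in test runner and fetch; the URL and the markup being checked are made up:

            ```ts
            // Hypothetical acceptance test against server-rendered HTML: fetch the
            // markup and assert on it directly. No browser, no JS runtime, no local
            // state to drive into a particular configuration first.
            import { test } from "node:test";
            import assert from "node:assert/strict";

            test("signup page renders the expected form", async () => {
              const res = await fetch("https://example.com/signup"); // placeholder URL
              assert.equal(res.status, 200);

              const html = await res.text();
              // With server-rendered pages, all the interesting content is already
              // in the response body, so plain text checks are enough.
              assert.match(html, /<form[^>]+action="\/signup"/);
              assert.match(html, /<input[^>]+name="email"/);
            });
            ```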

        2. 10

          I’ve done exactly this in my current project. There is no JS at all. It is not needed. The only reason for adding JS at this point would be to make it appear slow, because people are confused things respond so quickly to clicks and form submits. It also helps to write the CSS myself, giving things a minimalist Bootstrap-like look, and to reduce the size of images so the initial page load is only around 5 kB. It is really nice to be able to use the site on GPRS mobile speeds :)

          Update: I really like the Chromium network throttling option: set it to GPRS (500 ms, 50 kb/s), disable caching, and see how your site works for new users on throttled connections, e.g. a user who has run out of “fast” mobile data.
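
          If you’d rather script that check than flip the DevTools switch by hand, something along these lines should work. This is only a sketch using Puppeteer’s raw CDP session to approximate the GPRS preset; the URL and the exact throughput numbers are placeholders:

          ```ts
          // Approximate the DevTools "GPRS" preset (~500 ms latency, ~50 kb/s down,
          // ~20 kb/s up), disable the cache, then time a cold page load.
          import puppeteer from "puppeteer";

          const browser = await puppeteer.launch();
          const page = await browser.newPage();
          const cdp = await page.target().createCDPSession();

          await cdp.send("Network.enable");
          await cdp.send("Network.setCacheDisabled", { cacheDisabled: true });
          await cdp.send("Network.emulateNetworkConditions", {
            offline: false,
            latency: 500,                        // round-trip time in ms
            downloadThroughput: (50 * 1024) / 8, // bytes per second
            uploadThroughput: (20 * 1024) / 8,
          });

          const start = Date.now();
          await page.goto("https://example.com/", { waitUntil: "load" }); // placeholder URL
          console.log(`Cold load took ${Date.now() - start} ms on a throttled connection`);

          await browser.close();
          ```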

          1. 3

            The only reason for adding JS at this point would be to make it appear slow, because people are confused things respond so quickly to clicks and form submits

            :)

          2. 6

            I’ve recently reached the same conclusion: most websites need no JavaScript, or very little. Newspapers, forums, video sharing, internal CRUDs. Cutting out the whole API + SPA stuff is often a lot more productive.

            SPAs are nice for fancy features (live preview, realtime, …) but often not worth the complexity they add compared to the difference they make in the end product.

            Now most of my apps are JS-free by default, and then I pepper them with some light JavaScript when needed.

            1. 8

              This point of view never seems to take geography into account. It might work when you have 30ms latency. But, for example, I’m hosting my application for New Zealand users on Heroku, which offers either US or EU locations for deployment. So there’s a latency of 120-250 ms on every request. A roundtrip to the server for every little thing makes for a bad user experience in this situation.

              1. 5

                On the other hand, I’ve encountered plenty of SPAs that do terribly on bad networks. They tend to block/stall until a critical mass of resources has been downloaded, so you get to look at a blank page and spinner for several seconds instead of a progressively loading page. They also tend to download a silly amount of JavaScript (hundreds of kB, sometimes MB). They also get wonky if there are network hiccups and their state falls into some edge case the programmer didn’t envision (because some APIs loaded but not others).

                Source: I live in rural, upstate NY and have slow DSL. Half the internet is unbearable to use.

                1. 1

                  Good point. There is no denying that the web is a horrible kludge of a platform, and your examples illustrate that. It’s very hard to get an application to work well. SPAs are, in a sense, a necessary evil. The “SPA platform” (for lack of a better term) wasn’t designed but evolved piece by ill-fitting piece.

                  Regarding edge cases though: I think that is more of a consequence of additional complexity present in SPAs rather than a drawback specific to them. If you move interaction with external APIs to the backend, it’s just as likely that the backend doesn’t handle all the combinations of responses properly.

                2. 1

                  Here I would just host my app next to the users (bonus point: the initial load is faster too). I agree that if you have to take into account a slow network and you can’t use a CDN, then it’s going to be custom solutions (mostly local caching via JS).

                  1. 7

                    If we constrain the requirements enough (hosted nearby, CRUD only, no real time updates) then we can of course get a class of applications which don’t benefit from an SPA implementation. I’m not at all sure that this class contains “most” applications however. Nothing I’ve worked on in the last 5 years was in this class, for example.

                    I guess I just don’t like generalisations.

                    1. 1

                      You can always host a backend-only app in multiple regions, and you don’t have to restrict yourself to CRUD, either.

                      And if you have to, you can always have some pages use javascript for realtime.

                3. 4

                  I find SPAs interesting, but mostly only if they go to the other extreme: only JS, no backend, to the extent that you can save the webpage offline and run it, because you have the entire app’s source code and required resources. If a backend is going to be obligatorily involved anyway, though…

                4. 4

                  As a developer I don’t mind interacting with traditional form submits, synchronous browser requests for every action, and so on… but to the users of your product, slickness counts. JS is clearly not the only way to achieve slickness, but it is definitely one way. We just replaced our old file-post method of users submitting images with a trendy JS-driven SaaS replacement, and people overwhelmingly love it, precisely because it’s slick and has a fancy % indicator and all that jazz. At any rate, the idea that the glitz itself matters to your users (and product managers too, although that is a less compelling argument) is, I think, a perspective that the ultra-pragmatic developer mindset sometimes misses.

                  As a side note, writing an SPA in ClojureScript/Om has been a shockingly refreshing exercise. Our server is a set of REST APIs, business logic/validations and a persistence layer, and the entire client-side application (from templating/HTML to client-side logic) is all written in ClojureScript. It is true that there is more maintenance on the client-side JS layer than there would be in a traditional Rails app, for example, but I’m not sure it’s true that there’s more maintenance overall - we’ve just moved it from erb/haml with ad hoc jQuery (or equivalent) into a structured, testable application with a deliberate architecture, design idioms and so on.

                  1. 6

                    I often go the opposite route. I write my application as a JSON-RPC or XML-RPC service and then write a client for it in JavaScript. The web browser, speaking XML-RPC via XHR, is just one more consumer of that standard RPC interface. There’s no special handling on either the browser or the server side for the web interface.

                    This makes testing and automated use of the application so much easier. That being said, 95% of the stuff I’ve written like this was internal tools, so latency wasn’t an issue.
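
                    For a feel of what the browser-side consumer amounts to, here’s a minimal sketch of a JSON-RPC 2.0 call (using fetch rather than raw XHR, purely for brevity); the /rpc endpoint, the method name, and the result shape are all hypothetical:

                    ```ts
                    // Tiny JSON-RPC 2.0 helper; the /rpc endpoint and the "tickets.list"
                    // method below are made-up examples.
                    let nextId = 1;

                    async function rpc<T>(method: string, params: unknown): Promise<T> {
                      const res = await fetch("/rpc", {
                        method: "POST",
                        headers: { "Content-Type": "application/json" },
                        body: JSON.stringify({ jsonrpc: "2.0", id: nextId++, method, params }),
                      });
                      const { result, error } = await res.json();
                      if (error) throw new Error(`RPC error ${error.code}: ${error.message}`);
                      return result as T;
                    }

                    // The browser UI is just one consumer; a test harness or a native
                    // client can drive the exact same interface.
                    const openTickets = await rpc<{ id: number; title: string }[]>("tickets.list", {
                      status: "open",
                    });
                    console.log(openTickets);
                    ```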

                    1. 11

                      It also makes your application absolute hell for NoScript users or people using older computers.

                      1. 6

                        Not just people with old computers. Many users with decent computers still keep a bunch of tabs open for different webapps (in my case: Gmail, a couple of Slack tabs, YouTube playing music in the background, and about 10-15 tabs with internal wiki pages).

                        Until July, my laptop at work “only” had 8 GB of memory, and the browser was a serious drain on it. From time to time, I also notice some rogue JavaScript snippet maxing the CPU and causing my fans to spin up.

                        1. 7

                          My daily driver has 2 GB of RAM – Slack is unusable (while IRC clients work fine). Spotify is a drag and YouTube stutters in fullscreen (Clementine and mpv are fine, though).

                          The trend towards inefficient web applications worries me.

                          1. 3

                            I’ve abandoned low-memory systems (<4GB) for desktop use entirely because of this trend for web applications to gobble up what I consider insane amounts of memory.

                            It’s ridiculous that most integrated development environments and games work fine on 2/4 GB systems, but web browsing becomes a drag.

                            1. 4

                              Resist the change! Just stop using such webapps.

                              mpv can play most web video (it’s backed by youtube-dl), there are (IIRC) CLI clients for Spotify and Tidal, you can use Slack via an XMPP/IRC bridge, and many IM applications can be used via XMPP or BitlBee.

                              If you do this, even 1 GB of RAM or less is perfectly usable, I’ve found.

                              1. 2

                                I use Quassel as a daemon on my server and connect to 3 Slack servers, 3 IRC servers and Google Hangouts (via BitlBee) using QuasselCore. I don’t have a Facebook account, and use Twitter mainly from my phone.

                                I use emacs and terminator as my two main tools, and my window manager is dwm, with minimal patches (some extra shortcuts plus a systray).

                                I think I’m pretty good at resisting “modern” software, but even when you’re Googling around for documentation, tutorials, a solution to a problem, you tend to open a lot of tabs, and some of those tabs will be memory hogs.

                          2. 1

                            If you just want to play the audio from YouTube videos, is there a third-party web site that can do that? Is there a mode in YouTube itself to say “just give me the audio”?

                          3. 4

                            Sure. Like I said, 95% of the time these apps were internal tools where we controlled the environment (or at least could say that you needed to disable NoScript for this app).

                            For example, one of these apps replaced an ancient Tcl desktop app that was just a mess of spaghetti code and difficult to deploy. Switching to this approach (a) let users immediately start using the updated system with software we knew they had already and (b) allowed programmatic access to the app data through the same curated RPC interface, whereas before programmatic access hit the DB directly.

                            The other nice thing about this approach is that you can write a native client in the language of your choice for people who can’t/won’t use the browser interface, and it will talk to the same RPC interface.

                        2. 3

                          Single page webapps, I believe, come from a very, very strange reaction to deathly slow Rails/Django/PHP backends.

                          I don’t see how that makes sense, but perhaps Ms. Dala agrees with me and this explanation is intended to feel like it has an implicit shrug?

                          If a Rails or Django server is responding too slow for page reload to feel acceptable, I don’t see how JS-heavy SPAs are a solution in the first place? Making the back-end not slow for requests in general pays more obvious dividends. That way, even if you do eventually add an SPA (or even a lone SPA-esque workflow within the site), a back-end which is capable of acceptably-fast page-reloading requests should also be capable of fast JSON responses. The SPA-or-not aspect feels like moving in an orthogonal direction.

                          The only advantage I can imagine is the point @alexkorban makes about geography, but even then, CDNs exist.

                          1. 8

                          SPAs predate Rails and Django (but not PHP), so of course they are not a reaction to the speed of those frameworks. This article from 2003 discusses SPAs, whereas Rails and Django were first released in 2004 and 2005 respectively. Also, remember DHTML? As far back as the 1990s, people were attempting to make pages behave dynamically.

                            Additionally, CDNs are of limited use when it comes to data that’s updating in real time. Any application that’s implemented around a dynamic map (like Google Maps) pretty much has to be an SPA. Or consider things like https://acko.net/blog/how-to-fold-a-julia-fractal/, or even YouTube or Netflix. There are so many examples. My point is, there are classes of applications on the web today which simply can’t be implemented using the good old “serve some HTML” method. Things have moved on from static web sites.

                            SPAs are not a reaction to slow backends, they are a reaction to the desire to implement more complex applications in the browser.

                            1. 4

                              My point is, there are classes of applications on the web today which simply can’t be implemented using the good old “serve some HTML” method.

                              Definitely and I was not disagreeing with that point in any categorical manner. In fact, I wasn’t addressing that at all, which is why I responded with a top-level comment and a quote from the interview, rather than joining the comment tree between you and @damdotio.

                            Instead, I was trying to imagine a situation in the context of that quote about “Rails/Django/PHP backends”. That is to say: someone choosing to build an SPA not because the user experience necessitates it (à la your examples of Netflix and so forth), but because they feel it is a solution to some perceived performance issue stemming from their back-end technology choices.

                              You have misunderstood the situation which I was attempting to discuss.

                              Things have moved on from static web sites.

                            Misunderstandings aside, and while I agree with the rest of your post, I feel that “moved on” is far too strong a statement. The interview’s examples of Reddit and Craigslist ring true. It might be more accurate to say that additional categories of applications have come into existence, which are necessarily beyond the reach of static sites.

                              Also, remember DHTML?

                              Also VRML, though I wish I didn’t ;)

                              1. 3

                                I should have worded my comment better to clarify that I was agreeing with you. I was also trying to highlight the real reason for the genesis and evolution of SPAs (which is not slow backends).

                                I agree with your followup comment too.

                              2. 1

                              Also, remember DHTML? As far back as the 1990s, people were attempting to make pages behave dynamically.

                                I got all my stuff on this site: http://dynamicdrive.com/

                              All kinds of fancy CSS and such that I combined with Perl scripts or native executables on the backend. We called it DHTML instead of Web 2.0, but it worked. Had fewer problems back then, too. People also admired our work because it wasn’t what every other site did. Not anymore. ;)

                            2. 1

                            I’d love to see a concrete example of such a site. I think I’d switch my React/Angular projects over to this if at all possible, but I can’t get around how much content would be sent around. Would every grid be prerendered? Every grid sort a form submit?