1. 128

  2. 58

    Hallelujah, suffering through this at work at the moment.

    To dispense with the obvious: for any richly interactive client application, like a movie editor or a videogame or a text editor or something, you probably should write that section as a thick client backed by APIs. Possibly a wrapping HTML page with embedded data populated from the session or the controller or whatever.

    But, 99.99% of sites aren’t doing that. They’re glorified CRUD apps, with maybe dozens of simultaneous users. Hell, @pushcx, what’s the max simultaneous users we get on Lobsters? Is your site bigger than Lobsters? HN?

    The advantages of rendering stuff server-side are:

    • You put humans first, and that’s always what ends up mattering.
    • You simplify (vastly) the distributed-systems problems inherent to web development. If it’s all server-side, the browser looks more and more like a document reader than a peered node with autonomy and hopes and dreams and the likelihood of fucking up.
    • You make caching orders of magnitude simpler, because again it’s just documents.
    • You make the asset pipeline orders of magnitude simpler, because again it’s just documents.
    • You make bugs easier to find and fix because, again, it’s just documents (sensing a pattern yet?).
    • It makes it slightly easier to search, get indexed (at least at one time, SPAs getting crawled properly was A Whole Thing compared to server-side rendering), and to become emails someday (though that’s also a disaster because of the miserable state of CSS in email).
    • Lots of client-side JS takes forever to compile compared with server-evaluated stuff, for whatever reason. Phoenix reliably updates near-instantly while the React stuff with bells and whistles and TypeScript and all the other stuff chugs on my machine.
    • It encourages your engineers to do full-stack work and be able to debug things anywhere in the pipeline, because there’s not an arbitrary (and often fictional) separation in responsibilities.

    There are some downsides, sure:

    • There will be a tipping point on some pages as your designers opt for things like typeahead and sorting and whatnot where you’ll probably benefit from graduating to a React app or something.
    • There will occasionally be pathological bugs that really would be easier to hack around in JS.
    • You don’t get to blog about it as much.
    • If you really are roflscale (which usually you aren’t, let’s be honest with ourselves), you need to sort out your caching. Cloudflare and other CDNs do make this a lot easier, but even basic tweaking of nginx or haproxy or whatever your favorite solution is can get you remarkably far.
    • If you have a lot of archived content, some chucklefuck is going to crawl it, and that’s gonna make your server sad again unless you have caching.

    Overall, though, I think that starting with server-side rendering just makes so much more sense.

    1. 21

      Hell, @pushcx, what’s the max simultaneous users we get on Lobsters?

      We don’t really track this, but a quick cat production.log | grep " Request " | grep -o "2020-04-28T[0-9][0-9]:[0-9][0-9]" | uniq -c | sort -rn | head says

          472 2020-04-28T14:00
          459 2020-04-28T10:00
          388 2020-04-28T07:00
          382 2020-04-28T15:30
          372 2020-04-28T18:00
          367 2020-04-28T17:30
          367 2020-04-28T13:00
          358 2020-04-28T16:00
          355 2020-04-28T17:00
          348 2020-04-28T00:00
      

      So it looks like when bots fire and hit us on the hour and half-hour, we peak at ~7.9 app requests per second (not including js/css/avatars/404s that nginx handles). Interesting finding. We’ve been briefly knocked offline by a careless bot that repeatedly requested an expensive page as fast as it could (IIRC ~19 rps). I assumed there were more bots that were slightly better behaved, and this analysis makes them quickly conspicuous. 77% of the top 100 minutes today end in 0 or 5, and they strongly dominate the top of the list, rather than the 20% and even distribution you’d see if traffic were randomly distributed, as human traffic likely is.

      As usual, happy to run variations if folks have them.

      1. 18

        It’s just not about the number of users, though; it’s also about the type of users. The people who use Hacker News or lobste.rs are VERY different from the average user. For instance: this thing I’m using right now has Markdown and expects me to be able to write it in plain-text format. It has no WYSIWYG editor, it doesn’t keep drafts of my comment, if I reload this page this comment is gone, I need to refresh the page to see new updates, and I get no notification if someone answers me here.

        Yes, we can live without these things, but a huge part of the public actually wants functionality like this. And as you keep adding it, even simple CRUD apps become complicated systems with modern UIs.

        The bar for UI is way higher nowadays, and your competitor will have all these features you don’t have.

        Like… I use Vim, but if I tell my parents to use it instead of Word they will laugh at me. It’s the same reason why UI is getting so complicated.

        We can’t just look at this from the performance/back-end-focused point of view.

        1. 25

          The bar for UI is way higher nowadays, and your competitor will have all these features you don’t have.

          My competitor is just as likely as not to be sputtering around while their devs slapfight over whether to use React hooks or contexts or redux. You can move fast with a server-side rendered app.

          Some sites that have famously been gutted by their competitors for having boring server-side pages:

          • Craigslist
          • Hacker News
          • Stackoverflow
          • Wikipedia
          • 4chan

          In my experience if people get useful value from your site they won’t be turned off by rough design.

          1. 7

            In my experience if people get useful value from your site they won’t be turned off by rough design.

            On the contrary; “rough design” is actually a very positive feature.

            1. 5

              I think lots of users appreciate the simplicity of Wikipedia, Craigslist, etc. I know my wife preferred Craigslist to Facebook marketplace or Zillow when we were searching for an apartment earlier this year. She’s only 29.

          2. 12

            These are all really trivial things to add on top of a static or server-rendered page, though. I mean, we’re literally talking about 100 lines of Mithril (ok, maybe slightly more) hitting the backend.

            I recently noticed that Reddit has added a “continue this thread” link to many threads. Because they’re doing it through their frontend app, not only does it not just load the comment, it redirects you, and then when you try to go back to where you were, the basic back functionality of the browser is broken. For a comment that is multiple levels deep, you’re talking about what looks to the user like 3 or 4 new page reloads, but is really just whatever their frontend is doing. It’s baffling to me, especially considering that they were so ahead of the game in 2005 that they were doing stuff in lisp, and Huffman is the CEO.

            So this frontend, which ideally is better for the user, is to me almost unusable. I checked out what kind of requests they are doing, and it’s over 200, and the page still isn’t loaded after 30 seconds on a good desktop. The requests don’t actually ever stop; there is a “beacon” POST request every x seconds that looks like it is hitting ads. I can only assume they’re forcing all these redirects on the user because it allows more ad impressions.

            Just feels like we’ve really lost our way. I don’t understand how Huffman is even okay with that.

        2. 32

          There’s a reason reddit can’t seem to kill old.reddit.com - it’s a better version of the site imho.

          1. 19

            “Better”, if anything, wildly understates the degree to which the new reddit is an unusable hash. Like some sort of greatest hits list of all the terrible ideas professional web nerds have had since reddit’s original interface was designed.

            1. 5

              Every now and then I bounce back to it to see if it’s any good, and it seems to get worse. It is borderline unusable.

          2. 18

            For everyone who is not familiar with this term, because it is quite confusing: “Server-Side Rendering” doesn’t mean that any rendering is done on the server. Not a single pixel is drawn on the server. It just means that the server runs some code and generates HTML which is then sent to the client - the way the web used to work, and, in my eyes, should work - as opposed to sending an empty HTML skeleton and a bunch of JavaScript that fetches additional stuff dynamically. The rendering itself, as in, drawing pixels, always happens on the client, so it’s not like Google Stadia.
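            To make that concrete, here’s a minimal sketch of that older model in plain Node.js (no framework; the data and markup are made up): the server runs some code, emits finished HTML, and the browser only has to parse and paint it.

            ```javascript
            // "Server-side rendering": the server generates HTML, not pixels.
            const http = require('http');

            // Pretend this came from a database query.
            const stories = [
              { title: 'First story', score: 42 },
              { title: 'Second story', score: 17 },
            ];

            // Turn the data into a finished HTML document on the server.
            function renderPage(items) {
              const rows = items
                .map((s) => `<li>${s.title} (${s.score} points)</li>`)
                .join('\n');
              return `<!doctype html><html><body><ul>\n${rows}\n</ul></body></html>`;
            }

            const server = http.createServer((req, res) => {
              res.writeHead(200, { 'Content-Type': 'text/html' });
              res.end(renderPage(stories));
            });

            // server.listen(8080); // uncomment to actually serve
            ```

            Swap `renderPage` for a template engine and `stories` for a real query and you have the classic server-rendered app: no client-side JavaScript required at all.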

            1. 16

              A couple of things that SSR doesn’t solve well (or at all) by itself in CRUD apps:

              • a data picker where the source data is so large that it is infeasible to send as a raw select (for example, “please pick a file in your Dropbox container”). These end up requiring you to build a custom component with API polling
              • Showing/hiding fields depending on the validity of previous fields
              • validation of data without a page flash
              • variable numbers of input fields (line items on a document; you usually want to be able to add a line item without an HTML round trip)

              HTML roundtrips are super expensive compared to client-side rendering if you’re doing them a lot! HTML is a shitty data exchange format. Plus you’re reflowing the entire page, frontloading server-side costs…

              So all of this leads to “SSR with a sprinkle of JS”, perhaps? Except then you get into the problem of the hybrid approach: it’s easy to do 0% JS, but when you need a bit you’re entering very brittle territory where you’re juggling two machines at once.
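              As a sketch of what that “sprinkle” tends to look like in practice (the `/check-username` endpoint and the element wiring here are hypothetical): a server-rendered form stays fully functional without JS, and a few lines of script add inline validation on top.

              ```javascript
              // Turn the API's answer into the message shown next to the field.
              function availabilityMessage(available) {
                return available ? 'Username is available' : 'Username is taken';
              }

              // Progressive enhancement: check the field when the user leaves it,
              // without a full-page round trip. With JS disabled, the plain form
              // submit still works because the server re-validates anyway.
              function attachUsernameCheck(input, messageEl) {
                input.addEventListener('blur', async () => {
                  const res = await fetch(
                    '/check-username?u=' + encodeURIComponent(input.value)
                  );
                  const { available } = await res.json();
                  messageEl.textContent = availabilityMessage(available);
                });
              }
              ```

              The brittleness shows up exactly here: the server must re-validate on submit anyway, so the same rule now lives on two machines.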

              I feel like people don’t really remember the spaghetti code of a bunch of jQuery handlers very well. People who write “super slow” client-side rendering stuff… they would probably write super slow server-side rendering code as well! The dynamics are mostly the same, except CSR, done right, is friendlier to the client than SSR in non-trivial flows (which still exist in CRUD apps! Salesforce’s entire existence is based on this fact).

              I really like SSR-only apps for stuff like Lobsters, cuz there’s only a very small number of actions. But I will die on the “well-done client-side rendering is almost always better than well-done SSR” hill.

              Imagine if every time you changed channels in your IRC client, you redownloaded a bunch of chat logs and settings despite you having them locally! That would sound silly, because redownloading a client over and over again is almost always silly. Yet people make similar arguments about SSR and performance and it boggles my mind. There’s nuance and loads of valid use cases for SSR of course

              1. 6

                So all of this leads to “SSR with a sprinkle of JS”, perhaps? Except then you get into the problem of the hybrid approach: it’s easy to do 0% JS, but when you need a bit you’re entering very brittle territory where you’re juggling two machines at once.

                It’s exactly this that makes things difficult. You easily end up with a mess: the more complex the application gets, the more special cases you need for UI elements to be able to update.

                At work, we write lots of super complex apps that make a lot of sense as SPAs. When we do need something simpler, standardization of tooling is more important than snowflake treatment of the most perfect thing for the app at that point in time (because you’d end up with either a mess or a complete rewrite if a future version of the app turns out to be of high complexity).

                I don’t like it much. I hate browser scripting with a passion, and everything around it; the JS ecosystem, the ads, the spying scripts, the wasteful CPU and memory usage, the breaking of the back button, bookmarks and other “standard” browser features that worked fine with the document-oriented model of the original web. But it looks like these are (currently) the economics of writing applications efficiently.

              2. [Comment removed by moderator pushcx: Removing off-topic political thread.]

                1. [Comment removed by moderator pushcx: Removing off-topic political thread.]

                  1. 3

                    Even better, we should stop making arguments from authority and stop talking about universal truths in tech and in dev practices. God doesn’t write code, so there are just opinions, not truths. Framing your own opinion as truth is just a good way to enter an echo chamber of like-minded machos and kill any dialogue.

                    1. 2

                      “Code proven to function correctly is better than equivalently fast and well-written code not proven to function correctly”. You want to tell me this isn’t true but just an opinion?

                      1. 5

                        The definitions of “functionally correct”, “proven”, and “better” are all relative to a context or a system of reference (be it a value system or a formal system). They are not absolutes. Not everybody evaluates that statement in the same context and the same system, and that might lead to people considering the statement imprecise, false, or meaningless.

                        A lot of code “not proven to function correctly” can easily generate more profit than correct code, if the malfunction doesn’t hit the user too hard and the cost of being proven correct is much bigger than the (monetary) losses. Most CEOs who want to turn a quick profit and sell their startups would deem that statement incorrect, because they have a very different definition of “better”, often incompatible with that of workers.

                        1. 2

                          Happily! Techniques like cleanroom can get you very high confidence in correctness for a lot cheaper than proving the code correct. Most people would trade “absolute certainty” for “near-absolute certainty at 10% of the total cost”.

                          1. 2

                            I thought it was clear that the example would have the same total costs, to make the comparison equivalent and meaningful. Guess not.

                            1. 1

                              What is a good source to start learning about cleanroom techniques?

                              1. 3

                                Stavely’s page is best for a quick intro. His book was good. I also think his section backing semi-formal verification is the best I’ve seen on justifying its cost-benefit ratio.

                                One other thing to note. There’s automated tools now that can handle some of the techniques. Anyone trying Cleanroom might want to build on it by using one or more of those tools.

                                1. 2

                                  I ordered a copy!

                        2. [Comment removed by moderator pushcx: Removing off-topic political thread.]

                          1. [Comment removed by moderator pushcx: Removing off-topic political thread.]

                            1. [Comment removed by moderator pushcx: Removing off-topic political thread.]

                              1. [Comment removed by moderator pushcx: Removing off-topic political thread.]

                                1. [Comment removed by moderator pushcx: Removing off-topic political thread.]

                                  1. [Comment removed by moderator pushcx: Removing off-topic political thread.]

                      2. 9

                        I don’t have an actual opinion about the topic. I’ve only worked on extremely complicated apps that had functionality where an SPA had clear benefits and tiny personal projects where anything goes.

                        What I want is a quantification of what the author means by “many”. Do any developers deny that some applications serving dynamic content could work better with server side rendering? Do any developers deny that some work better with an SPA? If the answers are no, is it possible we’re just arguing about Rorschach tests?

                        A nice experiment would be to take your browsing history for a day and categorize the pages based on how their frontends were structured. Bucket them into server side vs. SPA, then subdivide based on your perception of whether any of them seemed like[0] they’d be better off in other categories.

                        I suspect that in 2020 you’d get more JS-centric apps that should’ve been server-rendered than vice versa. But that’s not the interesting question. The interesting question is: how much disagreement would readers have? And why? Is it because they’re making different assumptions about usage? Or do they actually have disagreements about the best technical solutions, given the data about what the pages are doing?

                        [0] I say seemed like because it’s quite hard to tell without seeing the full constraints the authors faced. Maybe you visit a site once a day, and server side rendering would be much better for you, but there’s a long tail of heavy users, and an SPA is beneficial for them.

                        1. 15

                          I respectfully disagree.

                          You will need a backend anyway. It’ll need to expose data to users. Exposing it as HTML is no harder than doing it via JSON or GraphQL. The data-to-HTML transformation is still required with client-side rendering, so client-side saves no work but imposes its own costs.

                          Doing it this way means that you don’t have an API, meaning your service is neither discoverable nor scriptable. For smaller projects that may not matter, but chances are eventually you’re going to want to provide remote machine-accessible access to your service and then you’re going to need to build an API. Why not build the API first?

                          Client-side apps are wads of JavaScript that must be loaded and parsed before they can load and parse JSON, which they can then transform into HTML. Precisely nothing about this is optimal for browsers and networks. Many applications are used infrequently, and browser caches are not magic (and disagree with you about how important your app is), therefore arguing “each user only has to load it once” is bullshit.

                          Unless your app is something used frequently and for a decent period, with lots of low-latency interaction (forms do not count: think editing, iterating), it will certainly be faster server-side rendered.

                          Client-side rendering offloads a lot of the work to the client, assuming caches are held well. An entire HTML page must be cached as a single unit. If you re-render the page every time something changes, caches become useless. If you send the rendering machinery (the templates, the JavaScript libraries, etc) once, you only need to send the changed data from there on out because the static assets are cached. Browser caches aren’t magic, sure, but they’re not useless either. They’re a well-understood technology.
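                          The split being described, immutable rendering machinery vs. ever-changing HTML, usually comes down to cache headers. A sketch (the paths and lifetimes are illustrative, not a recommendation):

                          ```javascript
                          // Decide cache policy by what kind of resource is being served.
                          function cacheHeadersFor(path) {
                            if (path.startsWith('/assets/')) {
                              // Fingerprinted bundles never change in place, so the
                              // browser may cache them "forever"; a content change
                              // produces a new URL.
                              return { 'Cache-Control': 'public, max-age=31536000, immutable' };
                            }
                            // The HTML document is re-rendered per request; make the
                            // browser revalidate so users always see fresh data.
                            return { 'Cache-Control': 'no-cache' };
                          }
                          ```

                          With that split, the “rendering machinery” (templates, JS libraries) is sent once and the HTML entry point stays small and revalidated, which is the trade-off both sides of this thread are arguing about.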

                          It is harmful for read-only or read-mostly applications. Harmful to the implementors, as it imposes unnecessary cost; harmful to users, as it’s likely slower, less likely to use the web platform correctly, and less accessible. Inappropriate use of client-side rendering is why, to find out my electricity bill, I have to wait ~10 seconds while a big React app loads and then hits up 5 REST endpoints.

                          Those 5 REST endpoints are now machine-accessible, which facilitates testing and scriptable access. It’s not less likely to use the web platform correctly unless we get very pedantic, because without JavaScript frameworks on the client side we’re going to have to deal with rendering engine quirks on the server side. That big React app downloads once, and then you’re only grabbing the parts that change, and smooths over whether the client is IE or Edge or Firefox or Opera or Chrome or whatever.

                          For very small, very infrequently used, very simple applications, server side rendering makes sense. In just about all other cases, having the web browser just be another client that hits a standard API is strongly preferable, IMHO.

                          1. 27

                            Why not build the API first?

                            To me, this begs another question. Are you serving humans or computers? If you’re writing a website for humans to consume, I think it makes more sense to write their interface first and worry about the machine interface second.

                            1. 5

                              I would argue that you’re serving clients, which may be human or machine. There’s a translator for humans (the browser). It’s easier, IMHO, to build an API first and adapt it to browsers than to discover later that you need an API and retrofit it in.

                              1. 12

                                A more interesting question: what is the likelihood of that being needed? I’d hazard that 99.99% of companies out there have little or no need to expose a public API.

                                1. 1

                                  I actually agree with you and @friendlysock on this. Security, reliability, maintainability, accessibility… there are many benefits to defaulting to HTML documents and server-side rendering.

                                  That said, there’s one example worth considering that backed @lorddimwit’s point in an epic way.

                              2. 3

                                You are serving computers. Those computers will run a web browser, which is serving humans. Having a clean separation between the presentation layer and the business logic layer makes it easy to support multiple presentation layers, such as mobile apps, adapted interfaces for braille readers, and so on. Having this separation makes it easier to change the human interface quickly, which is essential when you’re developing something usable for humans.

                                The question is whether you put all of the presentation logic in the web browser or split it between the browser and a back-end pass. In both cases, you’re providing some tree-structured data to the web browser, which then runs a load of code to generate a UI. The only difference is whether the tree-structured data you provide is specifically tied to that one UI or not. Where it makes sense to put the split is a software-engineering decision that varies between systems.

                              3. 20

                                Doing it this way means that you don’t have an API, meaning your service is neither discoverable nor scriptable. For smaller projects that may not matter, but chances are eventually you’re going to want to provide remote machine-accessible access to your service and then you’re going to need to build an API. Why not build the API first?

                                This is a very good argument for splitting your app into “API” and “rendering” halves. If the whole app runs entirely through the JSON API, then you know that the JSON API is usable, because it’s being used.

                                This is not really an argument in favour of browser-side JavaScript; it’s an argument in favour of a microservice architecture where HTML rendering and business logic are split into two services, which I’m pretty sure is what GitHub does.

                                Client-side rendering offloads a lot of the work to the client, assuming caches are held well. An entire HTML page must be cached as a single unit. If you re-render the page every time something changes, caches become useless. If you send the rendering machinery (the templates, the JavaScript libraries, etc) once, you only need to send the changed data from there on out because the static assets are cached. Browser caches aren’t magic, sure, but they’re not useless either. They’re a well-understood technology.

                                It’s also a very hard performance gain to actually realize. If your API is made up of orthogonal resources that all go into a single page, then rendering a single page is going to result in multiple round-trips, which will cancel out all of your efficiency gains just from the latency. (this isn’t as big of a problem in microservice case, since the two services are probably in the same data center, and even if they’re not, they’re certainly not on LTE or anything)

                                There are workarounds, of course. Smart people, like the discourse.org team, will ship the JSON payload bundled into the initial HTML payload, so the interface isn’t so chatty any more. But now you’re not just serving a static HTML file any more; you’ve built a thin server-side “rendering” tier whose job it is to predict what the JavaScript is going to need before it asks for it. That logic is now duplicated into two spots: once in the app itself, and once in the “predictor” renderer.
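                                That bundling trick can be sketched in a few lines; the element id and payload shape here are made up, not Discourse’s actual mechanism:

                                ```javascript
                                // Embed the JSON the client app would otherwise fetch into
                                // the initial HTML, so the first paint needs no extra API
                                // round trip.
                                function embedPayload(html, payload) {
                                  const tag =
                                    '<script type="application/json" id="preloaded-data">' +
                                    // Escape '<' so the payload can't break out of the tag
                                    // with a stray "</script>".
                                    JSON.stringify(payload).replace(/</g, '\\u003c') +
                                    '</script>';
                                  return html.replace('</body>', tag + '</body>');
                                }
                                ```

                                On the client, the app reads `JSON.parse(document.getElementById('preloaded-data').textContent)` instead of making its first API call — which is exactly the duplicated “predictor” logic described above.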

                                You can also try to design your API to reduce round-trips, chipping into its orthogonality, or you can use a generic query system (like GraphQL), making the API harder to cache and requiring complicated policy tooling to avoid becoming a DoS vector. But both of these options still require two round trips at least: one to load the JavaScript app, and one to make the API call.

                                As for caching server-side rendered pages in pieces: there are “edge logic” solutions like CloudFlare Workers or Server-Side Includes that can solve your Big HTML Blob caching problem. The CloudFlare worker makes multiple HTTP requests on behalf of the user, caches the pieces, but serves a single HTML blob at the other end.

                                1. 5

                                  The thing is, those two round trips you’re describing for GraphQL remain the only two round trips for a huge amount of functionality that you can now easily code into the front-end with decent modern tools, while you would need to split it across multiple pages in a server-side rendering system.

                                  The technical level of UX you can get with a modern framework like React does not compare to what you can do easily with the old-school templating languages of the past. And the GitHub example you’ve used can serve as a great counterargument, because they have 1) TERRIBLE uptime (their status page looks like a Pez dispenser) and 2) a low level of interactivity by modern standards.

                                  1. 9

                                    The thing is, those two round trips you’re describing for GraphQL remain the only two round trips for a huge amount of functionality that you can now easily code into the front-end with decent modern tools, while you would need to split it across multiple pages in a server-side rendering system.

                                    Initial load time is your first impression, and first impressions matter.

                                    But let’s instead talk about Discourse, my favourite SPA by far. That app is complicated. They use a setup with multiple bundles to be able to render the page without downloading all of the app’s JavaScript (look for all the -bundle.js files). They hack around with OS-specific workarounds to avoid spending minutes at a time on rendering or actual OS-specific bugs. They reimplement parts of their underlying framework (Ember) for performance reasons. Because that’s what you have to do if you want a client-side JavaScript app to run well on a potato phone.

                                    I’m not even saying that they made a bad choice. Infinite scrolling is kind of an unavoidable part of their app’s design goal of letting people just read with as few impediments as physically possible. But it’s hacky, complicated, and the initial load time is still a lot worse than a site like Lobsters.

                                    The technical level of UX you can get with a modern framework like React does not compare to what you can do easily with the old-school templating languages of the past.

                                    What you’re saying is true. It also doesn’t really matter most of the time; it’s not that hard to add React components to an otherwise server-side-rendered app. It’s what Medium, for example, does.

                                    And the GitHub example you’ve used can serve as a great counterargument, because they have 1) TERRIBLE uptime (their status page looks like a Pez dispenser) and 2) a low level of interactivity by modern standards.

                                    So? It’s not their UI layer that’s having uptime problems. It’s GitHub Actions, Webhooks, and their MySQL cluster that dominate their downtime retrospectives. Distributed databases are an unsolved problem.

                                    GitLab uses client-side rendering, and their problems are almost identical. Maintenance work on their database (they use PostgreSQL), downtime in GitLab CI, and trouble delivering webhooks.

                                  2. 4

                                    This is not really an argument in favour of browser-side JavaScript; it’s an argument in favour of a microservice architecture

                                    You don’t need microservices if you have a proper module system.

                                    1. 1

                                      I’ve heard advice to use modules instead of microservices before, but never seen an example of it. So here’s my attempt at working out an example, which might also help anyone who isn’t clear on what that advice is supposed to mean.

                                      Given an API defined like this (in JavaScript-like pseudocode):

                                      api/products.js
                                      function getProduct(id) {
                                          // talk to database, return JSON
                                          // might throw error if database fails or there is a bug in this code
                                      }
                                      
                                      defineEndpoint('product/:id', getProduct)
                                      

                                      The suggestion is to turn this HTML rendering code, which uses a call over the local network to load the API response:

                                      networking/index.js
                                      function loadApiEndpoint(endpoint) {
                                          return fetch('http://localhost:5678/' + endpoint).then((res) => res.json())
                                      }
                                      
                                      rendering/products.js
                                      import {loadApiEndpoint} from '../networking'
                                      
                                      async function renderProductPage(id) {
                                          try {
                                              const productInfo = await loadApiEndpoint(`products/${id}`)
                                              return `<h1>${productInfo.name}</h1>`
                                          } catch {
                                              console.error('network error, database failure, or bug in API code')
                                          }
                                      }
                                      

                                      Into this code, which uses the language’s module system to find the relevant API code and get its response:

                                      rendering/products.js
                                      import {getProduct} from '../api/products'
                                      
                                      async function renderProductPage(id) {
                                          try {
                                              const productInfo = await getProduct(id)
                                              return `<h1>${productInfo.name}</h1>`
                                          } catch {
                                              console.error('database failure, or bug in API code')
                                          }
                                      }
                                      

                                      Is that what you meant by your advice?

                                      1. 3

                                        A proper module system is one that lets you trust that there is isolation between components. This lets each team write their own module without breaking each other’s code.

                                        I don’t think one exists yet; it would need to segregate, or apply quotas to, memory allocations, network traffic, and CPU use, so that teams could not break each other’s stuff.

                                  3. 15

                                    Most apps don’t need an API, or if they do, it’s easy enough to just add a REST API in most cases. It’s not like you can use the REST API for your frontend app anyway if you want decent performance. GraphQL is kinda designed to solve these problems, but that comes with its own set of trade-offs and problems.

                                    For very small, very infrequently used, very simple applications, server side rendering makes sense. In just about all other cases, having the web browser just be another client that hits a standard API is strongly preferable, IMHO.

                                    The thing is that the browser isn’t really “just another client”; programming a script to count my contributions on GitHub is very different from building a functional and snappy UI, and has rather different requirements.

                                    “Write an API once, use it everywhere” is one of those things that sounds fantastic, but I’ve never seen it work well anywhere. You can often smell apps that are built on these kind of APIs, because you’re waiting forever for data to load (Stripe, SendGrid).

                                    It’s not like I don’t see the value in frontend apps, btw, but there are pros and cons to both approaches, and when done well both work well. One of the downsides of frontend apps is that it’s quite hard to do them well; doing things well tends to be easier on the backend.

                                    Relegating all backend apps to just “very small, very infrequently used, very simple applications” seems rather lacking in nuance. Clearly there are many backend apps that work quite well.

                                    tl;dr: there is no silver bullet.


                                    Some other tidbits:

                                    If you re-render the page every time something changes, caches become useless

                                    Most “server side apps” have some dynamic content. E.g. if you click a button somewhere it loads the (server-generated) partial and replaces/inserts the content. An example is Lobsters where if you click “post” it just sends an XHR request and inserts your comment, instead of doing a full page refresh. “Pure” server-side apps are pretty rare these days.

                                    You can also use ESI to cache parts of a page; e.g. Varnish can do that.
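                                    As a sketch of that partial-update pattern (all names here are made up — the fragment URL, the `#comments` selector, and the form handling are illustrative, not Lobsters’ actual code):

                                    ```javascript
                                    // Hypothetical: submit a comment via fetch, then ask the server to
                                    // render just the new fragment and splice it into the page, instead
                                    // of doing a full page refresh.
                                    function partialUrl(storyId) {
                                        return `/comments/${storyId}/partial`
                                    }

                                    async function postComment(form, storyId) {
                                        await fetch(form.action, { method: 'POST', body: new FormData(form) })
                                        // the server returns a rendered HTML fragment, not JSON
                                        const html = await (await fetch(partialUrl(storyId))).text()
                                        document.querySelector('#comments').insertAdjacentHTML('beforeend', html)
                                    }
                                    ```

                                    The point is that the HTML still comes from the server; the client only decides where to put it.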

                                    Inappropriate use of client-side rendering is why to find out my electricity bill I have to wait ~10 seconds while a big React app loads and then hits up 5 REST endpoints. Those 5 REST endpoints are now machine-accessible, which facilitates testing and scriptable access.

                                    I still have to wait 10 seconds for a simple operation. It’s not good UX, no matter what the reasons. I think this is a classic example of an engineer pointing out there are good technical reasons for something, while ignoring that users are gnashing their teeth and dreading your app because it’s so horribly frustratingly slow to use. That’s not solving problems, it’s creating them.

                                    I’ll bet that these endpoints are undocumented and not intended to be used directly by you, so the usefulness of them as an “API” is rather limited.

                                    It’s not like you can’t test server-side apps; if anything it’s easier.

                                    without JavaScript frameworks on the client side we’re going to have to deal with rendering engine quirks on the server side.

                                    It’s pretty rare that I run into this, and if I do, they’re small visual issues. “The browser” is a huge unknown platform influenced by many factors (OS, settings, extensions, etc.) which are hard to reason about. A weird bug in your backend can be hard to solve; a weird bug on the frontend can be almost impossible to solve if you’re having trouble reproducing it on your machine (which can be hard!)

                                    1. 2

                                      GraphQL is kinda designed to solve these problems, but that comes with its own set of trade-offs and problems.

                                      Can you say a few words about the problems/trade-offs you see with GraphQL? When would you use it over a RESTful API?

                                      1. 2

                                        The advantage of GraphQL is that it’s essentially an interface to your database, allowing people to get a lot of data in one go. The disadvantage of GraphQL is that it’s essentially an interface to your database :-)

                                        You really don’t want to allow people to query unlimited amounts of data, so you need to be careful about what you allow and disallow, which leads to all sorts of complexity. GraphQL is pretty neat for certain cases, especially large APIs (e.g. GitHub), but for a lot of cases just adding a few JSON endpoints which combine data (e.g. “get customer and tickets”) works just as well.
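                                        The “combine data” endpoint could look something like this (a sketch with made-up names; the inline data stands in for real database queries):

                                        ```javascript
                                        // One handler returns the customer and their tickets together,
                                        // so the client makes a single request instead of two.
                                        function getCustomerWithTickets(id) {
                                            const customer = { id, name: `Customer ${id}` }       // stand-in for a customers query
                                            const tickets = [{ id: 1, subject: 'Login broken' }]  // stand-in for a tickets query
                                            return { customer, tickets }
                                        }
                                        ```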

                                        Disclaimer: I’m not an expert on GraphQL; I only used it once. There is probably much more to be said about this.

                                    2. 1

                                      If you’re using a good server-side framework/toolkit/library/box-of-code/whatever-name-you-like, a lot of the same routes and logic that present an HTML-formatted view can return a JSON-formatted view, or an XML-formatted view, or whatever other serialisation tech you prefer, using literally the same business logic and maybe a little per-serialiser code.
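                                      A minimal sketch of that idea, with made-up names: the business logic lives in one function, and thin serialisers turn its result into whichever format the route needs.

                                      ```javascript
                                      // Hypothetical: one piece of business logic, several serialisers.
                                      function getProductData(id) {
                                          return { id, name: `Product ${id}` }   // stand-in for the real lookup
                                      }

                                      const serialisers = {
                                          html: (data) => `<h1>${data.name}</h1>`,
                                          json: (data) => JSON.stringify(data),
                                      }

                                      // a route handler would pick the serialiser from the Accept header
                                      // or a ?format= parameter, then call this
                                      function renderProduct(id, format) {
                                          return serialisers[format](getProductData(id))
                                      }
                                      ```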

                                    3. 7

                                      On a positive note, I am continually encouraged by the fact that more and more people have started to realize the problems with modern web pages. I think a counter movement is really beginning to grow, which will have positive effects on the evolution of web development over the next years.

                                      1. 5

                                        I’d say SSR is cheaper to implement in cases where you either have static content, or it’s reasonable to reload the entire page on user input. In most other situations doing SSR often leads to a lot of complexity because you end up having to synchronize state between the client and the server. My experience is that for any web apps that have non-trivial UIs that have high levels of interactivity, it’s much easier to manage the state on the client using SPA style. This also results in a number of other advantages. You get a clear separation between the frontend and backend out of the box. The server will have an API that you could write native mobile clients for. And the server can be kept stateless allowing for horizontal scaling.

                                        I think the problem is that a lot of tech stacks make working with SPAs awkward and people tend to discard the whole idea due to inadequate tooling.

                                        1. 5

                                          I don’t know if this is really so controversial; I know many people who would agree on the broad outline of this, including frontend engineers. “Frontend vs. backend” is a bit of a false dilemma anyway IMHO, since quite a few apps combine the best of both; generally speaking, those tend to be the apps that work the best.

                                          1. 7

                                            This is a half-truth; almost every web application should be both. The fact that this is currently too much work is a huge indictment of current development tools.

                                            1. 1

                                              Have you seen Phoenix LiveView? I have not yet had the opportunity to use it.

                                              1. 1

                                                No, I’ve never heard of it. I’ve heard of Elixir, though; it’s not really something that interests me.

                                              2. 1

                                                Right now I use both for my current project. The server side is Jinja2 and the client side is handled by some Vue.js. So far, it has been a great mix because the app mixes “static” content and dashboards with multiple graphics that are updated every minute (each graphic has its own API endpoint).

                                              3. 3

                                                In general I’d agree: a vast number of projects look like Django apps with boring templated views, and a surprising proportion of the “heavy interaction” stuff turns out to be just “add a JSON handler to look up postcodes” and so on.

                                                Third-party APIs? YAGNI (yet). And by the time you do need them, you understand the problem domain better and your underlying schema is pretty well sorted out, so it’s easier to do.

                                                For my own slightly bizarre take on server-driven sites without templates: https://nick.zoic.org/art/zombie-remote-control-of-the-dom/ I’ve actually now got a project in mind this would be good for, so it might see daylight some day after all.

                                                1. 1

                                                  It would be nice to add a slim fallback for when JS is disabled (since the backend is already doing everything).

                                                  1. 1

                                                    Yeah, maybe. Without JS, the only way to submit anything would be to make every interactable thing a form of its own … maybe doable as a zombie handler I suppose. I’ll have a think about it when I return to the project …

                                                    1. 2

                                                      If you don’t need Internet Explorer support, you can use <iframe srcdoc="escaped html form"></iframe>.

                                                      Then, form submission only updates the iframe content (instead of the whole page).

                                                      You can do this to (eg) render upvote/downvote buttons without javascript (so that hitting ‘upvote’ doesn’t refresh the page).
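                                                      A sketch of what that could look like, with a hypothetical /upvote endpoint: the form’s HTML is escaped into the srcdoc attribute, so submitting the form only navigates the frame, not the page.

                                                      ```javascript
                                                      // Escaping matters because srcdoc is an HTML attribute
                                                      // that itself contains HTML.
                                                      function escapeHtml(s) {
                                                          return s.replace(/&/g, '&amp;').replace(/</g, '&lt;')
                                                                  .replace(/>/g, '&gt;').replace(/"/g, '&quot;')
                                                      }

                                                      // Build an <iframe srcdoc> wrapper around a vote form;
                                                      // the endpoint name is made up.
                                                      function voteFrame(itemId) {
                                                          const form =
                                                              `<form method="post" action="/upvote/${itemId}">` +
                                                              `<button>upvote</button></form>`
                                                          return `<iframe srcdoc="${escapeHtml(form)}"></iframe>`
                                                      }
                                                      ```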

                                                2. 2

                                                  I agree with the article; however, another advantage of the SPA model nowadays is that it makes it easier to hire a front-end developer.

                                                  I don’t think there are as many front-end developers comfortable working on a server-rendered app, whereas there are swaths of (especially junior) front-end developers proficient with React and JavaScript.

                                                  1. 2

                                                    God bless the author!

                                                    1. 1

                                                      Perhaps if the author had summarized his position as:

                                                      • if you do not need a mobile app, and
                                                      • if you do not need complex interaction (e.g. an AV editor),

                                                      then use server-side rendering, I think it would be a more agreeable argument (although still with lots of caveats).

                                                      Given that a ‘Thiel truth’ is defined as “What important truth do very few people agree with you on?”, I am wondering if the assertion below sort of ‘shifts the goalposts’ on what ‘truth’ we are examining:

                                                      “People don’t want to install your app. Many important, profitable applications aren’t used enough to make a native mobile app worthwhile. Most online shops, banking, festival ticketing, government forms, etc. Therefore you will not have to support both server-side rendering and an API for your native apps.”

                                                      Meaning that the author is switching the assertion being examined from:

                                                      ‘server-side rendering is the way to go’

                                                      to another assertion:

                                                      1. 1

                                                        Server-side rendering seems like the right way to keep things simple, reliable, and debuggable. What I’m still trying to figure out is how to properly implement page updates. For example, there is an old CI system we use that uses server-side rendering only (almost no JS involved). Some pages contain long lists of jobs being built; however, these pages are only updated on Ctrl+R :( I’m wondering what the best approach would be to add an auto-update feature to this website (I have access to its source code and could deploy it easily)?
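                                                        One low-effort approach, assuming the server can be taught to render just the job list as a fragment (the /jobs/partial URL, the #jobs element, and the interval below are all hypothetical): poll the fragment and swap the list’s HTML in place.

                                                        ```javascript
                                                        // Poll a server-rendered fragment and replace the job
                                                        // list in place, keeping all rendering on the server.
                                                        const POLL_INTERVAL_MS = 5000

                                                        async function refreshJobs() {
                                                            const resp = await fetch('/jobs/partial')
                                                            document.querySelector('#jobs').innerHTML = await resp.text()
                                                        }

                                                        function startPolling() {
                                                            return setInterval(refreshJobs, POLL_INTERVAL_MS)
                                                        }
                                                        ```

                                                        Server-sent events or a WebSocket would avoid the polling, but require bigger changes to an old codebase.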

                                                        1. 1

                                                          The ultimate form of server-side rendering would be something like Google Stadia that doesn’t even need HTML, JS or API calls.

                                                          Though I’ve heard that rendering servers located near the client would be required to solve the latency issue.

                                                          1. 4

                                                            Didn’t Opera use to do that back in the flip-phone era? A server would render the website on your behalf and send you an image of the page. I can’t imagine it working too well in the current era.

                                                            1. 3

                                                              Yes! Opera mini. It worked pretty well on 2g/3g.