1. 111
    1. 39

      I kind of didn’t want to post this because complaining about JS bloat is like complaining about the weather, but damn, these numbers are really crazy.

      1. 5

        I looked at my own site. Our JS, which does simple stuff like showing newsletter sign-ups and hamburger menus, is 76 KB (unzipped). GTM adds another half a meg of tracking crap for God knows what reason. Lord knows I have resisted GTM with all of my might, but some things are outside the decision-making capacity of a simple webmistress.

        1. 6

          GTM

          Blocking Google Analytics and GTM in the browser makes web browsing shockingly faster, in my experience.

      2. 27

        Not going to change.

        But it’s a great competitive advantage. Nowadays it’s amazing how much of a difference a couple of competent people can make when, e.g., instead of spending $1M/y on AWS, they set up a handful of rented bare-metal servers with NixOS, keepalived, an nginx/haproxy LB, etc., and then write an app that serves a lean frontend at 100x less bandwidth and CPU power with HTMX and Go/Rust/“whatever is efficient & lean”.
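
        To make the HTMX part concrete, here is a minimal sketch (in TypeScript/Node purely for illustration; the endpoint path and markup are made up): the server returns small HTML fragments and HTMX swaps them into the page, so there is no multi-megabyte SPA bundle to ship.

        ```ts
        // Hypothetical "lean frontend" sketch: server-rendered HTML plus HTMX.
        import { createServer } from "node:http";

        createServer((req, res) => {
          if (req.url === "/") {
            res.setHeader("Content-Type", "text/html");
            res.end(`<!doctype html>
        <script src="https://unpkg.com/htmx.org@1.9.12"></script>
        <button hx-get="/fragment" hx-swap="outerHTML">Sign up for the newsletter</button>`);
          } else if (req.url === "/fragment") {
            // HTMX replaces the button with this server-rendered fragment;
            // no client-side framework, router, or state management needed.
            res.setHeader("Content-Type", "text/html");
            res.end(`<form method="post" action="/subscribe">
          <input type="email" name="email">
          <button>Subscribe</button>
        </form>`);
          } else {
            res.statusCode = 404;
            res.end();
          }
        }).listen(8080);
        ```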

        The thing with bloat is that it compounds very fast. You start with AWS because “simple to start”, “it can scale” and “availability”, you need to use slow VMs, so now your IO is slow as well; you need to scale out horizontally, so you need a distributed system, queues, fancy deployment systems, Kubernetes, an ops team to manage it, a permission system, HR… Your devs can only write JS and React, so every site with 3 buttons is now a 10 MB SPA that needs tons of machines to run; you get slow UX, so you need prefetching, a CDN, careful deployment methods, staging environments, and so on…

        In the meantime, a small team that just stays lean and designs and supports a somewhat more bespoke, lower-level architecture and tech stack built for the task at hand can achieve the same end-user experience at a fraction of the cost, and ultimately much faster.

        Stack Overflow was for a long time the poster child for how it’s done, but I’d say nowadays it’s much easier to replicate it at smaller scale from the ground up, and a single powerful machine can handle even more traffic before needing to scale horizontally at all (which you might do anyway just for some basic redundancy).

        1. 8

          You start with AWS because “simple to start”, “it can scale” and “availability”,

          These words can stop a lot of good work dead in its tracks. A simple, cheap, and fast solution that could be developed in a few days can be killed in a stand-up meeting if a (well-intentioned) engineer argues that it can’t elastically scale horizontally, and that therefore, if some unlikely scenario were to happen (e.g., gaining 10 million users in the next month), the solution would crumble under the load. This is usually followed by a pivot to a much more difficult, much more complex, much more costly approach.

          1. 6

            You start with AWS because “simple to start”, “it can scale” and “availability”, you need to use slow […] and so on…

            Wait, are you Zack from devops? You might be a colleague in my department, because you’ve described our company’s tech history almost exactly. I wish things could improve, but I feel like the tradition, the culture, the code cruft, the sheer psychological inertia of the status quo have made any prospect of positive change near impossible.

            1. 6

              In my experience it’s practically impossible to change the general mindset of a tech team. The “let’s over-engineer everything” people aren’t going away, and multiple people need to fight them just to neutralize their damage.

            2. 4

              it’s much easier to replicate it at smaller scale from the ground up, and a single powerful machine can handle even more traffic before needing to scale horizontally

              Back in the day, Stack Overflow published a blog post about how running SQL Server on high-end PCIe-attached SSDs made it possible to run the whole website with very little horizontal scaling.

              Nowadays I think the ordinary M.2 NVMe SSDs that people get in inexpensive personal computers are about that fast.

            3. 17

              To me, worse than the download size of these bundles is the actual performance of websites when JavaScript is enabled. It’s not great on desktops, but it’s atrocious on mobile phones, especially on low battery. I’m not sure if it’s due to the JavaScript engine itself or to the interaction between the JavaScript engine and the layout engine. Maybe it’s due to 10,000 small mallocs happening on every UI interaction. I don’t know whether this is fixable or whether we’ve already hit the limit of efficiency given the programming model exposed to websites.

              My only recourse is to disable JavaScript for casual browsing. I wish platforms would support that use case better.

              1. 13

                It’s nice in the winter because it turns my phone into a hand-warmer.

                1. 3

                  It’s definitely fixable: folks could stop bundling so much useless crap into their pages.

                  1. 5

                    But there’s not a button labelled “bundle less crap” that can be trivially pushed.

                    Every component in modern bloatware is nominally ‘used’ in that removing it from the dependency tree would break some functionality.

                    The far more difficult problem to solve is figuring out how to eradicate the code paths that never actually get used, so that they don’t end up in user-facing artifacts in the first place.

                    Combined with this is the fact that, for a modern business, the marginal cost of bundling yet another component is almost $0. You’ve somehow got to find a solution to this problem that adds almost no meaningful overhead to dev time or infrastructure costs for it to be an attractive option.

                    1. 2

                      Dead code elimination is enabled by default in Webpack and the rest of the usual build tools.

                      the code paths that never actually get used, so that they don’t end up in user-facing artifacts in the first place.

                      The thing is, most code paths are used by someone, just maybe not by you.

                      The solution to this for big apps is to split your build artifact up into many separate JS files, analogous to DLLs in a C program. That way your entry point can be very small and quick to parse; it then loads just the DLLs you need to draw the first screen and make it interactive. After that, you can either eagerly or lazily initialize the remaining DLLs, depending on the performance tradeoff.

                      I work on Notion, 16 MB according to this measurement. We work hard to keep our entry-point module small, and we load a lot of DLLs to get to that 16 MB total. On a slow connection you’ll see the main document load and become interactive first, leaving the sidebar blank, since it’s a lower priority and so we initialize it after the document editor. We aren’t necessarily using all 16 MB of that code right away; a bunch of that is pre-fetching the DLLs for features/menus so that we’re ready to execute them as soon as you, say, click on the settings button, instead of having awkward lag while we download the settings DLL after you click on settings.
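
                      The mechanism behind those DLLs, for what it’s worth, is mostly just dynamic import(): bundlers like Webpack emit every import()-ed module as its own chunk, which you can initialize right away, later, or merely pre-fetch. A rough sketch (module names are made up, not our actual code):

                      ```ts
                      // Entry point stays tiny: only the editor is statically imported.
                      import { renderDocument } from "./editor"; // hypothetical module

                      async function boot() {
                        renderDocument(); // the main document becomes interactive first

                        // Lower-priority UI arrives as a separate, lazily loaded chunk.
                        const { renderSidebar } = await import("./sidebar"); // hypothetical module
                        renderSidebar();

                        // Rarely used features are only pre-fetched: the chunk downloads in
                        // idle time so there's no lag when the user actually opens settings.
                        void import(/* webpackPrefetch: true */ "./settings"); // hypothetical module
                      }

                      boot();
                      ```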

                      1. 2

                        There are solutions that strip unused functions. They need a little bit of effort, but they can result in much leaner deployments. What the frameworks and bundlers could do is integrate those options and present them as the thing you do in the normal flow: basically, normalise asking why someone hasn’t used them, rather than it being something special.
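
                        Part of it is even just import style: with ES modules a bundler can tree-shake named imports, while CommonJS-style imports tend to drag in the whole package. The classic lodash example (assuming a bundler with tree shaking enabled):

                        ```ts
                        // Tree-shakeable: only `debounce` (plus what it references) ends up
                        // in the bundle, because lodash-es ships ES modules.
                        import { debounce } from "lodash-es";

                        // Typically NOT tree-shakeable: the CommonJS build is opaque to the
                        // bundler, so the entire library lands in the bundle.
                        import _ from "lodash";

                        const onResize = debounce(() => console.log("resized at", _.now()), 250);
                        window.addEventListener("resize", onResize);
                        ```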

                  2. 15

                    GitLab’s lack of a non-JavaScript version of their site is what pushes me to alternatives. Even GitHub works OK-ish, although stuff is starting to break. You used to be able to browse a git repo on GitHub and view the contents of a file without JavaScript, but that’s broken now. You can still view recent changes at least (e.g. https://github.com/mamedev/mame/commits/master) with JavaScript disabled, although eventually that will probably break too.

                    1. 1

                      If you use GitHub with JS disabled, watch out for grey loading GIFs that, though it’s not immediately obvious, have a pulsating animation. They’ll consume CPU and battery life if you don’t manually remove them with uBlock Origin or similar.

                      Other sites do this as well.
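
                      For anyone who wants to do the same: uBlock Origin’s “My filters” pane accepts cosmetic filters that simply hide matching elements. Something along these lines should work, though the selector here is a placeholder (inspect the actual spinner element first):

                      ```
                      ! Hide animated loading placeholders (selector is hypothetical)
                      github.com##img.loading-spinner
                      ```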

                      1. 1

                        Here’s a personal question for you specifically since you care about being able to use sites without JS: do you think you could see it as an acceptable compromise to be able to browse entirely without JS but needing some JS for writes/form submissions?

                        1. [Comment removed by author]

                        2. 15

                          What’s most of the JavaScript for? Core features, third-party trackers, something else? I opened up the Zoom one, and 1.5 MB of the JS was a single (unminified) A/B testing library.

                          1. 8

                            Exactly. I wouldn’t be surprised if the person at Zoom who installed that did it thinking it would be temporary; then they got laid off, and everyone forgot it existed, if they knew it existed at all. So it sits there like a vestigial tail.

                            In the old days (which were imperfect in their own way), before most organizations quite knew what to do with their websites, there was usually a webmaster or webmistress (titles which, in retrospect, did not age well). Nevertheless, I think the role was a useful one, and it has all but disappeared. While some contractors and a few individuals here and there are concerned about bloat and performance within their specific responsibilities, in many cases there is no one looking out for the whole website full time and asking hard questions about who owns which third-party code, whether it’s really needed, and for how long.

                            And I say that as someone who has built a lot of websites and a lot of third-party JavaScript over the last two decades. As an engineer at a tech company working on core functionality, for example, I never had any say over the landing page. That was marketing’s territory. And my engineering bosses never liked it when the marketing team would commandeer engineering resources anyway, so we were pretty well siloed.

                            The culture of building and maintaining websites lacks cohesion. Ranting about JavaScript developers and the JavaScript ecosystem is counterproductive. But this naming and shaming of specific websites, with specific numbers and screenshots of specific JavaScript files, could actually encourage the people who work for these organizations to speak up and make a compelling argument to the people who hold the purse strings that there is a systemic, organizational problem with bloat.

                            1. 6

                              I haven’t looked into this yet, but a thing I absolutely despise is Google Tag Manager, which lets business people ruin your website on a whim with zero performance testing. I suspect it’s involved in a lot of these.

                          2. 15

                            Compare it to people who really care about performance — Pornhub, 1.4 MB

                            What a twist! LOL!

                            1. 7

                              Soon, web ~~sites~~ apps will have their own “Minimum System Requirements” and “Recommended System Requirements” lists ;)

                              1. 6

                                “Works best in Chrome” is not unusual these days at all, even for things which really should not need any fancy features at all (looking at you, Oracle expense-claiming app).

                                1. 2

                                  I was thinking, comically, of storage, memory, and CPU frequency requirements. But you’re right. What a sad truth :)

                              2. 5

                                Another reason your free software community does not & should not have an official Discord or Slack. Privacy, freedom, data silos, accessibility, & moderation aside, you’re also forcing so much software bloat.

                                1. 4

                                  Feeling proud of having used good old jQuery recently.

                                  1. 3

                                    So all this time I was living under the impression that if, for example, the average web page size is 3 MB, then the JavaScript bundle should be around 1 MB. Surely content should still make up the majority, no?

                                    I find it helpful to relate data sizes to something physical. For example, a page in a novel has around 400 words on it, so it contains a kilobyte or two of data. The whole of Moby Dick is around a megabyte.

                                    Here, then, I doubt that the average blog post approaches two Moby Dicks in content.

                                    1. 1

                                      Lots of blog posts include a photo or three.

                                      1. 2

                                        Consider, e.g., this: https://blog.frantovo.cz/c/380/ (my blog). It is not especially optimized or minimalist (there is a background image, custom fonts, and it is built on Java EE). Despite that, it comes to 418 kB including five blog post images. And it even works in Lynx (a web browser for the terminal), including comment submission.

                                        P.S. Larger photos are OK (e.g. on a photographer’s or model’s portfolio site), as is longer text – that is valuable content and the reason the reader is visiting the website. The issue is the size of everything else – those “helper” parts often consume more bandwidth, CPU, and RAM than the actual content.

                                    2. 3

                                      The contemporary web is just a bad joke. I remember 56 kbps modems and handwritten HTML that was (just sometimes) spiced up with some (also handwritten) JavaScript. Today we have networks several orders of magnitude faster, more powerful processors, faster disks, and more RAM on both servers and clients. But the load speed and response time of web pages and simple web applications are often no better than in the early days; sometimes they are actually worse. Of course, today you can run even really complex applications in a web browser (games, PC emulators, 3D graphics), which is nice, sometimes useful, and which was not possible in the early days (if you do not count VRML, Java applets, or Flash).

                                      1. 1

                                        I would have loved to see the numbers for Advent of Code. While it admittedly serves a small niche of users, it’s nonetheless an example of exceptionally good web design.

                                        1. [Comment removed by author]