1. 34

  2. 23

    As someone who has to use the Google Cloud UI multiple times a day, I was excited to see an article with a performance analysis. This UI is the only webapp I use that regularly kills Chrome browser tabs. When it’s not killing tabs, it’s giving me multi-second input latency or responding to actions I took several seconds ago.

    Unfortunately, this analysis didn’t go into the depth I’d like; I’d love to see a more opinionated, deeper analysis. For instance, if I’m reading it correctly, it takes 150ms to load the first spinner icon. That’s actually not a great number, given it should be served from the CDN. JS parsing and compilation take 250ms and 750ms, respectively. Honestly, from a user perspective, that’s not even that bad. If the page took 1.15s to load, I’d be pretty happy. Then there’s 1s of waiting for the initial Angular render. So we’re at a little over 2s.

    That’s not so bad.

    Oh wait, it’s 2s until the 2nd spinner. Things just go downhill from there.

    All in all, their recommendations are pretty weak. Change the priority of a piece of content. Remove a bit of unused code. None of those things would fix the UX disaster. Google Cloud Console is like the infamous Fool’s Gold sandwich (a jar of peanut butter, a jar of jelly, and a pound of bacon), and this article is recommending they use low-salt bacon.

    1. 4

      As someone else who uses the console daily, I completely agree. Navigating through the GKE workloads takes several seconds per click, even when just going back. Using the browser’s built-in back button dumps you on the wrong page half the time.

      I also once looked at Firefox’s task manager view and saw that the console was using 5.5GB of RAM across 5 tabs! I don’t understand how a team can allow such egregious memory leaks.

      1. 1

        It’s not so much a memory leak as simple misuse. The article points out some problems: e.g., they load the same 200 kB JS object in two places. This is a problem because 1) if it were JSON, the scripts loading it would benefit from one another and from the browser cache, and 2) the thing now gets instantiated twice in memory. So that looks very likely like 400 kB of possibly unneeded stuff per tab. Like I said, not directly a memory leak, just bad misuse. (Although for a team that maintains cloud infrastructure, you could argue that shipping a client this badly done effectively is a memory leak.)
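
        As an illustration of the caching point (a purely hypothetical sketch, not the console’s actual code): a shared, memoized loader means two call sites get one copy of the data instead of two independently bundled copies.

        ```javascript
        // Hypothetical sketch: share one parsed copy of a big JSON payload
        // between two call sites, instead of bundling it twice.
        let cached = null;
        let fetches = 0; // counts simulated network fetches

        function loadBigConfig(fetchJson) {
          if (cached === null) {
            cached = fetchJson(); // fetched and parsed only once
          }
          return cached; // every caller shares the same object
        }

        // Two independent scripts requesting the same data:
        const fakeFetch = () => { fetches++; return { size: "200kb" }; };
        const a = loadBigConfig(fakeFetch);
        const b = loadBigConfig(fakeFetch);

        console.log(fetches); // 1: one fetch instead of two
        console.log(a === b); // true: one instance in memory, not two
        ```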

        1. 2

          What I’m specifically talking about is definitely a memory leak, it’s different from what you described. I frequently open the details for a Deployment on the GKE workloads page to check the status of a code change and to look at logs. I usually leave the tab open because navigating to that page is so slow.

          Over time it creeps up in RAM usage; the worst I saw was a single page taking 2.45GB of RAM. It must be polling for updates in the background and never cleaning up the old state. What’s also amazing to me is that I can run kubectl describe foo and it takes about a second with pessimistically 100KB of output data, yet just clicking the refresh status button on the already loaded page, with the same data, takes several seconds.
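
          The suspected pattern can be sketched like this (purely hypothetical, not the console’s actual code): a poller that appends every status snapshot instead of replacing the previous one grows without bound.

          ```javascript
          // Hypothetical sketch of a polling leak: each refresh keeps the old
          // snapshots alive instead of replacing them.
          const snapshots = [];            // leaky: grows forever
          const latest = { value: null };  // fixed: holds only the newest snapshot

          function pollLeaky(fetchStatus) {
            snapshots.push(fetchStatus()); // old payloads are never released
          }

          function pollFixed(fetchStatus) {
            latest.value = fetchStatus();  // previous payload becomes collectable
          }

          // Simulate 1000 background polls of a ~100 KB status payload.
          const fakeStatus = () => "x".repeat(100 * 1024);
          for (let i = 0; i < 1000; i++) pollLeaky(fakeStatus);
          for (let i = 0; i < 1000; i++) pollFixed(fakeStatus);

          console.log(snapshots.length); // 1000 retained payloads (~100 MB)
          ```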

    2. 14

      The growing bloat on the web really kills websites for me. One recent example is the new reddit design, which made me quit it altogether (among other reasons).

      Why does it always need to be lazy-loading Ajax crap? JS-generated transitions are always horrible and clunky. Let’s hope there will be a move towards a more sustainable and suckless direction at some point in the future.

      You don’t need JavaScript in many, many cases, and if you do, a few kB will do just fine. And you especially don’t need it for heavyweight UI orchestration beyond the UI model the browser already provides and is optimized for.

      1. 6

        old.reddit.com still works fine; say what you want about the new UI, but at least they’re not forcing it upon you.

        1. 3

          The new Reddit redesign is such a disaster. After clicking on a post, I frequently find myself scrolling the background (the list of posts) rather than scrolling through comments in the post itself. It frequently takes many seconds for the website to respond to clicks on ridiculously powerful hardware. Scrolling through subreddits with many image posts bogs down the site completely, probably because infinite scrolling + lots of images and gifs + no technical competence is a predictable disaster. Searching in subreddits literally doesn’t work; when I search something, more often than not, the site will just say ‘No results found for “”’.

          I obviously mostly use the old website (which is also a design disaster in many ways, but at least it works). I just don’t understand how a team could see the result that is the redesigned website and be happy with it.

          1. 6

            You bring up really good points and explained the problem well! It’s especially shocking when you browse the modern web with an older computer. I booted my old Mac mini from 2008 and was really sad to see that it was impossible to browse the web without massive lag and problems (text-only and light sites were just fine). Do we really want to waste all our advances by just keeping up with more and more useless cruft that brings essentially zero benefit to the end-user?

            A good case is YouTube: They’ve stuffed their video pages with megabytes of Javascript, Canvases, AJAX-magic and whatnot, and even though it’s probably 4 orders of magnitude heavier than the video page from 10-13 years ago, it essentially does the same thing, while actually being worse at it, because it’s often unbearably sluggish and clunky. I often press the “back” button in my browser to return to the previous video, only to find out that their “history-emulation” in Javascript failed to keep up.

            1. 6

              > I just don’t understand how a team could see the result that is the redesigned website and be happy with it.

              There are millions of web programmers in the world. I doubt 95-99% of them would ever have become engineers if the current job market didn’t offer good career prospects; they aren’t engineers at heart.

              The rush and the satisfaction of doing a good piece of engineering just doesn’t resonate with these people. Working at a company with a hip factor and perks, following the JS trend du jour because it’s trendy rather than for its technical merits, and a paycheck: this is what the majority of developers (especially web developers, due to the lower entry barrier) care about. Never do they stop for 5 minutes and think: “Why are we doing this? What value does this provide to society? What are the advantages and disadvantages of replacing a legacy product with a new flashy one with a Material Design UI, even if it’s 1000 times slower?” These essential questions don’t matter to the bulk of web developers. What matters is a paycheck and a quasi-religious sense of belonging to the users of this or that stack, preferably one generous about handing out t-shirts (see the Hacktoberfest fiasco) and/or stickers.

              Why write a clean, elegant piece of software in C or Pascal, well thought out with strong theoretical foundations, when you can hack together a buggy yet flashy version with Deno and get thousands of GitHub stars? Who cares about code elegance… pfff… GitHub stars, man! That’s where it’s at!

              1. 9

                I can replace the word “web” with “C” and substitute the relevant misgivings with those of an ’80s die-hard assembler/ALGOL/Lisp programmer to make it sound like it was from 1991. Your comment would still be just as ridiculous as it is now.

                1. 5

                  s/ridiculous/true/

                  :-P

                  1. 2

                    I strongly disagree with you there. C vs. assembly actually provides a lot of benefits with minimal losses in performance, and back in the ’90s we didn’t have the hipster culture around it that we see today with web development. @pm is spot-on with his analysis, in my opinion.

            2. 4

              This is exactly the reason why I stopped using Google Cloud for my personal projects two years ago. I was always irritated when I needed to do something in the UI.

              What’s funny is that Google is the company advocating for faster content loading. But then even their landing page scores 17 out of 100 (desktop: 70/100) on PageSpeed Insights.

              1. 4

                > While the performance of static pages tends to be dominated by render-blocking network requests

                In (maybe) the majority of cases, I’d say static sites aren’t render-blocking. If you go to your devtools, set throttling to 56kbps, and then click a link on the orange website with >100 comments, you will see a useful section of the page load well before all of the HTML is downloaded. Browsers doing this kind of trickery outweighs almost all JS performance magic in my experience. (I’d be interested in a rundown of these kinds of tricks and how to optimise for them, if anyone has a link to hand.)

                1. 2

                  The core of it is simple, but not easy: deliver enough information to start rendering. Practically, that means a short block of inline styles, followed by content, with no style or script tags that reference separate resources (put those after the main content).
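
                  A minimal sketch of that ordering (the class and file names here are made up for illustration): a small inline style block, then the content, with external styles and scripts referenced only after it.

                  ```html
                  <!DOCTYPE html>
                  <html>
                  <head>
                    <!-- Just enough inline CSS to lay out above-the-fold content;
                         nothing here blocks on a separate network request. -->
                    <style>
                      body { font: 16px/1.5 sans-serif; max-width: 40em; margin: 0 auto; }
                    </style>
                  </head>
                  <body>
                    <h1>Article title</h1>
                    <p>Main content can render as soon as this HTML arrives.</p>
                    <!-- Non-critical resources go after the content (hypothetical files). -->
                    <link rel="stylesheet" href="/full-theme.css">
                    <script src="/app.js" defer></script>
                  </body>
                  </html>
                  ```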

                  1. 3

                    I wouldn’t overthink this. Just serve a simple HTML document and refer to an external stylesheet using “<link>” or “<?xml-stylesheet>” (rarely known but really cool) that is shared across the site. Once it has been loaded just once, every subsequent visit to other pages yields instant styling, because the browser has cached it.

                    If you overthink it with inline styles and orderings, the browser won’t be able to leverage caching, and when you put style declarations at the end, the browser has no chance to “invoke” the cached style until the HTML is fully loaded, which might make a difference if you have a really slow connection or a very large HTML document (e.g. a large table).

                    And even the first load, which I mentioned earlier, won’t be harmed too much by missing external CSS, given browsers are optimized enough to immediately send a request (on an open keep-alive connection, which is the default) for external CSS as soon as they “read” the <link> tag. The CSS data will start streaming in after as little as one RTT, which is just a few ms, making it comparably fast to inline CSS, with the added bonus of the aforementioned possibility for caching.
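
                    A minimal sketch of this approach, assuming a site-wide stylesheet at a made-up path /site.css:

                    ```html
                    <!DOCTYPE html>
                    <html>
                    <head>
                      <!-- One shared external stylesheet; cached after the first page load,
                           so every later page on the site styles instantly. -->
                      <link rel="stylesheet" href="/site.css">
                    </head>
                    <body>
                      <h1>Fast page</h1>
                    </body>
                    </html>
                    ```

                    For an XML document, the rarely used equivalent is the processing instruction `<?xml-stylesheet type="text/css" href="/site.css"?>` placed before the root element.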

                2. 2

                    It’s good to see that it’s not only my computer. I ran a few neural-network jobs on Google Cloud some months ago, and it was so painful to navigate the UI. Even worse, whenever I had a specific page open (I think an overview page for my neural-network jobs, where they rendered two simple CPU/memory graphs), my browser’s RAM usage went through the roof and my PC got extremely slow.

                    It’s also good to see that Google is becoming a regular company like all the others and produces bad-quality software. This might give some room for other companies to show customers/consumers that there can be better alternatives to Google, because loading time is something that can be immediately experienced.

                  1. 1

                    Gmail has been unusably slow for a good decade by now. Why does every interaction lag perceptibly?