1. 42
  1. 8

    I’m slightly disappointed to see that this article is mostly about making Firefox look faster rather than actually making it faster.

    I’m also curious, what does XUL.dll contain? I remember reading articles about replacing XUL with HTML for interfaces, why is XUL.dll still needed?

    1. 22

      The visual and perceived performance wins are arguably easier to explain and visualize, and they were an explicit focus for the major release in June. This isn’t just lipstick on a pig, though. An unresponsive UI is a big problem, regardless of whether the browser is doing work under the hood or not.

      But the IOUtils stuff has some really clear wins in interacting with the disk. Process switching and process pre-allocation also have some really good wins that aren’t just “perceived performance”.

      1. 5

        But the IOUtils stuff has some really clear wins in interacting with the disk. Process switching and process pre-allocation also have some really good wins that aren’t just “perceived performance”.

        No numbers were provided for these unfortunately. :’(

      2. 11

        I’m also curious, what does XUL.dll contain? I remember reading articles about replacing XUL with HTML for interfaces, why is XUL.dll still needed?

        That’s basically “the rendering engine”. The Gecko build system uses libxul / xul.dll as the name for the core rendering code in Firefox. There’s no real connection between the file name and whether XUL elements are still used or not.

        Not sure why it’s not just named “Gecko”, but that probably requires even more archaeology…

        1. 3

          It’s because XUL refers to ‘XML User Interface Language’, which is how Gecko was originally meant to be interfaced with. Gecko sits under XUL, and XUL hasn’t been completely replaced yet.

          “There is no Gecko, only XUL”

          1. 2

            I see, thanks!

          2. 4

            I’m slightly disappointed to see that this article is mostly about making Firefox look faster rather than actually making it faster.

            User-perceived performance can be just as important as actual performance. There are tons of tricks for this and many go back decades while still being relevant today. For example: screenshotting your UI to instantly paint it back to the screen when the user reopens/resumes your app. It’ll still be a moment before you’re actually ready for user interaction, but most of the time it’s actually good enough to offer the illusion of readiness: a user will almost always spend a moment or two looking at the contents of the screen again before actually trying to initiate a more complex interaction, so you don’t actually have to be ready for interaction instantly.

            IIRC this is how the multitasking on many mobile operating systems works today – apps get screenshotted when you switch away from them, and may be suspended or even closed in the background while not being used. But showing the screenshot in the task switching UI and immediately painting it when you come back to that app gives just enough illusion of continual running and instant response that most people don’t notice most of the time.
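
            To make the idea concrete, here’s a toy sketch of the trick for a canvas-based web app (nothing platform- or Firefox-specific; restoreAppState and redraw below are hypothetical stand-ins for whatever slow work actually happens on resume):

            ```typescript
            // Minimal sketch of "snapshot on suspend, repaint on resume".
            // Assumes the app renders into a <canvas>; restoreAppState/redraw are placeholders.
            const canvas = document.querySelector("canvas") as HTMLCanvasElement;
            const ctx = canvas.getContext("2d")!;
            let snapshot: string | null = null;

            document.addEventListener("visibilitychange", () => {
              if (document.visibilityState === "hidden") {
                // Cheap to do on the way out: keep the last rendered frame around.
                snapshot = canvas.toDataURL("image/png");
              } else if (snapshot) {
                // Paint the stale frame immediately so the user sees something familiar...
                const img = new Image();
                img.onload = () => ctx.drawImage(img, 0, 0);
                img.src = snapshot;
                // ...while the real (possibly slow) restore happens in the background.
                restoreAppState().then(redraw);
              }
            });

            // Hypothetical hooks for whatever the app actually does on resume.
            declare function restoreAppState(): Promise<void>;
            declare function redraw(): void;
            ```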

            1. 1

              Yeah, but what’s better: implementing complex machinery to make your slow software look faster, or implementing complex machinery to make your slow software actually faster? I’d argue that making the software actually faster is always better, and if it is faster, it’ll look faster too; no need to trick the user.

              I agree that there comes a point where you made your software as fast as it can be and all that remains is making it look faster, but that still makes for disappointing articles to me. I prefer reading about making software faster than reading about making software perceptually faster.

              1. 5

                What’s better is for it to be faster and more usable to the user, regardless of the method. The above-noted screenshotting/painting is more than a trick. It gives users the ability to read and ingest what was already on the screen, which gets them back to what they were doing faster. That’s much more important than, say, a 50% reduction in load time from 2s to 1s. Those numbers are satisfying for people who love to look at numbers, but they really don’t mean anything to the end-user experience.

                1. 2

                  That’s the thing: sometimes speed isn’t a good thing. For instance, you could have your UI draw to the screen as fast as possible, but if you do that, you’ll end up with screen tearing, which makes the user experience worse. If you slow things down a tad (which doesn’t consume any resources, because the software is just waiting), the UI gives the perception of working better. Also, some slowdowns are there to give feedback to the user, such as animations when you click buttons, or resize things: these give the perception that something is happening, and create a causal link in the user’s head between what they just did and what’s happening, which is harder to get when something just appears out of nowhere.

                  It’s not about tricking the user, even if there happens to be some smoke and mirrors involved, but about giving the user feedback. People like things to be fluid (which is what screenshotting a window for fast starts gives you), not abrupt. You might say that you’d be OK with this, but to give you a real-world example: if you were taking a taxi, would you be OK with your driver taking hard turns even if it got you to your destination a bit faster? Unless you were under severe time pressure, probably not.

                  If you want to be genuinely disappointed, there are user interfaces out there that introduce delays for other reasons. You’ve probably encountered UIs in the wild that seem to take a longer time to do things than seems reasonable, such as giving the result of some sort of calculation or some search results for flights or hotel booking. Those delays are there not because they serve a purpose, but to increase trust in the result. This is because people’s brains are broken, and if you give them an answer straight away, it seems as if you’re not doing any work, which makes the result less trustworthy. However, if you introduce a short delay or give the results back in chunks, it gives the perception that the machine is doing real work, thus making the results more “trustworthy”.
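
                  As a toy sketch of that “labor illusion” pattern (search here is a hypothetical backend call that might return almost instantly; the only added “work” is a minimum delay):

                  ```typescript
                  // Sketch of an artificial minimum delay before showing results.
                  // search() is a hypothetical fast backend call; the UI only shows its
                  // results after at least MIN_DELAY_MS have elapsed.
                  const MIN_DELAY_MS = 1500;

                  const delay = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

                  async function searchWithLaborIllusion(query: string): Promise<string[]> {
                    const [results] = await Promise.all([search(query), delay(MIN_DELAY_MS)]);
                    return results;
                  }

                  // Hypothetical fast search; the point is that it may return in milliseconds.
                  declare function search(query: string): Promise<string[]>;
                  ```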

                  So no, faster is not always better, much as we might wish it to be.

                  1. 1

                    For instance, you could have your UI draw to the screen as fast as possible, but if you do that, you’ll end up with screen tearing, which makes the user experience worse. If you slow things down a tad (which doesn’t consume any resources, because the software is just waiting)

                    This is a bad example. Doing things as fast as possible and then waiting for the next frame is the best thing to do: it lets the CPU go back to idling and preserves battery. Making the software faster here means more time idling, which means more battery saved.
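
                    In browser terms that’s just the standard requestAnimationFrame loop; updateAndDraw below is a hypothetical stand-in for whatever per-frame work the app does:

                    ```typescript
                    // Sketch of vsync-paced rendering in a browser: do the frame's work as fast
                    // as possible, then yield; requestAnimationFrame only wakes the code up at
                    // the display's next refresh, so the CPU idles (and saves power) in between.
                    function frame(timestamp: number): void {
                      updateAndDraw(timestamp);       // finish the frame quickly...
                      requestAnimationFrame(frame);   // ...then sleep until the next vblank
                    }
                    requestAnimationFrame(frame);

                    // Hypothetical per-frame work.
                    declare function updateAndDraw(timestamp: number): void;
                    ```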

                    Also, some slowdowns are there to give feedback to the user, such as animations when you click buttons, or resize things

                    I hate animations and always disable them when I can. I understand that other people feel differently about them, but I don’t care, that still makes reading articles about perceptual performance improvements disappointing when I go in expecting actual performance improvements.

                    You might say that you’d be OK with this, but to give you a real-world example: if you were taking a taxi, would you be OK with your driver taking hard turns even if it got you to your destination a bit faster?

                    That’s a bad example. Having abrupt screen changes is different from being thrown around in a car.

                    such as giving the result of some sort of calculation or some search results for flights or hotel booking. Those delays are there not because they serve a purpose, but to increase trust in the result.

                    Making things perceptually slower is not what we are talking about. We are talking about making things perceptually faster.

                    So no, faster is not always better, much as we might wish it to be.

                    Making your software actually faster when you want it to be perceptually faster is better than just making it perceptually faster. That was my point and I don’t think any of your arguments proved it wrong.

              2. 2

                It’s a legacy name.

              3. 6

                Does anyone know when Fission is shipping in stable? All I can find is “later this year”.

                1. 7

                  I think it depends on hardware capabilities and early results from beta testing. These big rewrites have a serious risk of being negatively impacted by third party software (cough cough antivirus) and so on.

                  My bet is before December. I’ve already been running it very successfully on Nightly for half a year. You can enable it manually if you want.

                  1. 5

                    If I were to enable it, would that help via telemetry?

                    1. 4

                      Yeah, telemetry would get us performance metrics, for example, but if you don’t want that you can also submit crash reports. Though I hope there are none :-)

                      1. 2

                        And it’s on. Gotta say I really like the about:processes page too. Really cool to see a per-domain view of memory usage. Definitely feels a lot snappier out of the gate.

                        For anyone reading along: https://wiki.mozilla.org/Project_Fission

                2. 2

                  https://data.firefox.com/dashboard/user-activity

                  Sadly, soon there will be nobody left to experience any speedups; maybe it is time to rethink their development.

                  1. 15

                    Oh please. Do you really believe they should just give up because they now have >100M active clients in the last month?

                    1. 3

                      No, maybe it is time to stop cutting features left and right, making it more like Chrome, and focusing on things unrelated to web browsers.

                      1. 2

                        And allowing direct-to-Firefox donations.