Induced demand strikes again. Making things better often causes changes in behavior that make things worse than they were. These are fascinating issues that often highlight our lack of understanding of feedback cycles, emergent behavior, and unsteady oscillations that can look a lot like the turbulence models of fluid dynamics.
This resonates a lot. Just yesterday I was helping someone with their laptop; they couldn't get any work done because the machine became unresponsive after opening the browser and a couple of PDFs. I was shocked and tried to find solutions:
Use multiple profiles in the browser to avoid wasting memory with open tabs
Disable as many background services as possible while keeping the machine usable
Switch to Sumatra PDF instead of Adobe Reader
It pains me that we have reached a point where even the most basic work has become unbearable without the latest machine. It is also a new barrier for people who cannot afford a hardware solution to a software problem.
As for the slow web, simply install a "disable JavaScript" extension and only enable it on the tabs that need it. You'll be surprised how many news sites and the like improve drastically.
This hasn't been true for ages. Most sites won't render a god-damned thing without JavaScript.
We carefully monitor startup performance […] this test can never get any slower.
Is it literally as simple as assert(lastTime >= thisTime)? If the time randomly varies a little, how do you avoid spurious test failures? If you add some fudge factor like assert(lastTime + extraTime >= thisTime), do you end up allowing many small regressions?
It's not that simple; some of the way it works is documented at https://chromium.googlesource.com/chromium/src/+/master/docs/speed/addressing_performance_regressions.md but the data supporting this is gated to Google employees.
I’d guess that to avoid this they would run the benchmarks many times and define an acceptable average+stddev, or some other statistical measure.
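Something like the following sketch, say. It times the benchmark several times and fails only if the mean lands outside a noise band around a stored baseline; the run count, the baseline, and the three-sigma slack are made-up numbers for illustration, not anything Chromium actually uses.

    // Minimal sketch of a statistical regression gate, assuming a stored
    // baseline and a fixed sample count; all names and numbers are invented.
    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Hypothetical workload standing in for "startup".
    static void WorkloadUnderTest() {
      volatile double sink = 0;
      for (int i = 0; i < 1000000; ++i) sink += std::sqrt(static_cast<double>(i));
    }

    static double TimeOnceMs() {
      auto start = std::chrono::steady_clock::now();
      WorkloadUnderTest();
      auto end = std::chrono::steady_clock::now();
      return std::chrono::duration<double, std::milli>(end - start).count();
    }

    int main() {
      const int kRuns = 20;                 // assumed sample count
      const double kBaselineMeanMs = 12.0;  // assumed stored baseline
      const double kSigmaSlack = 3.0;       // assumed tolerance, in std deviations

      std::vector<double> samples;
      for (int i = 0; i < kRuns; ++i) samples.push_back(TimeOnceMs());

      double mean = 0;
      for (double s : samples) mean += s;
      mean /= samples.size();

      double var = 0;
      for (double s : samples) var += (s - mean) * (s - mean);
      const double stddev = std::sqrt(var / (samples.size() - 1));

      // Fail only if the new mean sits outside the noise band around the baseline.
      const bool regressed = mean > kBaselineMeanMs + kSigmaSlack * stddev;
      std::printf("mean=%.2f ms stddev=%.2f ms regressed=%s\n", mean, stddev,
                  regressed ? "yes" : "no");
      return regressed ? 1 : 0;
    }

The worry raised above still applies, though: any slack wide enough to absorb noise can also absorb a string of small real regressions, so the baseline has to be managed deliberately rather than just copied from the previous run.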
We have this fun problem where one of the speed tests is so tight that it fails if the CI is completely overloaded.
But it’s rare enough that it’s a good sign without any real false positives.
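For illustration, the kind of check that behaves this way is a hard wall-clock budget along these lines; the workload and the 50 ms limit are invented for the sketch.

    // Sketch of a tight wall-clock budget like the one described above.
    // steady_clock measures wall time, which also counts time the process
    // spends waiting for a CPU, so a heavily loaded CI host can blow the
    // budget without any code change.
    #include <cassert>
    #include <chrono>
    #include <cmath>

    static void HotPath() {
      volatile double sink = 0;
      for (int i = 0; i < 2000000; ++i) sink += std::sqrt(static_cast<double>(i));
    }

    int main() {
      const auto start = std::chrono::steady_clock::now();
      HotPath();
      const double elapsed_ms =
          std::chrono::duration<double, std::milli>(
              std::chrono::steady_clock::now() - start).count();
      // Tight, machine-dependent budget: rarely trips, but can under heavy load.
      assert(elapsed_ms < 50.0);
      return 0;
    }

That sensitivity to machine load is exactly why the rare failures described here show up when the CI is overloaded rather than when the code actually regresses.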
I mean, that’s what we do at work. We make sure every release is faster than the previous, with very few exceptions. Then again, the whole point of our team is performance…