1. 32
  1. 30

    I’m glad that there’s attention being paid to the impact on battery life.

    There’s a UX reason too. I find animations, even ones as slow as 1 Hz, distracting when they’re anywhere in my field of view. Having something constantly changing on screen that isn’t meant to be the primary focus of my attention is, well, distracting.

    On a not quite tangential note, Juri Linkov’s page about turning off cursor blink has improved my life.

    1. 21

      I wish this kind of thing was treated as a real accessibility issue and taken more seriously by software vendors. I can’t count how many times my head has involuntarily jerked to the side because my mail client has flashed a “Checking for new mail” message up on my other monitor for a split second. (I have ADHD, so it really is an involuntary reaction; “just ignore it” is not something I can do.)

      1. 17

        I think even without ADHD it’s hard to ignore things moving on your screen. Animated emojis, GIFs, etc. are hard, if not impossible, to get used to.

        And when it’s something that shows you stuff that is sometimes of interest, like new mail, it’s even worse.

        I don’t want to know how much focus time has been lost due to such things.

        1. 4

          I agree. I find it painful to read a lot of news articles without an ad-blocker these days due to the sheer number of moving things on the screen.

        2. 6

          Too fuckin’ right.

          This reminds me that I would like to high-five whoever on the iOS team realised that “reduce motion” was an a11y thing and put it in the OS.

        3. 8

          It still feels weird to me, I guess, that the arguments about battery life and performance and distraction are used not to suggest a default-off state, but to rule out ever allowing the thing to be on at all. The arguments about multi-user Terminal Server instances are particularly disingenuous, given the high percentage of de facto single-user Windows deployments in the wild and the fact that corporate/multi-user deployments basically always disable a bunch of things via policy configuration.

          Couple that with the fact that Apple does give an indicator of seconds on what should, theoretically, be more resource-constrained devices, and it’s just really difficult for me to accept the apparent hard “no” on this from Windows. (Apple’s implementation is for the iOS clock app’s icon to be a functioning analog clock, complete with a second hand sweeping around in more or less real time — accurate enough that I’ve used it to set and sync up clocks on appliances around my home.)

          1. 4

            It’s a bad option. There’s no point adding a config setting to let the user do something bad on purpose (except for the Hot Dog Stand colour theme, that thing was a work of art and it needs to be reinstated).

            The arguments about Terminal Server are not disingenuous. Lots of people who do run multi-user Terminal Server instances have really expensive support contracts. Those people pay Microsoft a boatload of money by buying loads of Client Access Licenses.

            Apple does give an indicator of seconds on what should, theoretically, be more resource-constrained devices

            Your iPhone spends most of its time with the screen completely off. Apple have some options that Microsoft don’t, like putting in heterogeneous CPUs so the power-hungry CPUs stay completely off.

            1. 9

              Your iPhone spends most of its time with the screen completely off.

              macOS also has an option in system settings to show seconds in the time, which would be equivalent to any Windows laptop or desktop.

              I’m with ubernostrum, it feels strange. As a user, why shouldn’t my computer show seconds in the taskbar if that’s useful to me? My computer is a sophisticated device reflecting decades of continuous development by both Intel and Microsoft. Now it’s somehow a problem to update a number on a screen once per second, something that was straightforward in the 80s?

              Raymond doesn’t go into real detail about why this operation is so expensive and I suspect that’s where the trouble lies. For whatever reason, probably layers of ancient abstractions that are difficult to unpick, this simple operation happens to be rather expensive on Windows. They’ve invented a narrative that users don’t need it because it’s an easier justification than fixing whatever made it slow in the first place.

              1. 5

                I don’t know enough about Windows’ internals but it’s worth remembering that Windows is a general-purpose OS that runs on general-purpose hardware. Not all of that hardware has the power management capabilities of Apple’s hardware, and unlike Apple, Windows is bound to support all sorts of configurations – good ones and bad ones, most of which no one at Microsoft has ever seen or tested. A generic power management framework that never has to target more than a dozen or so configurations, all of them specifically tailored for that platform’s power management requirements to some degree, is bound to allow more flexibility than one that has to target pretty much everything.

                Updating things on the screen once per second is not much more complicated than it was in the 1980s, but explaining to some users why their battery life suddenly dropped (and figuring it out over the phone, no less) is a whole other story. And it’s particularly hard to justify when most of the things people need seconds in a taskbar for can be achieved just as well – in fact better – with the timer app that Windows has been shipping by default for like ten years now.

                (Edit: also, as @Jaruzel pointed out here, you can actually do it, so it really is a case of “it’s off by default”, more or less. I vaguely remember having a seconds display in the taskbar way back when I was using Windows 2000 but I don’t recall if it was available via a registry hack, or I installed some special-purpose tool for it…)

                1. 5

                  The option to have seconds is not a new thing on macOS. I have had it enabled since OS X 10.2 on a 1.25 GHz G4. The thing that makes it efficient on macOS is that the kernel supports timer coalescing and exposes this from kqueue on upwards. You can ask for a notification in a second, plus or minus 100 ms, and it will fire early or late depending on whether other things need the CPU to be awake. Generally, on a system with someone sitting in front of the display, at least something will need to wake up the CPU at least once per second. Updating the clock is not likely to use more RAM than fits in the L1 cache these days (you have a set of ten glyphs that you cycle through, they’re all in a single texture, and you’re just updating the coordinates for the next frame that’s rendered on the GPU).
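
                  Not any real clock code, just a minimal sketch of asking for that kind of coalescable one-second tick through GCD, which sits on top of the kqueue timer machinery described above; the 100 ms leeway mirrors the figure in this comment:

                  ```c
                  #include <dispatch/dispatch.h>
                  #include <stdio.h>
                  #include <time.h>

                  /* Fires roughly once a second; the kernel is free to shift it within the
                   * leeway window so it lines up with other wakeups. */
                  static void tick(void *ctx) {
                      time_t t = time(NULL);
                      char buf[9];
                      strftime(buf, sizeof buf, "%H:%M:%S", localtime(&t));
                      printf("%s\n", buf);
                  }

                  int main(void) {
                      dispatch_source_t timer = dispatch_source_create(
                          DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
                          dispatch_get_global_queue(QOS_CLASS_UTILITY, 0));
                      /* period = 1 s, leeway = 100 ms: "in a second, plus or minus 100 ms" */
                      dispatch_source_set_timer(timer, DISPATCH_TIME_NOW,
                                                1 * NSEC_PER_SEC, 100 * NSEC_PER_MSEC);
                      dispatch_source_set_event_handler_f(timer, tick);
                      dispatch_resume(timer);
                      dispatch_main(); /* never returns; the timer keeps firing */
                  }
                  ```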

                  The system load from enabling seconds display hasn’t been noticeable on any Mac I’ve owned.

                  On Windows 3.0, the Clock app could display a second hand. You could kill my 8086 or 386 system by launching enough instances of the clock. The difference between a clock with and without the second hand on an 8 MHz 8086 with 640 KiB of RAM was noticeable, but the clock used something like 10% of my total RAM, so it wasn’t something I’d leave running most of the time. By the time I had a Pentium, a clock with seconds was not a performance problem at all.

                  1. 3

                    Oh, the way I remember it, timer coalescing on Windows was a mess, for sure, especially on multiprocessor systems. IIRC at some point it was exposed through a separate API, so legacy applications didn’t benefit from it.
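
                    For what it’s worth, Windows did eventually grow a coalescing-aware timer API of its own: SetCoalescableTimer (Windows 8 and later) lets a caller say explicitly how late a timer may fire, which plain SetTimer has no way to express — possibly that’s the separate API being remembered here. A minimal sketch, with the tolerance picked arbitrarily for illustration:

                    ```c
                    #define _WIN32_WINNT 0x0602 /* SetCoalescableTimer needs Windows 8+ headers */
                    #include <windows.h>
                    #include <stdio.h>

                    /* Invoked via DispatchMessage whenever the (possibly delayed) timer fires. */
                    static void CALLBACK on_tick(HWND hwnd, UINT msg, UINT_PTR id, DWORD tick_ms) {
                        printf("tick at %lu ms\n", (unsigned long)tick_ms);
                    }

                    int main(void) {
                        /* 1000 ms period, plus permission to fire up to 100 ms late so the
                         * scheduler can batch it with other pending wakeups. */
                        SetCoalescableTimer(NULL, 1, 1000, on_tick, 100);

                        MSG msg;
                        while (GetMessage(&msg, NULL, 0, 0) > 0) {
                            TranslateMessage(&msg);
                            DispatchMessage(&msg); /* delivers WM_TIMER to on_tick */
                        }
                        return 0;
                    }
                    ```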

                    But I was thinking about it at a slightly different level. For example, a while back, I had the “pleasure” of troubleshooting a curious case of serial hard drive failures. We had some obscure Toshiba laptops that we handed out at work (this was around 2011 so desktops were still pretty common, but some folks wanted to work from home, or had to work while traveling), and they were chomping down hard drives for everyone except us nerds. One of them burned through four of them during the two-year warranty, which is when we figured out what was happening.

                    The (BIOS? hard disk?) firmware tried to be smart and spin down the drives when it got into some low-power mode. But it was also really eager: with Windows 7’s default energy saving profile, listening to a three-minute song could cause the disk to spin down as many as ten times, depending on encoding and the like – unless something else kept the CPU really busy, like, say, running a compile job. That’s why us nerds got longer lives out of them (well, either that, or we ran Linux, so the power management policy was YOLO). A combination of silent hard disks and unbearably loud fans meant that no one realized what was happening until the first warranty ran out – I replaced the hard disk with some cheap, ancient drive I had lying around, which was loud enough that I could hear it spin up and down under my fingers.

                    This is obviously an extreme example, but it’s extreme because it was instantly visible in the form of dying hard drives. It was also probably quite detrimental to battery life, though, as frequently spinning up a motor is actually pretty power-hungry. But there are plenty of similar quirks that don’t result in frequent hardware failure, so they go unnoticed. System-level power management has been (somewhat?) unified and centralised only recently. Until a few years ago it was distributed throughout the various components in the system and it was remarkably finicky.

                    (Edit:) Apple’s stack was far more immune to this sort of weirdness simply because it had to deal with far less hardware variability. While it’s definitely not like displaying seconds would (should?) involve that many moving parts in a system, I wouldn’t be surprised if it had all sorts of weird side effects on some hardware.

              2. 2

                There are a lot of things you wouldn’t want in a Terminal Server instance. Microsoft doesn’t remove all of them completely from Windows, though.

                And this is really the heart of the problem: someone’s asking for what seems a perfectly reasonable feature for their case — which is almost certain to be a single-user machine — and is told that the feature can’t even exist as an option because someone else’s use case wouldn’t want it turned on.

                Your argument about the iOS clock also doesn’t really work — if it’s possible for another OS vendor to avoid consuming an entire power station’s worth of electricity that’s apparently required to display a clock with seconds, then it’s possible for Microsoft to do it, too. Just change the mode of the thing when the display is off/asleep so that it doesn’t bother trying to keep updating. After all, they’ve already got the clock there displaying hours and minutes — if there’s a way to let the system sleep despite that, then there’s a way to let it sleep with seconds, too.

            2. 2

              Yup. Reinstalled Windows 10 on an old PC to get rid of cruft. Saw the little weather report on the task bar and disabled it right away.

              Of course, nothing compared to Windows 8’s default start menu, with more than a dozen widgets constantly animating. (shudder)

              1. 1

                I’ve always kept the blinking colon between hours and minutes turned on, just so I can be sure when things have really crashed.

              2. 9

                One can easily put seconds on the clock in the taskbar on Linux. I never do, though, simply because the last thing my 7-millisecond-attention-span brain needs is another blinking thingy in my field of vision.

                1. 6

                  FYI, showing seconds CAN be done. It’s a registry key change. Some of those ‘Tweak Windows’ apps can set it for you if you want it.
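
                  For reference, the value usually cited for this on Windows 10 is ShowSecondsInSystemClock under the current user’s Explorer\Advanced key, with Explorer needing a restart to pick it up. A small sketch of setting it programmatically, assuming that value name is the right one for your build:

                  ```c
                  #include <windows.h>

                  int main(void) {
                      HKEY key;
                      DWORD enabled = 1; /* 1 = show seconds, 0 = hide them again */
                      if (RegCreateKeyExW(HKEY_CURRENT_USER,
                              L"Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Advanced",
                              0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL) != ERROR_SUCCESS)
                          return 1;
                      RegSetValueExW(key, L"ShowSecondsInSystemClock", 0, REG_DWORD,
                                     (const BYTE *)&enabled, sizeof(enabled));
                      RegCloseKey(key);
                      return 0; /* restart explorer.exe (or sign out and back in) to see it */
                  }
                  ```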

                  1. 2

                    This kills me about Windows and OSX.

                    They’re “user friendly” until you want to change something, and then you’re jumping around sketchy websites, entering registry hacks from strangers.

                    “Edit this config file,” on Linux may not be the best UI, but at least it’s (usually) documented and consistent for all of the config settings.

                  2. 5

                    Small, frequent animations can have a remarkably outsized impact on battery life on any device, from a phone to a laptop.

                    A lot of the power saving in modern machines comes from turning things off as often as possible. You are (up to a point, presumably) better off running a system twice as fast for half as long: even though running some component X times faster uses more than X times as much power (clock speed vs. power is superlinear), you are often keeping many other things running, and using power, at the same time.

                    The result is that in isolation more power for a shorter period ends up coming out ahead.

                    The problem is that there is a power cost involved in bringing full performance up, and spinning things back down. Those costs mean that if your down time is not sufficiently contiguous, trying to spin up and down will end up being both slower, and using more power.

                    As a kind of equivalent scenario, imagine you have a task that takes 1000 ms to complete. You could just run it and be done. But imagine it’s blocking the UI because it’s JS running on a webpage (again, we’re looking at the general idea, not low-level power management, though they are similar). You solve this by splitting the task in two and using a zero-delay timeout after the first part to trigger the next. You get an immediate improvement: 2 fps! You realize 2 fps is still a bit sluggish, so you divide it into 100 steps, and now everything scrolls smoothly and all is good.

                    If you look now, though, you’ll see the total runtime is more than 1000 ms (probably longer than the theoretical ~1100 ms), and if you made things even more extreme and split it into 1 ms sections you’d see an even larger perf impact. That impact is similar to what the OS is trying to manage as it changes power states in response to demand.
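
                    A toy way to see the effect (a hypothetical sketch: a busy-loop stands in for real work, and a zero-length sleep stands in for the zero-delay timeout):

                    ```c
                    #include <stdio.h>
                    #include <time.h>

                    static double now_ms(void) {
                        struct timespec ts;
                        clock_gettime(CLOCK_MONOTONIC, &ts);
                        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
                    }

                    static void burn_ms(double ms) { /* busy-loop standing in for real work */
                        double end = now_ms() + ms;
                        while (now_ms() < end) { }
                    }

                    int main(void) {
                        double t0 = now_ms();
                        burn_ms(1000.0); /* one contiguous chunk */
                        printf("one chunk:   %.0f ms\n", now_ms() - t0);

                        struct timespec zero = {0, 0};
                        t0 = now_ms();
                        for (int i = 0; i < 1000; i++) { /* 1000 slices of ~1 ms each */
                            burn_ms(1.0);
                            nanosleep(&zero, NULL); /* the "zero delay" yield between slices */
                        }
                        printf("1000 slices: %.0f ms\n", now_ms() - t0);
                        return 0;
                    }
                    ```

                    The sliced total typically comes out longer than the single chunk, and the gap grows as the slices shrink; that’s the same shape of overhead the OS pays when it keeps bouncing between power states.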

                    The problem with a lot of “small” animations and similar is that they don’t take long to run, but they have to be done repeatedly, and frequently. For animation especially, the frequency is often very close to the point where you can’t drop power states and the like.

                    There are things you can do as a developer to mitigate this. I have never really been an app developer (UI isn’t my jam), but in things like WebKit it is necessary to handle many different sources of animation. It is critical for power reasons that WebKit does extra work to coalesce different “animation ticks”. If multiple things have ticks at the “same” period, they get automatically coalesced into a single system timer (the OS is also doing this, but has less visibility into the whys of any given timer). Even if you have ticks that are out of sync, if the period is sufficiently similar and the phase difference is small you can drift things together. All of this is necessary for battery life on modern devices.
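
                    The grouping itself doesn’t have to be clever. A hypothetical sketch of the core idea: sort the requested deadlines, then serve every tick that falls within a shared tolerance of its group’s earliest deadline from one underlying timer.

                    ```c
                    #include <stdio.h>
                    #include <stdlib.h>

                    static int cmp_double(const void *a, const void *b) {
                        double x = *(const double *)a, y = *(const double *)b;
                        return (x > y) - (x < y);
                    }

                    /* Deadlines are "ms from now"; anything within `tolerance` of the earliest
                     * deadline in its group fires (slightly early) off that group's one timer. */
                    static void coalesce(double *deadlines, int n, double tolerance) {
                        qsort(deadlines, n, sizeof *deadlines, cmp_double);
                        int i = 0;
                        while (i < n) {
                            double fire_at = deadlines[i]; /* one real timer per group */
                            printf("timer at %.0f ms serves:", fire_at);
                            while (i < n && deadlines[i] - fire_at <= tolerance)
                                printf(" %.0f", deadlines[i++]);
                            printf("\n");
                        }
                    }

                    int main(void) {
                        /* e.g. three animations asking for ticks at nearly the same time */
                        double requests[] = {1000, 1004, 1010, 1500, 1502, 2990};
                        coalesce(requests, sizeof requests / sizeof *requests, 15);
                        return 0;
                    }
                    ```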

                    Even modern wireless protocols are designed along these lines. Take Bluetooth LE: the protocol works with fixed-length transmission windows that have relatively short link periods. I have mercifully never worked at the actual hardware level, but when working on software that had to consider Bluetooth LE links, I recall being given very specific amounts of time (in ms) in which all the work had to be done and the data made ready. If the data was not ready when the window rolled around, it couldn’t be sent.

                    It’s super annoying to deal with a connection like that, but on the other hand it does mean you can actually have a connection at all, where previously power usage would have ruled it out.

                    Anyway, I think I had a point when I started typing this, but I got distracted by other things multiple times and I think I started it 5+ hours ago now :)

                    The core idea is: lots of “short” or “simple” things on modern hardware can easily end up with outsized power usage, not because of the amount of runtime, but simply because they happen too frequently. Unfortunately, a lot of animations would ideally happen at periods that interfere with modern power states and the like.

                    1. 4

                      What sort of system does not have desktop redraws for an entire minute? It seems to me the solution here is just to put the computer in sleep mode, at which point it won’t have to draw the taskbar anyways. If the computer is in active use, it will certainly have many tasks to do every second, let alone minute, and the clock redraw will be an irrelevant footnote.

                      1. 3

                        A lot of this thread has me feeling like I stepped into some sort of alternate universe where calculating and displaying the time with seconds is somehow the most resource-intensive operation imaginable. At least one of Microsoft’s major competitors has managed to do this in both their desktop OS and their mobile OS, for crying out loud. And last I checked, Edge doesn’t forbid you just opening a browser and writing a setTimeout JS loop to display the time with seconds, nor does it cause the machine to catch fire or consume all electricity and RAM within a 1000-kilometer radius. So the issue is not “it can’t be done” nor is it “can’t be done without unacceptable performance”. It’s most likely “can’t be done (or can’t be done with acceptable performance) with the way Windows is architected”, and everything else is an attempt to distract from that.

                        1. 4

                          I can’t speak to Edge, but I can say that in WebKit, timers (by which I mean everything periodic: setTimeout, setInterval, GIFs, SVG animations, CSS animations, and presumably myriad other things) are aggressively coalesced. Timers for background tabs are throttled - you can ask for a 1 ms timeout, but you are not going to get anything close, if anything at all.

                          In the past, timers/callbacks asking for ridiculously short timeouts would get them for a few ticks before being throttled back to 15+ ms. For some sites the throttling was needed because of bad code - e.g. sites that would just be burning 10-20 ms of CPU per tick, so battery life was hosed no matter what - but others would just be updating a tiny piece of UI and running for fractions of a ms. Even for those sites, though, the battery drain from not throttling is insane.

                          1. 1

                            It sounds to me like this is mostly just an issue of not being able to designate tasks as “low-prio background” at the framework level? Like, if you could update the taskbar clock without reflexively kicking the CPU into high power for half a second, would that fix things?

                      2. 3

                        I am very sorry. But this does not make sense in 2022.

                        • Multi-user systems do not need to worry about battery life; they are servers in the cloud/VM farm.

                          • Additionally, if repainting a 300x600 (ish?) pixel square almost identically across, say, 1000 desktops is a struggle, I’d start getting very shirty with the rendering team. I have 16 CPUs and 32 GB of RAM in my laptop, running at up to 2.5 GHz. That is a lot of cycles for some basic pixel blitting. If updating the clocks is taking all of that CPU, that says some pretty unpleasant things about the state of the code, and more…
                        • Coalescing screen repaints is a thing that can be done

                        • A server configuration is not a single-user configuration; variances are possible

                        • On-battery and plugged-in power configurations are conventional settings; repaint behaviour could vary between them as well.

                        I don’t like second-hand ticks by default; they are too twitchy for me when I’m just sitting and thinking. If that’s the real answer - the UX team has decided not to do it - then fine. But these technical reasons don’t pass the sniff test for me. Very sorry.

                        1. 2

                          There are 800 million Windows devices. If you assume 1% of them have some weird power management issues where updating the clock every second would prevent the CPU from entering a low power state, costing 10 watts, then the world saves 80 megawatts of electricity by not having this setting. Equivalent to an entire small power plant, or several jet planes running 24/7!

                          1. 2

                            I remember my first multitasking experience.

                            On a basic A500 (68000, 512KB “Chip RAM”), I opened tens of clocks (Workbench 1.3’s :utilities/clock), and they were all running without issue.

                            Impressed with the level of bloat Windows 95 must have had, to not be able to update a single clock in the taskbar once per second, on an actual 386 with its 32bit ALU and higher IPC.

                            These days, with a multicore 64bit GHz+ machine, I enjoy i3status and its clock updating every 5 seconds.

                            1. 9

                              Impressed with the level of bloat Windows 95 must have had

                              You should try to understand things instead of insulting them. It could easily do it, but (as the linked article describes) it came with a cost, and that cost meant that in certain circumstances it’d harm performance on something the user actually cared about, in the name of something that was expendable.

                              Could you open tens of clocks on your old system while running another benchmark task that actually needed all the available memory without affecting anything?

                              With the one-minute update, the taskbar needed to be paged in only once a minute, then could be swapped back out to give the running application its memory back. A few kilobytes can make a difference when you’re thrashing to the hard drive and back every second.

                              1. 4

                                it’d harm performance on something the user actually cared about in the name of something that was expendable.

                                AmigaOS isn’t just preemptive, it also has hard priorities. If a higher priority task becomes runnable, the current task will be instantly preempted.

                                Could you open tens of clocks on your old system while running another benchmark task that actually needed all the available memory without affecting anything?

                                “All the available memory” means there’s no memory for clocks or multitasking to begin with.

                                Taking over the system was as easy as calling exec.library’s Disable(), which disables interrupts, then doing whatever you wanted with the system. This is how e.g. Minix 1.5 would take over the A500.

                                Alternatively, it is possible to disable preemption while still allowing interrupts to be serviced, with Forbid().

                                With the one minute update, the taskbar needed only be paged in once a minute

                                Why does the taskbar use so much ram that this would even matter, in the first place?

                                1. 2

                                  “All the available memory” means there’s no memory for clocks or multitasking to begin with.

                                  Windows 95 supported virtual memory and page file swapping. There’s a significant performance drop off when you cross the boundary into that being required, and the more times you cross it, the worse it gets.

                                  Why does the taskbar use so much ram that this would even matter, in the first place?

                                  They were squeezing benchmarks, and even a small number affects those. Maybe it was more marketing than anything else, but the benefits of showing seconds are dubious anyway, so they decided it wasn’t worth it.

                              2. 3

                                I’d imagine context switching is much faster on Amiga OS, since there’s only a single address space and no memory protection.

                                1. 2

                                  The 68000 has very low and quite consistent interrupt latency, and AmigaOS did indeed not support/use an MMU, but I don’t see how this is relevant considering how much faster and higher clocked the 80386s that win95 requires are.

                                  1. 3

                                    I think maybe you give the 80386 too much credit. I don’t think the x86 processors of the day were really that much faster than their m68k equivalents, and the ones with higher clock speeds were generally saddled with a system bus that ran at half or less than the speed of the chip. Add on the cost of maintaining separate sets of page tables per process, and the invalidation of the wee little bit of cache such a chip might have when switching between them, and doing all of this on a register-starved and generally awkward architecture.

                              3. 1

                                Amazing ad post; however, ever since Windows 10, when right-clicking the taskbar you have to wait half a second for the menu to show up. It was instant before. Why isn’t Microsoft fixing their product before thinking about new features?