1. 50
  1. 26

    Performance is hard!

    I strongly suspect that it’s not. When software is slow to the point where users perceive it, it’s usually because it is 1000 (1 thousand) times slower than the theoretical maximum. Or worse. So while attaining that theoretical maximum is both hard and pointless, how about reaching for “merely” 10% of that speed? Heck, even 1% would be a massive improvement in many cases.

    Achieving that level of performance doesn’t seem hard to me. I would say the hard part is more about writing simple programs and doing away with all the crap we were taught we need, but don’t.

    1. 13

      I don’t think it’s as simple as “crap we were taught we need” … I’d say performance isn’t hard in small programs, but it IS very hard in big programs, and big programs tend to be the ones we use.

      I think there is a phenomenon very related to Spolsky’s Good Software Takes 10 Years

      https://www.joelonsoftware.com/2001/07/21/good-software-takes-ten-years-get-used-to-it/

      i.e. I see this pattern:

      1. A successful big program usually starts out as a smaller one that performs well, for say 5 years. e.g. Chrome or Firefox were both great when they were “new”. Or famously Winamp, or various chat clients.

      2. The “market” puts pressure on the program to adopt new features. If it doesn’t adopt those features, then there’s a DIFFERENT program that will, and THAT one will become the big slow one that we use. i.e. if Firefox stayed small and lean (as it was in ~2005 when I started using it), we would use it even less than we do now, compared to Chrome.

      3. And then the architecture of the program is “sheared” and distorted by all these new features. It no longer resembles the original program.

      4. The new developers don’t understand the code like the original ones did. They ONLY know how to add features in a suboptimal way that is slow.


      In other words it reminds me of this good observation from Ousterhout:

      https://web.stanford.edu/~ouster/cgi-bin/sayings.php

      The most important component of evolution is death

      It is easier to change existing software than to build new software, so software tends to live a long time. To a first approximation, software doesn’t die (compare this to the hardware in a computer, which is completely replaced every few years). At the same time, it is difficult to make major structural improvements to software once it has been shipped, so mistakes in early versions of the program often live forever. As a result, software tends to live too long: just good enough to discourage replacement, but slowly rotting away with more and more problems that are hard to fix. I wonder if the overall quality of computer software would improve if there were a way of forcing all software to be replaced after some period of time.

      So basically all the big programs that we use are 10+ years old, and they have architecture rot. They live too long and are hard to replace!

      This also reminds me of some recent complaints about LLVM’s speed. 10 years ago it was very “fresh” compared to GCC; it doesn’t seem as true now.

      Maintaining performance in these big codebases like LLVM and Firefox is basically a problem that I think we simply do not know how to solve.


      I wasn’t going to mention Oil, but now that I’m here – this is basically why I wrote it in 40K lines of Python, with a layer of indirection to C++, which gives us “leverage”. This is compared with 140K+ lines of C in bash, plus another 100K or 200K for the Oil language.

      I seem to get some “blank stares” at the 40K line claim … but to me it is extremely important, because a 40K line program has a drastically different “feel” than a 200K line program.

      The 100-200K line program is where you start to not be able to add features in a way that performs well, in my experience. You are kind of at the mercy of the existing architecture, and you can only add features in a certain way.

      Depending on the type of program, 100K to 200K line programs also seem to take ~10 years (similarly with 1M-2M line programs). I don’t really know of any widely used 200K line programs that were written from scratch in say 3 years (that are not “canned” code using a framework).

      1. 3

        I agree, and also in most cases the tradeoffs aimed at performance in the small are… not necessarily the opposite of, but mostly orthogonal to, those we need for performance in the large; and the performance of the humans doing the programming would require a third circle in that Venn diagram.

        1. 1

          The way I understand your comment is that software complexity tends to grow out of control, and that makes it hard to get it to perform reasonably. It seems we’re in agreement: “the hard part is more about writing simple programs and doing away with all the crap we were taught we need, but don’t”.

          I mean this is the hard part. Writing simple programs is not easy to begin with, and to remove the crap we need to identify it first. Given that one’s utter crap from hell is often another’s godsend, we need metrics. Ideally science when we can afford it. At least, some way to distinguish alternatives and reliably select the better ones.

          1. 3

            I think we are mostly in agreement, but I’m basically pushing back and saying “performance IS hard”, at least in the common context where you’re building something over a long period, with many constraints, and many users.

            “Everybody” knows how to profile a 1000 line program and find the hotspot, and rewrite a loop, etc.

            But pretty much nobody knows how to take a 10 or 20 year old behemoth like Chrome or LLVM and fundamentally improve its performance.

            I don’t think “restraint” is a good enough answer. It doesn’t really help us solve the problem.

            I think the main problem is that those programs are written by lots of people, and somewhere along the way the knowledge to make crosscutting and correct changes DIES. Performance improvements are such changes. I think it’s mostly a social / team problem, not one of individual discipline.

            Metrics are definitely important, and can be a great way to spread knowledge of how the code works.

        2. 12

          I think the measurement problem is real. From personal experience:

          This week, I sent out a call for testing for using snmalloc as the libc malloc in FreeBSD. I am really interested in any cases where it introduces performance regressions. I got a few replies of ‘it works great, thanks’, a few more of ‘I can’t build it’ (which are hopefully fixed now). Very few people actually have a way of measuring workloads that they care about to see if it’s made things different.

          For the last few years, I’ve been trying to get various folks to give me benchmarks of workloads that they actually care about for the CHERI temporal safety work so that we can measure how much we regress performance in exchange for temporal memory safety (it won’t be free, but hopefully cheap enough to turn on by default).

          1. 10

            One problem I’ve observed many times in open source projects is that people will reject an improvement if it doesn’t achieve the theoretical optimum, where the optimum for some people is always “the way it works right now”:

            • Users don’t know how to write their own tests, so adding tests to the template repo is pointless. Never mind that the tests have already highlighted bugs and will probably continue to prevent bugs in the future.
            • Somebody has been in the early stages of planning to one day support X for several years already, so we don’t want to improve any aspect related to X in the meantime. Never mind that the pie in the sky has yet to materialise after years.
            • The change improves performance by X%, but some other change could conceivably improve performance by more than X%. We don’t want to do both, so why bother with the X%? Never mind that users right now are suffering.
            • Using an automatic formatter will make it slightly harder to look for the origin of changes. So we’ll just continue not using a formatter, letting everyone add to the papercuts as we go.
            • Using version control means I have to add a whole bunch of process to developing and releasing the software. Never mind being able to sanely collaborate, or being able to actually review changes.
            1. 3

              The change improves performance by X%, but some other change could conceivably improve performance by more than X%. We don’t want to do both, so why bother with the X%? Never mind that users right now are suffering.

              What kind of thinking leads to not accepting even a marginal improvement now, while working on the larger improvement in the meantime?

              Using version control means I have to add a whole bunch of process to developing and releasing the software. Never mind being able to sanely collaborate, or being able to actually review changes.

              That’s an attitude I haven’t seen in 20 years. I thought that with the market penetration of git, version control was a no-brainer? A non-programmer friend of mine tried to do an HTML+CSS course a couple of years back, but gave up because the first thing they get you to do is create a GitHub account and install NodeJS; no matter how unsuitable for utter beginners all of this is, it’s considered table stakes nowadays.

              1. 3

                To your first question, at least “avoiding duplicate work” does that. Then add in some conflict avoidance so that negotiating for an exception to the rule becomes difficult and there you go.

            2. 3

              Achieving that level of performance doesn’t seem hard to me. I would say the hard part is more about writing simple programs and doing away with all the crap we were taught we need, but don’t.

              I agree in principle. In practice, you’ll often get blindsided by the way a program is used - certain usage patterns you may not have thought about can cause performance to degrade. Or the system just grows beyond its initial use. With databases, for example, as long as a table is small you won’t even notice the occasional sequential scan, and some tables stay that small forever. It’s only when the data grows large enough that the system slows down noticeably - and then you add the necessary index. It’s often hard to predict exactly in what way the data will grow in practice, and you can’t just add an index to every single column “just in case” - that would bloat the disk usage and slow down inserts and updates.
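
              As a concrete sketch of that last point (using sqlite3, with made-up table and column names), this is the kind of change I mean - the query plan flips from a full table scan to an index lookup the moment the index exists:

              ```python
              # Hypothetical orders table; names and sizes are illustrative only.
              import sqlite3

              conn = sqlite3.connect(":memory:")
              conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
              conn.executemany(
                  "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                  [(i % 1000, i * 0.5) for i in range(100_000)],
              )

              query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"

              # Before the index: SQLite reports a full scan of the orders table --
              # fine while the table is small, increasingly painful as it grows.
              print(conn.execute(query).fetchall())

              # The index trades disk space and slower writes for fast lookups.
              conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

              # After: the plan becomes a search using idx_orders_customer.
              print(conn.execute(query).fetchall())
              ```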

              1. 2

                That is certainly the impression I have.

                For example, I use bitwarden, the CLI tool can generate passphrases with bw generate -p.

                I like to generate 10 or 20 at a time and pick one that’s funny (an important use case), and that takes many seconds with bw. I had to sit there and wait, wondering how it could possibly be so slow. So to find out, I made a passphrase generator that’s not clever in any way, yet it’s about 6000 times faster than bw.
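
                The core of it is roughly this shape (a minimal sketch, not my actual code; the wordlist path and word count are just placeholders):

                ```python
                # Pick N random words from a wordlist (one word per line) and join them.
                # The path below is an assumption; use whatever wordlist you have.
                import secrets
                import sys

                def passphrase(wordlist_path: str, words: int = 4, sep: str = "-") -> str:
                    with open(wordlist_path, encoding="utf-8") as f:
                        candidates = [line.strip() for line in f if line.strip()]
                    return sep.join(secrets.choice(candidates) for _ in range(words))

                if __name__ == "__main__":
                    count = int(sys.argv[1]) if len(sys.argv) > 1 else 10
                    for _ in range(count):
                        print(passphrase("/usr/share/dict/words"))
                ```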

                But then, I wasn’t constrained by a language favoured by my employer (which alone could have ruined it with startup time) or any other such concerns.

                1. 1

                  All optimization is hard. First, you have to understand both the problem domain and the operational characteristics of your software. Then, you have to have a creative insight that takes advantage of various facts from both of these domains. Then you have to implement it correctly.

                  Then there’s the hidden cost of optimization, which is: now your code is not as obvious as it used to be. This is a constant tax on reading the code, which then affects further changes downstream, because you can’t modify code that you don’t understand.

                  So I think it’s more complicated than “performance is easy, we just don’t do it.”

                  1. 2

                    All optimization is hard.

                    I’m not talking about optimisation, I’m talking about non-pessimisation.

                    Then there’s the hidden cost of optimization, which is: now your code is not as obvious as it used to be.

                    Actually, John Ousterhout remarks in A Philosophy of Software Design that simpler programs tend to be faster than their more complex equivalents. I would personally suspect two reasons: first, simplicity often means wasting fewer CPU cycles on useless abstractions or otherwise unneeded bloat. Second, simpler programs are easier to actually optimise, if only because their bottlenecks are easier to spot.

                    1. 3

                      That book is full of speculation and opinions. For example, what data can you or John Ousterhout share about simple programs being faster than their complex equivalents? How do you define simple and complex?

                      By saying these things, you are being highly reductive and talking superficially about complex things, which makes me have zero trust in what’s being presented.

                      1. 2

                        That book is full of speculation and opinions.

                        The book is full of informed opinions. The guy has a few decades of programming under his belt including significant stuff, as well as hands-on teaching experience where he personally reviewed the code of his whole class over quite a few years. That gives him significant credibility in my book.

                        what data can you or Jon Ousterhout share about simple programs being faster than their complex equivalents?

                        Obviously I can’t speak for Ousterhout.

                        Personally, I have observed over my 15+ year career that more complex programs tend to be slow. Don’t ask me why, they just are. And in one case I could even compare an old program with a rewritten version. The old program used threads, the new one a simple, naive, single-threaded event loop. The new one was several times faster, on the same workload. So when I hear Ousterhout say that simpler programs are often faster, I’m like “duh, of course they are”.

                        How do you define simple and complex?

                        Oh that one’s easy: Source Lines of Code.

                        I’m not even kidding, SLoC is strongly correlated with pretty much every complexity metric out there. And for this one we have actual, honest-to-goodness scientific evidence. Of course we need to be careful: when a measure becomes a goal, it ceases to be a good measure. See Code Golf. But if you stay honest with style and keep your code readable and pretty enough to look at, SLoC is a remarkably good proxy for actual complexity.
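
                        And measuring it is about as cheap as a metric gets. A rough sketch (the comment prefix is an assumption; adjust per language):

                        ```python
                        # Count lines that are neither blank nor pure comments.
                        def sloc(path: str, comment_prefix: str = "#") -> int:
                            count = 0
                            with open(path, encoding="utf-8") as f:
                                for line in f:
                                    stripped = line.strip()
                                    if stripped and not stripped.startswith(comment_prefix):
                                        count += 1
                            return count
                        ```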

                        By saying these things, you are being highly reductive and talking superficially about complex things

                        I was writing a comment on Lobsters, not a PhD thesis. Asserting that stuff is complicated is mostly a cheap way to sound smart, I’m not having it. I’d rather know where exactly you disagree with me, because honestly I’m not sure we even do.

                        1. 1

                          I disagree with everything that you said. You said that performance is easy, I don’t think it is. You said that simple programs are faster by default, I don’t think they are.

                          1. 1

                            I didn’t just say performance is easy. I said that I “strongly suspect” that reasonable performance (which I later defined as 10 times slower than the maximum possible), is not hard. And then I added a qualifier: simplicity is the hard part.

                            In my experience the simplest solutions are rarely the most obvious. They require thought, and in many cases a couple actual attempts to achieve (Ousterhout recommends trying out 2 or more designs before committing to one, and my experience in API design matches this advice). And in my opinion simplicity is a prerequisite for performance: once your program has blown complexity out of proportion, it naturally becomes much harder to make it faster.

                            You could say performance is hard because its prerequisite, simplicity, is hard. I would agree with that.

                            1. 1

                              Can you tell me what exactly is simple about the fast inverse square root algorithm? I can’t think of a single optimization that doesn’t take advantage of some obscure, implementation-specific fact or set of facts, and exploits them in non-obvious ways.
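
                              For reference, here is the trick in question, as a Python sketch of the well-known C version - every step depends on the IEEE 754 bit layout and none of it is obvious:

                              ```python
                              import struct

                              def fast_inverse_sqrt(x: float) -> float:
                                  # Reinterpret the float's bits as a 32-bit unsigned integer.
                                  i = struct.unpack("<I", struct.pack("<f", x))[0]
                                  # The "magic" constant exploits the IEEE 754 layout to produce
                                  # a rough first guess at 1/sqrt(x).
                                  i = 0x5F3759DF - (i >> 1)
                                  y = struct.unpack("<f", struct.pack("<I", i))[0]
                                  # One Newton-Raphson iteration refines the guess.
                                  return y * (1.5 - 0.5 * x * y * y)

                              print(fast_inverse_sqrt(4.0))  # roughly 0.5
                              ```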

                              Said another way, it’s obvious that the exact opposite of what you’re saying is true: simplicity is guaranteed to lead to worse performance, and complexity is the prerequisite for optimization.

                              1. 1

                                Can you tell me what exactly is simple about the fast inverse square root algorithm?

                                This is where you are off topic. That algorithm falls squarely in the “hardcore optimisation” category, well beyond being simply performance-aware and avoiding pessimising your program for no reason. Watch the first few minutes of the video I’ve linked before, which explains the difference between optimisation and non-pessimisation.

                                (Besides, inverse square root is an easy case, because it’s a local trick that won’t affect the rest of the program. Many optimisations instead cross module boundaries, and make things even more complex. Blocking vs asynchronous for instance, at least in cases where language support is lacking.)

                                I can’t think of a single optimization that doesn’t take advantage of some obscure, implementation-specific fact or set of facts, and exploits them in non-obvious ways.

                                Well… watch the whole refterm lecture (the video I’ve linked is the first of the series), where Casey Muratori shows exactly such an example: he goes something like 1000 times faster than the Windows terminal of the time, using almost exclusively non-pessimisation techniques, and very little actual optimisation. And even the “hardcore” optimisation part was a straightforward application of SIMD that didn’t affect the rest of the program.

                                […] complexity is the prerequisite for optimization.

                                It’s more subtle than that. Yes, applying hardcore optimisation will make your program more complex. But to be able to do it in the first place, you need a program simple enough to be modified, and sufficiently free of needless waste that the optimisation will have a real impact.

                                Really, I highly recommend this refterm lecture: it’s informative, clear, concise, and fairly entertaining.

                                1. 1

                                  This explains everything. You, somehow, can listen to anything by Casey Muratori and take him seriously. That’s why we’ll be unable to get any common ground.

                                  1. 2

                                    I don’t know much about Muratori. Why don’t you take him seriously?

                                    1. 3

                                      He is a longtime game developer that has many rants and videos that I find to be very condescending. He also basically harassed a team at Microsoft when they didn’t immediately implement some performance improvements that he thought should be made to Windows terminal.

                                      Where he really lost me is, in watching one of his videos, he said that under no circumstances, in any software, in any way, is garbage collection an acceptable idea. And that if you just manage memory like he does, you also won’t have any memory safety issues, because memory safety isn’t actually hard.

                                      Basically, he has an Uncle Bob level of conviction, and doesn’t accept things outside of his bubble as valid. I can’t take someone seriously who doesn’t acknowledge that there are legitimate tradeoffs to things.

                                      1. 1

                                        has many rants and videos that I find to be very condescending.

                                        Yes he has. The lecture I linked to is no such rant though. It’s polite, structured, and as informative as any keynote.

                                        Where he really lost me is, in watching one of his videos, he said that under no circumstances, in any software, in any way, is garbage collection an acceptable idea.

                                        I have video footage from his performance-aware course (paywalled) showing that he most probably doesn’t believe that now. I mean, he defended Python of all things. If you could provide a link to that video and a time stamp, I’m interested.

                                        He also basically harassed a team at Microsoft…

                                        Unless you’re privy to information outside of this particular GitHub thread, he did not. Not even “basically”. I’ve just re-read his entire comment stream from the thread (that’s the third time now), and he remained polite the entire way. Even his last comment (the one you linked) wasn’t too bad, though clearly frustrated.

                                        His followers may or may not have sent a number of messages, that may actually amount to harassment. But (i) I have seen no evidence of that happening and (ii) I haven’t seen any call to do so from Muratori. If you have and can provide links, I’m interested.


                                        That thread did go sideways when Muratori ended up explaining why terminal rendering ought to be simple and fast. In a fair amount of detail and politely so, though I understand many would take issue with his conclusion:

                                        What this code needs to do is extremely simple and it seems like it has been massively overcomplicated.

                                        (I personally have no problem with this statement. I once had to say to some poor colleague that he had “blown complexity out of proportion”. Which he did, I divided his code by more than 5. So it does not surprise me one bit that some dev may have overcomplicated crucial parts of the Windows terminal.)

                                        And sure enough, Dustin Howett did take issue:

                                        I believe what you’re doing is describing something that might be considered an entire doctoral research project in performant terminal emulation as “extremely simple” somewhat combatively. […]

                                        Setting the technical merits of your suggestion aside though: peppering your comments with clauses like “it’s that simple” or “extremely simple” and, somewhat unexpectedly “am I missing something?” can be read as impugning the reader. Some folks may be a little put off by your style here. I certainly am, but I am still trying to process exactly why that is.

                                        (I personally suspect the reason he was put off was not Muratori’s style, but his actual meaning. I certainly wouldn’t feel very good if someone publicly told me that my code is 1,000 times slower than it could be.)

                                        Muratori quit the thread on the spot:

                                        When we’re at the stage when something that can be implemented in a weekend is described as “a doctoral research project”, and then I am accused of “impugning the reader” for describing something as simple that is extremely simple, we’re done. Consider the bug report closed.


                                        Now you would think that maybe Muratori was wrong as a matter of fact. That it was just some arrogant game dev thinking game devs know better because they are better because shut-up-games-are-hard or whatever.

                                        Except he was right, and he proved it. With code. That actually ran 1,000 times faster than the Windows terminal. While supporting loads of edge cases (including arabic and multi-cell characters), likely more than the Windows terminal itself. And he wrote it in a few days.

                                        About a year later, the performance of the Windows terminal massively improved (can’t find the link, I recall it was reported on Reddit /r/programming).

                                        1. 2

                                          This took an unrelated tangent - I apologize for getting personal about Muratori. He actually does have a lot of good content, even if some of the more arrogant stuff gets under my skin.

                2. 23

                  People don’t expect better

                  This is one that I really only realized recently, but I think these days a lot of end users understand on some level that the software they use isn’t for them; it’s made to benefit a tech company, and the tech company is going to do the minimum possible to help the user that still gets them to hit their quarterly OKRs.

                  1. 4

                    Wow, that’s cynical, and yet it somehow rings true. Users are treated as a resource by so many companies nowadays that they start to expect it.

                    1. 3

                      This + that much of software isn’t built for users: it’s built for CIOs, CTOs, CEOs, VPs, etc. to pick and subject their employees to, whether it’s good or not.

                      1. 2

                        I think these days a lot of end users understand on some level that the software they use isn’t for them

                        Sometimes the corporate software we use feels like it’s more for your manager than it is for you. You’re a slave to the system, you’re not empowered by it.

                      2. 14

                        I pay a lot of attention to people using computers in my daily life: clerks, receptionists, administrators, etc. And they’re pretty universally frustrated with how slow and unreliable their computers are. They’ll complain for hours if you give them the chance.

                        This is my experience. Just talk to people in your daily life who have to use software at work. There is such a clear need for higher quality software, and more importantly software that cares more about humans. This is intangible, and goes beyond functional correctness. This is also why I think the ChatGPT meme isn’t a threat at all, because there will always be an artistic quality to this kind of software.

                        Customers aren’t the clients

                        This is another huge one. It’s very common to sell to decision makers, and it’s very common for decision makers to have no shred of empathy for their constituents.

                        There’s just nothing that can remove my optimism that there is demand for quality in software, and that it’s worth investing in quality in all ways.

                        1. 7

                          The only way things ever change is if someone makes the first move, instead of blaming each other or waiting on the world to change. I can only control my own behavior, so I have to make the first move. I’m a developer, so I guess that means I have to voluntarily saddle myself with underpowered hardware, so I can feel the users’ pain, perhaps magnified. I’ve already put off upgrading my main PC; I’m currently using a Skylake laptop from 2016. But that’s still a quad-core i7-6700HQ with 16 GB of RAM, so maybe I need to go lower.

                          1. 9

                            I worked at Sun Microsystems in the early 1990s and I remember hearing that this was a policy on some teams that were building UI code. They were given low-end workstations so they would experience their UIs the same way end users would. Can’t say firsthand if it was actually true (I was working on low-level stuff) but it seemed like an interesting concept to me at the time.

                            1. 1

                              That is a smart idea and one heck of a smart manager! Sun was innovative in so many ways, this may very well be true.

                              1. 1

                                Facebook is reported to have had a similar concept, but with internet speed. On Tuesdays, they gave employees the option of experiencing their website as if they had a 2G connection. (source: https://www.businessinsider.com/facebook-2g-tuesdays-to-slow-employee-internet-speeds-down-2015-10)

                            2. 6

                              I’ve worked at places which (briefly) saw performance issues as existential threats, though IMO they handled it quite poorly - doing single “performance sprints” instead of thinking about fundamentally improving the process towards performance.

                              There is a solution to this! SRE is a methodology for fundamentally changing a company’s process toward more reliability and performance.

                              It starts with defining and deciding what these terms mean to the users of your product: “Do they need the document page to load in 100ms? 1 second?”, “How long are they willing to wait for the order to be confirmed?”, “How often can our image generation algorithm fail before they are no longer satisfied?” These decisions are recorded as service level indicators (SLIs), objectives (SLOs) and agreements (SLAs). Importantly, these are metrics.
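
                              A rough sketch of what “these are metrics” means in practice (the names and numbers below are illustrative, not from any particular SRE toolkit):

                              ```python
                              from dataclasses import dataclass

                              @dataclass
                              class SLO:
                                  name: str
                                  target: float        # e.g. 0.99 -> 99% of requests must meet the SLI
                                  threshold_ms: float  # latency that still counts as a "good" request

                              def sli(latencies_ms: list[float], threshold_ms: float) -> float:
                                  """Fraction of requests that were 'good' (fast enough)."""
                                  if not latencies_ms:
                                      return 1.0
                                  good = sum(1 for l in latencies_ms if l <= threshold_ms)
                                  return good / len(latencies_ms)

                              def error_budget_left(slo: SLO, latencies_ms: list[float]) -> float:
                                  """Remaining failure budget; negative means the SLO is being violated."""
                                  allowed_bad = 1.0 - slo.target
                                  actual_bad = 1.0 - sli(latencies_ms, slo.threshold_ms)
                                  return allowed_bad - actual_bad

                              # "The document page loads in 100 ms, 99% of the time."
                              page_load = SLO(name="doc_page_load", target=0.99, threshold_ms=100.0)
                              measured = [42.0, 87.0, 130.0, 95.0, 61.0]
                              print(sli(measured, page_load.threshold_ms))   # 0.8
                              print(error_budget_left(page_load, measured))  # < 0: fix performance first
                              ```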

                              By the end of this complicated, people-intensive process, the organization knows what its users expect, and whether it is currently delivering on those expectations. The next step is to have teams act according to these metrics. If the user is satisfied, work on new features. If they are not, fix performance.

                              1. 3

                                Thank you for mentioning this! Too often, folks see SRE as only providing reliability. But they miss that, once reliability is established, we can switch to an “optimizing” mindset which delivers performance improvements without diminishing reliability or other committed metrics.

                              2. 5

                                The “Linux is Linux” mention just cracked me up. And I have to agree with it – I could teach macOS to my parents, but not Linux. I just gave up teaching anyone the Linux desktop; it’s too “fragile” for a non-technical person’s daily use. Yes, there are equivalents, but none that are up to the mark. Heck, Google’s GSuite is way better than any word processor I’ve used on Linux. Enough digressing, though.

                                I liked the article, overall many points to ponder over. To the author: thanks for taking time to pen your thoughts.

                                1. 5

                                  My parents are using Linux, and neither of them is a technical person. They are much happier with LibreOffice than they were with MS Office, and MATE is an opt-out of MS (and Apple, for that matter) constantly changing the UI.

                                  1. 3

                                    Quote from the article:

                                    1. Someone’s gonna reply with how their grandmother uses Linux and it’s actually really easy! Last time I used Linux I had to learn how to cut power to my GPU because otherwise it was always on and dropping my laptop’s battery life to three hours
                                    1. 1

                                      The macOS UI hasn’t changed substantially in two decades.

                                      1. 1

                                        More power to them! :-)

                                      I never tried MATE, though it seems promising. I just hope it does not become yet-another-attempt-at-UI, like so many other Linux-based UIs.

                                        1. 3

                                          It’s a fork of GNOME2 made as a reaction to the GNOME3 attempt-at-UI. It had a pretty awkward initial development stage but now it works as well as GNOME2 did. I’m not saying that it’s a perfect DE for everyone, but it never does pointless redesigns and for me it does everything I want a DE to do, including things that most “modern” DEs no longer can do, like a fixed virtual desktop layout.

                                          1. 2

                                          Yes, the redesigns and the associated learning curve were one issue I had (hence I said “fragile”). It was as if the Linux DE was perpetually evolving. I will try out MATE. Thanks.

                                    2. 4

                                      This doesn’t quite agree with my personal experience. I pay a lot of attention to people using computers in my daily life: clerks, receptionists, administrators, etc. And they’re pretty universally frustrated with how slow and unreliable their computers are. They’ll complain for hours if you give them the chance.

                                      You quote something about polling users about what to work on next. Your experience is about something different. And the difference between the two is exactly the problem. When does someone ‘care’? When they say so or when they act in accordance with those words?

                                      People, including users, care about performance and reliability when you ask them whether they care about things in those categories. They will also complain about the lack of them. But they won’t prioritize them or vote for them with their wallet. Even users prioritize other things over speed in my experience.

                                      Also note that there is a whole class of features that is actually about performance: a slow app that does something you’d otherwise have to do by hand is still a performance improvement. A feature to do something in bulk saves a lot of time, even if the action is still slow. Etc.

                                  ‘Reliability’, like testing, has the additional problem that it is mostly invisible in the end product when done well. “We never received any bug reports, so we can save some money on QA” literally happens.

                                      1. 5

                                        People, including users, care about performance and reliability when you ask them whether they care about things in those categories. They will also complain about the lack of them. But they won’t prioritize them or vote for them with their wallet. Even users prioritize other things over speed in my experience.

                                        This is something that there are established research methodologies to measure. Generally, the approach is to present a user study audience with a list of things that they can choose between, including features and prices, and see which they pick. It was used to define the energy efficiency labelling laws, which worked so well that the scheme had to be recalibrated because no one could sell B-rated appliances and no one could tell the difference between all of the A-rated ones on the market (which were a tiny fraction when the labels were introduced).

                                        Part of the problem, as the article says, is that there aren’t quantitative metrics for this that users can see before they buy a product. There are then lock-in effects. Maybe Windows users would pay $5 more for a faster version of Windows when they upgrade, but there isn’t a diverse market of Windows variants that they can pick from, there’s just Windows or some other OS and switching involves replacing other bits of the stack (and learning a new UI). On the other hand, OS X 10.6, which was explicitly advertised as not introducing new features, just going faster, sold very well.

                                        1. 3

                                          But they won’t prioritize them or vote for them with their wallet.

                                          It’s unreasonable to expect “clerks, receptionists, administrators, etc.” to leave their jobs because the software they are forced to use is not performant enough.

                                          1. 1

                                            The “won’t prioritize them” applies in those cases. “Voting with their wallet” was meant in the case of consumer software.

                                            1. 1

                                              If someone is forced to use a particular piece of software for their work, what exactly do you think they can or should do to prioritize performance and reliability?

                                              1. 1

                                            They can indicate that they prefer performance improvements over other changes when asked about their priorities. In my experience, they rarely do.

                                        2. 4

                                          Good take.

                                          I hope that there can be a bit more balance to this discussion; discourse tends to skew negative because it attracts people who want to vent and/or justify their own disengagement. That ends up framing the characteristics of quality/reliability as too expensive or quixotic. The result is a mind-virus that people share amongst themselves: “unit testing is too hard,” “quality is for FAANG-level engineers,” “we need to use as many third-party deps as we can because library authors are super geniuses and we aren’t.”

                                          It’s all so self-defeating and sad.

                                          1. 3

                                            Yes!! Great article.

                                            People don’t expect better

                                            I really am pulling my hair out because of this. People just accept everything these days, software being slow and buggy as hell. It is horrendous.

                                            1. 1

                                              Like Windows 11 is a slow, bloated mess of an operating system. But the average normie is stuck with it.1 Mac is too expensive and Linux is Linux.

                                              Maybe people do care about performance and reliability

                                              If you really care about it, pay for it.