1. 32
  1. 5

    I rather enjoyed this. Although it’s worth noting that (from what I recall) he’s in the Games industry, so many of these bits (eg: latency) might be less important for other fields.

    1. 8

      I believe he mentions this in another talk at CppCon years ago… “People like you are the reason that it takes 30 seconds to open Word”.

      (You in this case being the fellow he was addressing, not like you you.)

      People ignore the learnings of the game industry at their peril.

      1. 11

        The flip side of this is a comment entitled “People like you are the reason we never ship”.

        Knowing when and what to optimize, and what not to bother with, is also incredibly important. Even in games, they don’t optimize everything down to the absolute fastest screaming bare-metal performance, because they know which parts of the final product need that and which parts don’t. Trying to maximally optimize everything all the time is a recipe for wasting huge amounts of resources and never being able to actually ship the product.

        1. 6

          This is … specifically exactly what the OP is talking about. Please read the article.

          1. 6

            The top comment mentioned that some things might be less important outside game dev. Then someone replied with the “people like you are the reason it takes 30 seconds to open Word” and “people ignore the learnings of the game industry at their peril”. That’s not a constructive reply, and I was pointing out why it was not a constructive reply.

            Please read the comment chain you’re replying in.

            1. 6

              It’s a harsh reply, but it is constructive. Perhaps not in isolation, but this comment thread gave enough context:

              • @friendlysock mentioned learning from the games industry. The game industry does ship.
              • Back in 2014, Mike Acton followed up with “We worry about our resources, we have to get shit out, we have to build this stuff on time.” (Click on the link given by @vamolessa)

              People like these aren’t the reason why we never ship.


              Now as a matter of fact, some very popular programs do take forever to boot. Photoshop was timed at about 7 seconds by Jonathan Blow around 2017. About as long as it took 20 years prior on the much slower computers of the 20th century. Even though Photoshop does the same thing in both cases (display the image it was asked to open). You could argue that 2017-Photoshop has so many more features than 1997-Photoshop, but none of those features matter when you’re just displaying the image.

              Oh and Photoshop’s menu: click on it, wait a full second. Click on it again… wait again. We could have forgiven them if it was fast the second time around (suggesting a cache mechanism or similar), but no, displaying a menu with less than 10 items is just that slow. And again, not justified by any expanded feature set. At that point we didn’t even get to use any of Photoshop’s features, beyond just displaying the image and opening up a menu.

              Long story short, Photoshop got much slower than it used to be in the span of 20 years, with no good reason. That is the hallmark of the people who did not even think about performance. Photoshop being a GUI program, there are performance constraints to worry about:

              • Startup times that exceed 200ms are noticeable. Above 1s the user is starting to wait.
              • Opening up a menu feels sluggish when it exceeds 100ms.
              • Any animation (even a drag animation) needs at least 30FPS to feel smooth enough. 60FPS if we can. Even 120FPS makes a difference in monitors that support it.

              Those performance goals may be easy to attain, but they do represent limits any boring GUI program dev should be mindful of.

              1. 5

                It may be harsh but it’s not constructive. In fact, it borders on being content-free. It’s effectively the same sort of grousing about “kids these days” that we see in so many other areas – back in my day, we cared about ~~a job well done~~ performance! We respected ~~our elders~~ cache coherency! And so on.

                The reality is that every generation of programmers is, for lack of a better term, shit on by their predecessors for slowing down all the software and shits on their successors for slowing down all the software.

                Modern software is slow to start up? Well, the “show a screenshot of your last state to hide that you’re not ready for user interaction yet” trick is popular nowadays, and is even built in to some operating systems now, but it was originally invented back in the good old days when people allegedly cared about performance! Why didn’t they just write faster software so they wouldn’t need those tricks? And if they didn’t and got away with it, where’s the justification for suddenly declaring it unacceptable now?

                And on the topic of games specifically, the industry is infamous for having trouble shipping (though not because of ruthless focus on software performance). Games are known for everything from massive “crunch” time burning out developers to needing multi-gigabyte patches downloaded on their release day just to be somewhat playable; this is not the sign of a disciplined industry in which quality and rigor are infused at every level. So if I’m going to get any “learnings”, it’s not going to be from game dev as a positive model to emulate.

                So when we get right down to it, dismissive one-liners about performance are great for getting drive-by upvotes and terrible for understanding the complex, grimy reality of both the world and actual software. Which is why I replied as I initially did, pointing out that “performance” is a more complicated thing than the one-liner pretends it is.

                1. 3

                  In isolation, I agree that “get off my lawn” is not constructive. But you make another point that I feel must be addressed:

                  The reality is that every generation of programmers is, for lack of a better term, shit on by their predecessors for slowing down all the software and shits on their successors for slowing down all the software.

                  You’re right, they absolutely do. As do I. Thing is, software is slowing down. Sometimes for good reasons (prettier renderings, advanced searches…), and sometimes for no reason at all.

                  I gave the Photoshop example, but I have another: my phone slowed down over the years to near unusable levels, yet I installed nothing since I first set it up, and I didn’t use or notice any new functionality or eye candy… all while making sure the disk had at least 1.5GB of spare space. My phone would be more valuable today if, instead of pushing their crappy updates, they had stopped at fixing security vulnerabilities.


                  “performance” is a more complicated thing than the one-liner pretends it is.

                  I’m not sure the one liner pretends such a thing. Let’s look back at what Mike Acton was actually responding to:

                  As I was listening to your talk, I realise that you are working in a different kind of environment than I work in. You are working in a very time constrained, hardware constrained environment.
                  [elided rant]
                  Our company’s primary constraint are engineering resources. We don’t worry so much about the time, because it’s all user interface stuff. We worry about how long it takes a programmer to develop a piece of code in a given length of time, so we don’t care about that stuff. If we can write a map in one sentence and get the value out, we don’t care really how long it takes, so long —

                  Then Mike Acton interrupted with

                  Okay, great, you don’t care how long it takes. Great. But people who don’t care how long it takes is also the reason why I have to wait 30 seconds for Word to boot for instance.

                  Most people who tell you that performance is not that important in such and such settings say so to avoid thinking about performance. They don’t measure anything, plan anything, or even eyeball anything: not how much data they’re supposed to process, not what their time constraints are, not what the capabilities of the hardware they’ll be shipping on are.

                  Who is oversimplifying performance here? The guy who says it’s important, or the guy who acts at work as if computers had infinite speed? The fact is, performance always matters to some extent:

                  • There always is an upper bound to acceptable latency.
                  • There always is a lower bound to acceptable throughput.
                  • There always will be consequences for exceeding those bounds.

                  The variables are how hard it is to respect those acceptable bounds, and the consequences of exceeding them. Sometimes being fast enough is very easy, and even if we’re not it’s no big deal. Sometimes the bounds are very tight and missing them kills people. Just pretending performance isn’t important to your niche doesn’t help you pinpoint where you are on that spectrum. GUI performance for instance is more important and harder to meet than most people give it credit for. Especially for popular programs, where waiting 5 seconds for something to boot means that a million users collectively lose about 35 work weeks per boot.
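                  (Sanity-checking my own figure with a back-of-the-envelope snippet, assuming 40-hour work weeks:)

```python
# 5 seconds of boot wait, across a million users,
# converted into 40-hour work weeks.
seconds_lost = 5 * 1_000_000
hours_lost = seconds_lost / 3600
work_weeks = hours_lost / 40
print(round(work_weeks, 1))  # prints 34.7, i.e. about 35 work weeks
```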

                  1. 7

                    “But there is some slow software today” is not really a counterargument here. Sure, there’s software that’s slow today. You know what? There was software that was slow twenty years ago when I started out as a programmer. There was software that was slow before that. “Software is slow” is not some sort of unique new unprecedented problem.

                    Meanwhile: I bet Mike Acton doesn’t work exclusively in hand-rolled assembly. I bet he probably uses languages that are higher-level than that. I bet he probably uses tools that automate things in ways that aren’t the best possible way he could have come up with manually. And in so doing he’s trading off some performance for some programmer convenience. Can I then retort to Mike Acton that people like him are the reason some app he didn’t even work on is slow? Because when we reflect on this, even the context of the quote becomes fundamentally dishonest – we all know that he accepts some kind of performance-versus-developer-convenience tradeoffs somewhere. Maybe not the same ones accepted by the person he was trying to dunk on, but I guarantee there are some. But haggling over which tradeoffs are acceptable doesn’t let him claim the moral high ground, so he has to dunk on the idea of the tradeoff itself.

                    So again: “People like you are the reason that it takes 30 seconds to open Word” is not a useful statement. It’s intended solely as a conversation-ending bludgeon to make the speaker look good and the interlocutor look wrong. There’s no nuance in it. There’s no acknowledgment of complexity or tradeoffs or real-world use cases. Ironically, in that sense it almost certainly violates some of his own “expectations” for “professionals”. And as I’ve explained, it’s inherently dishonest!

                    In other words: there is nothing constructive about it. It was not good when first said, and it was still not good when repeated in this thread. It will not become good if you keep repeating it and trying to justify it on grounds that slow software exists, because slow software has always existed and there never was a magical age of Real Programmers® who Cared About Performance™.

                    1. 3

                      It’s intended solely as a conversation-ending bludgeon to make the speaker look good and the interlocutor look wrong. There’s no nuance in it.

                      Correct.

                      In other words: there is nothing constructive about it. It was not good when first said,

                      It was useful the first time it was said.

                      At that time, Mike Acton had just spent over an hour explaining why performance matters, why it is not a niche concern, and how to achieve it effectively. And then someone comes in explaining that the speaker comes from a very special niche, whose lessons don’t apply elsewhere, implying that almost everyone could safely ignore the whole keynote. All that packaged in a long-winded statement that wasn’t even a question.

                      We might feel sorry for the poor fellow, but he was undermining the core point of the keynote with weak arguments. Mike Acton was right to end it right then and there.


                      Funny thing is, when I first stumbled upon that (highly recommended, by the way) keynote, I was working on a little GUI program that replayed radar data (planes around an airport kind of radar). We had a potentially big file (up to a gigabyte) that contained the data, and the software was supposed to help us navigate it. Loading the whole thing was deemed too slow, so instead we displayed the data at a given point by re-reading, re-parsing, and re-displaying everything up to that point.

                      Except it wasn’t that slow, actually. The file format was simple (I wrote the parser myself), computers of the time already had enough RAM to retain the whole file, and some pre-processing would have tremendously sped up getting from one point to another, to the point where the user could probably grab a time slider and navigate at will, and see the position of each plane update at 60 FPS. Like video games, only easier (we didn’t have that much data to display).
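                      The kind of pre-processing I have in mind is roughly this (a Python sketch with a made-up record format; the real file was a custom binary format and the real code was C++/Qt):

```python
import bisect

# Hypothetical, already-parsed records, in chronological order:
# (timestamp_in_seconds, plane_id, x, y)
records = [
    (0.0, "AF123", 10.0, 20.0),
    (0.5, "BA456", 11.0, 21.0),
    (1.0, "AF123", 10.5, 20.5),
    (1.5, "BA456", 11.5, 21.5),
    (2.0, "AF123", 11.0, 21.0),
]

# Pre-processing: a parallel array of timestamps lets us binary-search
# for any point in time instead of re-reading the file from the start.
timestamps = [r[0] for r in records]

def state_at(t):
    """Last known position of each plane at time t."""
    end = bisect.bisect_right(timestamps, t)  # O(log n) seek
    positions = {}
    for _, plane, x, y in records[:end]:  # replay records up to t
        positions[plane] = (x, y)
    return positions
```

                      Add periodic snapshots every N records and even the replay step becomes cheap. None of this is rocket science once the whole file fits in RAM.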

                      The reason we didn’t do that? We never measured anything that mattered. What few measurements we did make involved Qt refreshing the whole screen in real time, on the main thread, slowing everything down. They concluded that loading the file, even without Qt’s display, would be too slow. I tried to raise the issue, but their minds were set. They would accept sluggish navigation because they believed, without having actually measured it, that loading was too slow.

                      So I watched that keynote while I was having performance problems at work on a GUI program. The keynote gave me clues about where I should look and what I should measure. It gave me some hope that, considering how fast computers were at the time (2015), I should indeed expect more performance than my colleagues seemed to hope for. And then there’s this dude telling the speaker that his keynote doesn’t apply to my work?

                      Let’s just say I don’t remember having much empathy for the poor dude.

                      1. 5

                        We might feel sorry for the poor fellow, but he was undermining the core point of the keynote with weak arguments. Mike Acton was right to end it right then and there.

                        I dunno, all these things that are supposed to make me like and respect Mike Acton are having the opposite effect. For his sake I hope it’s just that you’re doing a terrible job of making the case for him.

                        Your own quote says the person was pointing out that game dev is a field that’s more time- and hardware-constrained than other areas of programming. Which is true! The tradeoffs that make sense for other fields of programming aren’t guaranteed to make sense in game dev. But the requirements that game devs operate under – or, let’s be brutally honest, the requirements game devs like to publicly claim they operate under – aren’t guaranteed to make sense for other fields of programming.

                        Continuing the theme of brutal honesty: this is something I see way too much of and have, at this stage in my career, zero interest in putting up with, namely: people who generalize from their case to all cases. So the requirements they have are the requirements everyone must have. The background and skills they need are the background and skills everybody needs. Tradeoffs that are unacceptable for them are unacceptable for everyone. Things they have to know and worry about are things everybody has to know and worry about.

                        Except… no. No, actually it’s the case that different fields of programming are different. A tradeoff that doesn’t make sense for him may well make a lot of sense for me. A skill or technique that’s indispensable to him may be utterly useless to me. And my entire beef with him, and people like him, is the inability to recognize and accept that. If he showed up to a technical session with one of my teams demanding to know things like the exact memory layout of all our data, he wouldn’t be laughed out of the room (since my teams don’t roll that way), but he might be privately pulled aside to have some things explained to him.

                        And that is why this allegedly amazing dunk about performance, coupled with his “expectations” for “professionals”, doesn’t impress me at all. They just make him seem full of himself and ignorant of much of anything beyond his own particular niche.

                        1. 4

                          I dunno, all these things that are supposed to make me like and respect Mike Acton

                          I was defending this particular statement more than Mike Acton himself. I could try and defend his keynote as a whole, but… just watch it.

                          people who generalize from their case to all cases.

                          I see why you’d see that pattern here. Still, performance is not a niche concern. Most of us care about performance at one point or another, and I saw this throughout my career:

                          • First big job, I’m working on some GIS software written in C++. The thing is slow, the code has an inheritance hierarchy nine levels deep, and some of the processing I wrote myself had noticeable delays (it was simple image processing).
                          • Second job was the radar thing I mentioned in the grandparent comment.
                          • Fourth job was a real time testing environment. Constraints weren’t tight, but I still needed not to pessimize everything.
                          • The performance of the crypto library I wrote is an important selling point.
                          • Sixth job was an ADAS system for a car. Embedded and all, though the embedded computer was still pretty powerful. Still, map matching (from GPS coordinates to a point on a given road) was a bottleneck.
                          • Seventh job was a Qt GUI program displaying some data from a monitoring box. I quickly noticed that some ways of doing stuff would render the program sluggish if I didn’t pay attention, and I hardly had any data.
                          • Overall, half of my gigs compiled slowly enough that it was annoying.

                          Now a front-end web dev might not care that much about the CPU cache (what with the browser being such a tall tower of abstractions), but they certainly would care about response time, network latency, or bandwidth. They need to (and do!) measure that stuff to get a decently performing website out the door. I would guess most of the details of Mike Acton’s talk wouldn’t apply to them, but the philosophy of it (pay attention to the hardware you’re working with) applies even when your bottleneck is a Cisco router you don’t even control.

                          his “expectations” for “professionals”, don’t impress me at all.

                          That list is very long, and some of the items are definitely more important than others depending on one’s working environment. Still, there aren’t that many criteria in there I would be comfortable not meeting.

                          1. 1

                            Have you watched his talks, out of curiosity?

                            What sort of software do you write? Django/Python, right?

                            1. 2

                              If you’re going to do “until you’ve watched all his talks and read all his material you can’t criticize it”, I’m going to require the same standard of you: that you read everything I’ve published/watch every talk I’ve given over the course of my career before disagreeing with me.

                              Meanwhile, I’ve explained at length why I think it was a bad thing to say here and why I think the people defending it here are wrong. If you have nothing more to contribute to that, I’ll move on.

                              1. 1

                                I don’t know if you actually understand and can articulate his position and claims (and judging by your posts, I suspect you can’t), which changes how to discuss this with you.

                                (You don’t have to agree with them, by any stretch! I just would find it helpful if I know that you have familiarity with them.)

                                I also have no idea if you have any relevant experience with programming in this style–if you’ve done work where performance as perceived by the end user has never been a problem or worked in languages where you simply can’t do a lot of optimization or benchmarking, that’s a different discussion to have. I’m trying to avoid talking past each other.

                                If you want to boil this down to “mean bully programmers chest-thumping about MAH REAL PROGRAMMING AND MUH PERFORMANCE and that’s not something that matters in civilized society, the goons”, there’s not much we can talk about.

                                1. 1

                                  I’ve explained my position at length. If you want to boil it down to whatever this line in your comment is:

                                  mean bully programmers chest-thumping about MAH REAL PROGRAMMING AND MUH PERFORMANCE and that’s not something that matters in civilized society, the goons

                                  then I agree there’s not much we can talk about.

                      2. 2

                        Who’s oversimplifying? I’d say the guy who cares more about performance than the paying customers do. The guy who says he’d still prioritize it in a business development position.

                        Maybe Word takes 30 seconds to open: the customers don’t seem to care. Sorry that bothers him.

                        The reason a game developer might care a lot about performance is, unsurprisingly, because that’s something their paying customers care a lot about.

                        It’s still a pretty good list.

                        1. 2

                          Who’s oversimplifying? I’d say the guy who cares more about performance than the paying customers do.

                          That guy is certainly not Mike Acton. The guy ships games that must sell at some point.

                          Maybe Word takes 30 seconds to open: the customers don’t seem to care.

                          That’s part of it. A bigger part is that customers, even more so than programmers, have no clue what performance they ought to expect. The biggest part though, I think, is just that customers often have no choice. Too little competition, if any, depending on your niche.

                          Except for games. Lots of games out there, lots of competition. Sluggish games go out of business more often than adequately performing ones. Also, there are lots of examples of fast games, so gamers actually know what they can demand.

                      3. 1

                        And if they didn’t and got away with it, where’s the justification for suddenly declaring it unacceptable now?

                        Maybe the justification is the fact that computers are faster now, so all of the excuses that software used to have for being slow (at most tasks, at least) are now gone. This commenter certainly thinks so.

                        To be honest, the grousing about the performance of software these days has taken its toll on my morale, and possibly productivity, this year. The main product that I’ve developed this year is based on Electron. Need I say more? We all know how much Electron is hated by message board commenters.

                        And yes, the startup time of my specific Electron app is noticeably sluggish. I reluctantly chose it because I knew that using it would increase my chances of shipping in a timely manner. But then, the feeling that I was inflicting another piece of crappy software on the world may have hurt my productivity, especially before the public beta, when we started getting responses from real users.

                        As far as I can recall, just one user complained about the product making their older computer hot (and even that user went on to be happy with the product in general), and nobody specifically griped about it using Electron.

                        And yet, maybe the message board commenters are right; maybe my users just don’t know what they should expect, and I really am inflicting more crap software on people. Maybe the world’s right to be free of crap software trumps my need to ship a product (and by extension, to make a living as a developer of such a product). The one thing that reassures me that I’ve done the right thing is that this software solves a problem that, as far as I and my cofounder know, no other software has solved. So I can reassure myself that it’s not just about making money.

                        1. 3

                          Maybe the world’s right to be free of crap software trumps my need to ship a product (and by extension, to make a living as a developer of such a product).

                          On the other hand, maybe your client’s need to have software is more important than your need to give them fast software. If you don’t ship in a timely manner, then your users don’t have any software that solves their needs.

                    2. 4

                      “people ignore the learnings of the game industry at their peril”

                      The reason I mentioned this is that the games industry has learned a lot that the rest of tech should probably be paying attention to:

                      1. Games are written to be interacted with, for multiple hours, by people who are solely focused on the game. That means that any UX issues are going to be picked up on, any crashes are going to be experienced, any issues will be felt by the consumer. This software doesn’t respond easily to “just refresh the page”.
                      2. Games have to run on a range of genuinely different platforms and operating systems. “We just need a polyfill” doesn’t cut it when you’re targeting x86, Xenon, and Cell.
                      3. Games have hard scheduling deadlines because of marketing (e.g., “ship by Christmas or don’t come back here”).
                      4. Games have done a lot of exploration around how reusable code can be, with engines like Unreal and Unity on the extreme end. They’ve been doing this for decades (going back to, say, old id licensing).
                      5. Games in a lot of ways pioneered distributed and remote development, going back to both mod teams and then indie teams with contractors.
                      6. Game code in a lot of ways contains both the most pessimistic workloads for computers as well as complicated near-ideal numerical workloads. Programming and tuning in this regime is a whole thing.
                      7. AAA games (and smaller studios as well) have a really clear demarcation of programmers/content folks/tooling folks that can teach us a lot about how to manage large multidisciplinary projects and empower teams.
                      8. Some of the first widely-deployed VMs came out of the games industry: Quake, SCUMM, the Z-machine.
                      9. Some things like React’s rendering model were basically already old-hat by near a decade in games.

                      I didn’t mean that as a throwaway comment: there is a wealth of untapped knowledge and experience from that industry and its veterans, and I honestly cringe at the failure to learn from its suffering we see elsewhere in tech.

                      I’m sure there’s stuff the games industry can learn as well–say, source control outside of the world of Perforce–and more generally people should just look around and see what other wisdom exists outside of their bubble.

                      1. 6

                        Games also usually ship and then are never meaningfully updated again. That makes them unlike most other software produced in industry. Maintenance matters.

                        1. 1

                          I believe this is flatly incorrect. You’ll see updates tail off usually after a few years, but there is very much a culture of patching.

                          Look for changelogs for Halo, Counter-Strike, Minecraft, Natural Selection (going back like a decade), Tremulous…even porn games.

                          This was perhaps true back with cartridge games, but patches and updates have been a semi-regular thing since the mid 00s.

                          1. 2

                            I knew someone was gonna do this: call out that superminority of games that established longevity, and try to claim it as a refutation of my point. I knew it.

                            Among all games produced, what percentage do you think are updated on any kind of regular cadence? 1%? 10%? 50%? Among those games, how do you think their release cadence compares to typical software in industry? Just as fast? Half as fast? Twice as fast?

                            I don’t have data on the precise answers to those questions, but they’re in the neighborhood of 1% and 100-1000x less often, respectively. I can’t remember the last org I’ve worked for which didn’t deploy new versions multiple times per day. The domains are fundamentally incomparable.

                            1. 1

                              We also only play 0.1% of all games or something like that. There are so many games out there that just fall flat with nobody playing them. Or so few people. So we need a way to count:

                              • Do we give the same weight to each game?
                              • Do we count by how much effort it took to write?
                              • Do we count by number of players?

                              My point is, there is a huge difference between “games” and “relevant games” (for various definitions of “relevant”). And there’s a good chance we should only care about relevant games, meaning games we actually play. There’ll be a bigger proportion of long-standing games in that restricted space. Or we could go by studio, and only pick those studios that survived enough years, or made a certain number of games before going out of business. The effect of this selection bias should be similar.

                              The whole point of this sub-thread is to learn the lessons of the video games industry. Of course this is not about learning the lessons of crappy (or unlucky) studios that went out of business before they got to release their first game. You want to learn from the cream of the crop, the successful ones. Whatever they are doing is more likely to work.

                              I can’t remember the last org I’ve worked for which didn’t deploy new versions multiple times per day.

                              I can’t remember the last org I’ve worked for which deployed new versions faster than twice a month. I don’t work in games, but I did touch various domains. Just not the web.

                              1. 3

                                My read on “the lessons of the video game industry” hyped by the guy in the OP and reinforced by posters in this thread is that they can broadly be described as performance optimizations.

                                I think you can divide the set of all possible performance optimizations into two broad categories, let’s call them “not being stupid” and “being smart” maybe.

                                Not being stupid means if you need to get 1000 records from an API, you make 1 batch request, and not 1000 individual requests. Or maybe using assembly optimized functions in your language’s standard library to find substrings in a string, rather than walking each character manually. Every program benefits from not being stupid. It brings substantial benefits, and generally has few to no costs, in terms of coherence or maintenance or whatever. No matter how often your code churns, or how often you deploy, not being stupid is probably going to pay off.
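                                To make the substring half of that concrete, here’s a toy sketch (the function name and the naive loop are mine, purely for illustration): the hand-rolled character walk and the standard library return the same answer, but `std::string::find` typically dispatches to tuned `memchr`/`memcmp`-style routines someone already optimised for you.

                                ```cpp
                                #include <cstddef>
                                #include <string>

                                // Hand-rolled "walk each character" substring search, for comparison
                                // with the standard library. Returns the index of the first occurrence
                                // of `needle` in `haystack`, or std::string::npos if absent.
                                std::size_t naive_find(const std::string& haystack, const std::string& needle) {
                                    if (needle.empty()) return 0;
                                    if (needle.size() > haystack.size()) return std::string::npos;
                                    for (std::size_t i = 0; i + needle.size() <= haystack.size(); ++i) {
                                        std::size_t j = 0;
                                        while (j < needle.size() && haystack[i + j] == needle[j]) ++j;
                                        if (j == needle.size()) return i;
                                    }
                                    return std::string::npos;
                                }
                                ```

                                Both give identical results; reaching for the stdlib one is simply the “not being stupid” default.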

                                I don’t think “the lessons of the video game industry” refer to this class of performance optimization. Every time I read about cool performance stuff in video games, it’s stuff like the inverse square root function in Quake. This is a “being smart” optimization. This stuff can be super powerful, and can bring huge benefits. But! It also almost always carries enormous costs, too. It is definitionally harder to understand, and therefore maintain, than the “not smart” alternative. Those costs are subtle, long-lived, difficult to measure, and tend to create pathological downstream consequences which are practically impossible to unwind. Multiply those costs over an entire code base, a team of engineers, the release cadence required to satisfy business needs, over however long the software is expected to remain in service to deliver value, and the ROI goes negative really quickly.

                                These “being smart” optimizations are in direct tension with many other critical properties of software engineering in the large. And long-lived, highly-played, frequently-patched games suffer from that tension, quite visibly! I play PUBG, which is actually quite ancient and not particularly performant, but nevertheless gets patches every month or so. That’s glacial compared to most industries. And, even at that glacial pace, when they release a patch — which is usually 10–50GB in size (!) — they literally shut down the entire infrastructure for the game, every server, in the entire world, for 8–12 hours. You can’t play the game at all during an upgrade. That’s the price you pay if you want to try to straddle the boundary we’re discussing.

                                Anyway. Just some thoughts.

                                1. 1

                                  I think you can divide the set of all possible performance optimizations into two broad categories, let’s call them “not being stupid” and “being smart” maybe.

                                  That’s a good first order approximation. Casey Muratori did a similar classification using different names (not being stupid is called “non-pessimization”, being smart is called “optimisation”). He has a third category called “fake optimisation” but we can ignore that here.

                                  I don’t think “the lessons of the video game industry” refer to this class of performance optimization.

                                  It really does.

                                  If you took some time to watch Mike Acton’s talk, you would have noted that he’s also about not being stupid. One of the most important things he mentions is the memory hierarchy, and its enormous effect on performance. Among other things he notes that having a boolean in a structure is generally pretty stupid, because you often end up loading an entire cache line just to get one single bit of information. Ideally you’d want to “not be stupid” about memory loads, and make sure that when you load a cache line, it’s packed full of useful information your comparatively blazing fast CPU can process right away.
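                                  A minimal sketch of what that looks like in code (my own toy example, not Acton’s): one padded bool per struct wastes most of every cache line it drags in, while a densely packed bit array answers the same “is it alive?” question with 512 flags per 64-byte line.

                                  ```cpp
                                  #include <cstddef>
                                  #include <cstdint>
                                  #include <vector>

                                  // "Array of structs": each entity carries a bool (plus padding), so a
                                  // liveness scan loads whole cache lines for one bit of information each.
                                  struct EntityAoS {
                                      float x, y, z;
                                      bool alive;  // padding typically rounds the struct up to 16 bytes
                                  };

                                  // Data-oriented alternative: keep the flags in their own packed bit
                                  // array, so scanning liveness touches 64 entities per 8 bytes loaded.
                                  struct EntitySoA {
                                      std::vector<float> x, y, z;
                                      std::vector<std::uint64_t> alive_bits;  // 1 bit per entity

                                      bool alive(std::size_t i) const {
                                          return (alive_bits[i / 64] >> (i % 64)) & 1u;
                                      }
                                      void set_alive(std::size_t i, bool v) {
                                          std::uint64_t mask = std::uint64_t{1} << (i % 64);
                                          if (v) alive_bits[i / 64] |= mask;
                                          else   alive_bits[i / 64] &= ~mask;
                                      }
                                  };
                                  ```

                                  One 64-byte cache line of `alive_bits` covers 512 entities’ flags; the same line of `EntityAoS` covers only about four entities.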

                                  Many people would put “pay attention to your memory layout” in the “being clever” category, but personally I would disagree with that assessment. Not knowing the constraints of your environment is not an excuse for engineers of any kind, and the hardware we use is a pretty important set of constraints.

                                  Guys like Casey Muratori and Mike Acton actually fairly rarely get to be smart. They just do the “not being stupid” part until it’s not fast enough. When that happens they measure stuff, find the cause, optimise it… the standard “measure before you optimise”.

                                  Every time I read about cool performance stuff in video games, it’s stuff like the inverse square root function in Quake.

                                  That’s an obvious selection bias, where the most impressive stuff gets pushed to the front, until you get the impression that it’s all there is. John Carmack doesn’t do that kind of stuff very often. First because he tends to optimise at the system level, but also because he chose the simple stuff over the ultra-efficient stuff when it was good enough. (Source: Jonathan Blow recalling some criticism he voiced over level loading or something, I don’t remember which video. He noted that his criticism was wrong, for the exact reasons you cited: the complexity of the smart approach has its own costs.)


                                  Those PUBG updates are a bummer. However, recall what @friendlysock said here:

                                  My point was there is a lot to learn from the games industry. That doesn’t mean blindly aping practices or tech, that doesn’t mean they’ve discovered the One True Way of doing anything–in fact, a lot of their practices like crunch are abhorrent.

                                  But…the industry is a pressure cooker for software engineering and has created a ton of data, both examples and counterexamples, about what seems to work and what doesn’t. There is also for many studios a culture of public post-mortems, sharing what they’ve learned about during their development.

                                  Of course it’s not all rainbows and unicorns.

                                  1. 3

                                    I don’t think “the lessons of the video game industry” refer to this class of performance optimization.

                                    It really does.

                                    When you define some category set (software industries) and then isolate one element of that set (the video game industry) the point is to highlight the unique properties of that element. But “not being stupid” optimizations are — nearly by definition — equally useful in all software domains, and you can find them, in equal measure, everywhere. The unique thing about games in the context of performance is precisely the clever stuff: the stuff that’s architecture-specific, the stuff that exploits every available trick and deals with the impact of the costs.

                                    At least, this was my perspective when I responded. YMMV.

                                    One of the most important things he mentions is the memory hierarchy, and its enormous effect on performance. Among other things he notes that having a boolean in a structure is generally pretty stupid, because you often end up loading an entire cache line just to get one single bit of information. Ideally you’d want to “not be stupid” about memory loads, and make sure that when you load a cache line, it’s packed full of useful information your comparatively blazing fast CPU can process right away.

                                    There is a performance threshold beyond which the efficiency of cache lines is meaningful. And I agree that there is a large class of software for which this is an important detail to be aware of. But that class of software is still a tiny minority of all software produced, and it would be a disaster to incept the notion that a performance pathology, under certain architectures, in certain narrow circumstances, is sufficient motivation to avoid using a primitive type in a primitive language construct in general.

                                    If I’m hired to work on some line-of-business software written as a web service with a p99 latency SLO of 10ms, then, in general, the things that impact performance are several layers of abstraction above cache line performance.

                                    If I see some code that models a truly boolean bit of state as a literal bit, packed into a uint8 with 5 other bits of unrelated state (or whatever)? Absent a benchmark or some sort of evidence that this optimization is important, that’s actively a problem. If I ever touch that code I’m refactoring it as a first step.

                                    Many people would put “pay attention to your memory layout” in the “being clever” category, but personally I would disagree with that assessment. Not knowing the constraints of your environment is not an excuse for engineers of any kind, and the hardware we use is a pretty important set of constraints.

                                    Memory layout, allocations, bytes on the wire, CPU cycles spent encoding and decoding data types, binary size, infinitely many other metrics that impact performance — all of them carry benefits and incur costs, and all of them are, or should be, managed based on an engineering calculus that weighs both costs and benefits appropriately. There are plenty of circumstances where programmers should think about and optimize memory layout of their types, where the benefits outweigh the costs. There are also plenty of circumstances where the costs outweigh the benefits.

                                    (Tons of code, maybe even most code, is written without any knowledge at all of the hardware on which it will run!)

                                    1. 1

                                      But “not being stupid” optimizations are — nearly by definition — equally useful in all software domains, and you can find them, in equal measure, everywhere.

                                      Equally useful everywhere, yes. That’s why Mike Acton’s talk applies pretty much everywhere. I’m less sure you can find them in equal measure everywhere though. And I know for a fact that the lack of stupidity is not evenly distributed. Some people think Mike Acton’s talk doesn’t apply to them, and Photoshop still takes too long to boot, for instance.

                                      There is a performance threshold beyond which the efficiency of cache lines is meaningful.

                                      Note that this performance threshold is much lower than the performance threshold beyond which compiler optimisations are relevant. Another point of Acton’s talk was that 90% of the performance of your program comes from good memory access patterns. The compiler can only optimise the remaining 10%. That makes this relevant for a relatively large class of software.
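                                      A classic illustration of the access-pattern point (my example, not from the talk): both functions compute the same sum over a row-major matrix, but one streams through memory with stride 1 while the other strides by a whole row per step, and that difference is exactly the part the compiler cannot fix for you.

                                      ```cpp
                                      #include <cstddef>
                                      #include <vector>

                                      // Row-order walk over a row-major matrix: stride 1, cache friendly.
                                      double sum_row_major(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
                                          double s = 0;
                                          for (std::size_t r = 0; r < rows; ++r)
                                              for (std::size_t c = 0; c < cols; ++c)
                                                  s += m[r * cols + c];
                                          return s;
                                      }

                                      // Column-order walk over the same data: stride `cols` doubles,
                                      // missing cache far more often on large matrices.
                                      double sum_col_major(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
                                          double s = 0;
                                          for (std::size_t c = 0; c < cols; ++c)
                                              for (std::size_t r = 0; r < rows; ++r)
                                                  s += m[r * cols + c];
                                          return s;
                                      }
                                      ```

                                      Identical results, wildly different memory behaviour at scale.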

                                      Heck, I have a recent example from my own experience: see this code? Nicely formatted with Pygments so I could apply some CSS pretty colours. That thing is so slow that it takes more than a second per page of code. Very annoying; I have to turn it off when doing quick iterations. Sure, I could have set up an incremental build to begin with, but come on: the thing is little more than a parser, what could possibly justify it being so slow? But that illustrates my point: even this little tool, used on relatively little code, has performance requirements.

                                      Absent a benchmark or some sort of evidence that this optimization is important, that’s actively a problem.

                                      Good thing Acton insisted that you should measure first.

                                      Memory layout, allocations, bytes on the wire, CPU cycles spent encoding and decoding data types, binary size, infinitely many other metrics that impact performance — all of them carry benefits and incur costs, and all of them are, or should be, managed based on an engineering calculus that weighs both costs and benefits appropriately.

                                      Yes. Acton pretty much said as much.

                                      (Tons of code, maybe even most code, is written without any knowledge at all of the hardware on which it will run!)

                                      The range of hardware we actually target remains finite. Ignoring that range doesn’t help much.

                        2. 4

                          Also, games are mostly bought by the same person who uses them. Entreprise software is often inflicted on users by higher-up deciders who don’t care about the actual experience of using it.

                          1. 2

                            There is a danger in point 1. Games are meant to be fun, not productive. An optimal game UI for efficiency would have a single button to start the game, it would then show the end of game sequence: congratulations, you have completed the game. Games are all about taking a long path to achieve something, productive UIs are the opposite of this.

                            Given how many games use WINE for porting, I’m not sure I buy the second argument.

                            Duke Nukem Forever? There are a lot of high-profile examples of games with multi-year delays. The E3 reporting every year has a load of articles about games that miss their deadlines.

                            Unity and Unreal are much younger than a lot of the codebases that I’ve worked on in non-game contexts. Games did not invent reusable libraries and were some of the later adopters. The Unreal engine exists at all because it started as a completely new implementation of an FPS engine rather than licensing an existing one.

                            Distributed communities for open source development have been the norm since the era when a large team for game development was 6 people. Game companies have largely followed industry trends here, not led them.

                            Visual Basic and various Pascal compilers were shipping interpreted and JIT’d VM code from the ‘80s. Most of the game VMs learned from these, not the other way around.

                            1. 3

                              Given how many games use WINE for porting, I’m not sure I buy the second argument.

                              Today, most consoles are some variant of x64. This was emphatically not the case 10 years ago, and definitely not the case 20. MIPS, PowerPC, x86, Cell (which is PPC with helper friends), all kinds of weird shit. Even inside x86, the 386/486SX/486DX/Pentium issues–setting aside all the crazy EGA/CGA/VGA weirdness–it was a total jungle. Even today, the ARM/x64 split for mobile game ports vs desktop games is still a thing.

                              Duke Nukem Forever?

                              One of the outliers that proves the general rule that for most games a schedule slip is a Big Deal.

                              The Unreal engine exists at all because it started as a completely new implementation of an FPS engine rather than licensing an existing one.

                              At the time the two competitors were Carmack’s Quake engine and Silverman’s Build engine. Many people did license those engines (and make great games!). Many people rolled their own (for example, Bungie and Looking Glass/Irrational). We learned a whole lot from the experiences of both camps.

                              Also, Unreal dates back to 1995. There are plenty of old codebases, but that’s one that’s still showing its age. Quake code of a similar vintage is still kicking around.

                              Game companies have largely followed industry trends here, not led them.

                              Game companies have ranged from AAA studios looking a lot like big software houses, to garage-band teams sharing a dorm room, to indie teams coordinating lots of developers on multiple continents, both for mods and for paid work done on tight budgets. The point is, there’s a whole lot of information out there about what worked and didn’t work.

                              Most of the game VMs learned from these, not the other way around.

                              I didn’t claim otherwise. My point is that for widely-deployed VMs, you can learn a lot from what worked and didn’t work over in games. Id engines, for example, switched from no VM (Doom/Wolf3d) to VM (Quake) to no VM (Quake 2) to hybrid (Quake 3). There is interesting knowledge to be had by seeing why that choice was made.


                              I don’t know what everybody’s deal out here is, but this whole thread has just been remarkably disappointing to read. My point was there is a lot to learn from the games industry. That doesn’t mean blindly aping practices or tech, that doesn’t mean they’ve discovered the One True Way of doing anything–in fact, a lot of their practices like crunch are abhorrent.

                              But…the industry is a pressure cooker for software engineering and has created a ton of data, both examples and counterexamples, about what seems to work and what doesn’t. There is also for many studios a culture of public post-mortems, sharing what they’ve learned about during their development.

                              As per my original post, we ignore that experience at our peril.

                              1. 2

                                I don’t know what everybody’s deal out here is, but this whole thread has just been remarkably disappointing to read. My point was there is a lot to learn from the games industry. That doesn’t mean blindly aping practices or tech, that doesn’t mean they’ve discovered the One True Way of doing anything–in fact, a lot of their practices like crunch are abhorrent.

                                I suspect that a lot of the negativity comes from the fact that you are making a load of claims about the games industry as if those things were unique to the games industry. Every single thing that you’ve listed is something that I’ve seen, at scale, in non-games software companies, often including things that they’ve been doing since the late ’80s, and including companies that range from one person and a few of his friends up to trillion-dollar enterprises. Your post comes across as saying that everyone should learn from you while simultaneously indicating that you are unwilling to learn from anyone else.

                                1. 1

                                  Part of the cause for my disappointment is exactly due to your observation: at no point did I say we don’t have lots to learn from other parts of software–I just said that gamedev has a lot to offer and gave examples I figured would substantiate that claim for the skeptical.

                                  1. 2

                                    The examples that you give are all things that the rest of the industry has done well for decades. By framing them as things that the game developer ecosystem can teach everyone else, you are implicitly framing your argument as if they were things the rest of the industry needs to learn. Worse, some of your examples are things where the game industry was one of the later adopters. This comes across as dismissive of the experiences and skills of others.

                              2. 1

                                Games are all about taking a long path to achieve something, productive UIs are the opposite of this.

                                Obvious counter-example: Factorio. It’s a logistics optimisation game where you construct an ever growing factory, and there is a lot of stuff to do and automate. A lot of thought has been put into making the user interface as productive as possible (within the constraints of the game world, which I admit are artificial), and from my personal experience it made a big difference.

                        3. 1

                          “People like you are the reason we never ship”.

                          As pointed out elsewhere, the games industry has a great many failings but not shipping is very seldom one of them.

                        4. 4

                          That would be the great “Data-Oriented Design and C++” talk at CppCon14 https://youtu.be/rX0ItVEVjHc?t=4681 (it’s at the right time ;) )

                          1. 1

                            That is a great talk. One slide gave me pause, though; he says they don’t use the following parts of C++:

                            • Exceptions
                            • Templates
                            • iostream
                            • Multiple inheritance
                            • Operator overloading
                            • RTTI
                            • STL

                            At that point… why not just use C?

                            1. 3

                              He actually goes on to answer this very question when someone in the audience asks it. Basically he would personally prefer C99, but the team uses C++ for convenience. (He even points out that MSVC supports C++ better than C.)

                              1. 1

                                Oh, I stopped the video before the end of the audience questions as I wasn’t really finding them very enlightening… I should have stuck it out to the end!

                              2. 1

                                At that point… why not just use C?

                                For a great many years, games industry C++ at many shops could be charitably described as “C with classes”.

                                This makes a great deal of sense when you consider both the quality of vendor and Microsoft compilers at the time, as well as how poorly the common STL implementations of the era matched the requirements of game development.

                                1. 2

                                  I’d agree with everything on that list except templates. The first thing I ever saw that convinced me I should look at templates seriously was the code for a game engine, which had a load of clean abstractions for vector and matrix arithmetic and (after inlining) compiled down to an incredibly efficient set of MMX operations (which should give you an idea of how long ago this was). A single tweak to a template parameter let them build MMX and non-MMX versions of the hot parts of the engine and select the version to use at load time via cpuid. The only way of getting that mix of performance and usability without templates is to have something else generate the C++ code from a higher level abstraction.
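                                  The shape of that trick, reduced to a toy (both “backends” here are scalar; imagine intrinsics behind the template parameter): the arithmetic is written once, each instantiation compiles to its own specialised routine, and a function pointer chosen at load time (e.g. after a cpuid check) selects between them.

                                  ```cpp
                                  #include <array>

                                  // The math is written once against a template parameter; a real engine
                                  // would use `if constexpr (UseFastPath)` to pick an intrinsics path at
                                  // compile time, giving one specialised routine per instantiation.
                                  template <bool UseFastPath>
                                  float dot4(const std::array<float, 4>& a, const std::array<float, 4>& b) {
                                      float s = 0;
                                      for (int i = 0; i < 4; ++i) s += a[i] * b[i];
                                      return s;
                                  }

                                  // Function pointer chosen once at load time, e.g. after a cpuid check
                                  // (the check itself is elided here; both instantiations exist either way).
                                  float (*dot4_dispatch)(const std::array<float, 4>&, const std::array<float, 4>&) = dot4<true>;
                                  ```

                                  This is only a sketch of the dispatch shape, not the engine code I saw, but it’s the same idea: one template parameter, multiple compiled variants, runtime selection.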

                                  For high-performance code today, C++ templates are my most useful tool for creating maintainable code that executes fast. Most recently, I wrote a C++ implementation of memcpy that outperforms hand-written assembly implementations on multiple architectures and yet defines the architecture-specific bits by tuning a few template parameters. Writing it with templates made it possible to do a parameter sweep of a bunch of heuristics and find the sweet spots very quickly. I could write the same code in the end without templates but it would be a lot more fragile.

                                  1. 1

                                    I think his argument against templates was mostly about how much they slow down the build process (and possibly the increase in code size? I can’t remember now if he said that or not). In the past, I’ve used templates with explicit instantiation: at least then you are always made aware of when you’re adding another implementation to the code.
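                                    For anyone unfamiliar with the pattern, explicit instantiation looks roughly like this (toy function, my naming): only the listed specialisations are emitted, so each new instantiation is a visible, deliberate addition rather than something the compiler conjures at every call site.

                                    ```cpp
                                    // A small function template; with explicit instantiation below it is
                                    // not implicitly stamped out at each use site.
                                    template <typename T>
                                    T clamp_min(T v, T lo) { return v < lo ? lo : v; }

                                    // Explicit instantiations: exactly these versions exist in this
                                    // translation unit, keeping code size and build cost visible.
                                    template float clamp_min<float>(float, float);
                                    template int clamp_min<int>(int, int);
                                    ```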

                                    I have mixed feelings about templates but Eigen is great.

                                    1. 1

                                      Compile time is definitely an issue with templates, but often it’s a trade of compile time against run time and, in general, people run code more times than they build it[1] and so a 100% increase in compile times in exchange for a 1% speedup is a big win (for release builds, at least). The code size issue is definitely important because this can make it easy to write code that does well in microbenchmarks and very badly in macrobenchmarks. In general, my recommendation for templates is to always add explicit always- or never-inline attributes on every method in a templated class and on every templated function so that you think about where the cost is paid. If your method is not either small, or exposes opportunities to make it small after inlining, then templates are probably the wrong tool for the job and you should use dynamic dispatch.

                                      The last point is much easier in Rust than C++, where you can switch between dynamic dispatch and compile-time reification with a single attribute.

                                      [1] Okay, that’s not true for a bunch of my code, but it is for anything in production.

                                  2. 1

                                    I can completely understand why they would eschew all those parts of C++, I just couldn’t understand why they would use C++ at all if they’re left with “C with classes” and - as he explained in that talk - he’s not a fan of OO-style classes.

                                    However, it turns out (thanks @vamolessa) that Mike Acton would actually prefer to be using C, so the world makes sense again. Phew.

                          2. 2

                            The main trouble with these kinds of checklists is: when to apply which rules. If you go through this list for every little code change, you get death by procedure/bureaucracy. But not doing something consistently usually means it is only haphazardly applied or not at all. So how do you operationalize such checklists?

                            1. 4

                              I’d argue it isn’t so much a “checklist” as it is “these are the sorts of questions/continuing internal critiques/principles that a good engineer should have at all times when working”.

                              At some point, we’ve gotta earn the salaries we’re getting paid.

                              1. 2

                                It doesn’t work that way. You could say the same for doctors, yet explicit checklists improve healthcare outcomes. You need to explicitly think about it; implicit judgments miss a lot.

                                Moreover, many of these principles aren’t things you ‘just know’ as a good engineer. Take number 3: that requires explicitly having a colleague explain something to you, so they can verify they’ve understood.