I basically agree with you, but I’ll point out that this is a “weak man” argument, in that he’s criticizing an actually existing set of guidelines. They don’t do justice to the general viewpoint he’s arguing against, but they’re not a made up position–someone really was offering code that bad!
It’s a remarkable thing that a lot of “clean code” examples are just terrible (I don’t recognize these, but Bob Martin’s are quite awful). I think a significant problem is working with toy examples–Square and Circle and all that crap (there’s a decent chance that neither the readers nor the authors have ever implemented production code with a circle class).
Conversely, I’d like to see Muratori’s take on an average LOB application. In practice, you’ll get more than enough mileage out of avoiding n+1 queries in your DB that you might never need to worry about the CPU overhead of your clean code.
Edit: clarifying an acronym. LOB = Line of Business–an internal tool, a ticket tracker, an accounting app, etc.–something that supports a company doing work, but is typically not at huge scale.
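Edit 2: to make the N+1 point concrete, a minimal sketch. The db helpers and row types here are made up for illustration, not any real library:

    #include <string>

    // Hypothetical data-access helpers and types, for illustration only.
    struct Order    { int id; int customer_id; };
    struct Customer { int id; std::string name; };

    // N+1: one query for the orders, then one more query per order.
    for (const Order& o : db.query<Order>("SELECT * FROM orders")) {
        Customer c = db.queryOne<Customer>(
            "SELECT * FROM customers WHERE id = ?", o.customer_id); // runs N times
        render(o, c);
    }

    // One round trip: let the database do the join.
    for (const auto& row : db.query<OrderRow>(
            "SELECT o.id, c.name FROM orders o"
            " JOIN customers c ON c.id = o.customer_id")) {
        render(row);
    }

Each round trip pays the full network latency, so the first version costs N+1 latencies where the second costs one. At a few milliseconds per trip, that swamps any CPU-level cleverness in the app code.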
Conversely, I’d like to see Muratori’s take on an average LOB application. In practice, you’ll get more than enough mileage out of avoiding n+1 queries in your DB that you might never need to worry about the CPU overhead of your clean code.
I have not enough upvotes to give.
It’s really not that performance isn’t important - it really really is! It’s just that, what goes into optimizing a SQL query is totally different than what goes into optimizing math calculations, and games have totally different execution / performance profiles than business apps. That context really matters.
But you don’t understand! These guys like Muratori and Acton are rockstar guru ninja wizard god-tier 100000000x programmers who will walk into the room and micro-optimize the memory layout of every object in your .NET web app.
And if you dare to point out that different domains of programming have different performance techniques and tradeoffs, well, they’ll just laugh and tell you that you’re the reason software is slow.
Just don’t mention that the games industry actually has a really terrible track record of shipping correct code that stays within the performance bounds of average end-user hardware. When you can just require the user to buy the latest top-of-the-line video card three times a year, performance is easy!
(this is only half joking – the “you’re the reason” comment by Acton gets approvingly cited in lots of these threads, for example)
When you can just require the user to buy the latest top-of-the-line video card three times a year, performance is easy!
You cite Acton who, working at Insomniac, primarily focused on console games–you know, where you have a fixed hardware budget for a long time and can’t at all expect upgrades every few months. So, bad example mate.
And if you dare to point out that different domains of programming have different performance techniques and tradeoffs, well, they’ll just laugh and tell you that you’re the reason software is slow.
Acton’s been pretty upfront about the importance of knowing one’s domain, tools, and techniques. The “typical C++ bullshit” he’s usually on about is caused by developers assuming that the compiler will do all their thinking for them and cover up for a lack of basic reasoning about the work at hand.
And yet he is held up again and again as an example of how the games industry is absolutely laser-focused on performance. Which is a load of BS. The games industry is just as happy as any other field of programming to consume all available cycles/memory, and to keep demanding more. And the never-ending console upgrade treadmill is slower than the never-ending PC video card upgrade treadmill, but it exists nonetheless and is driven by the needs of game developers for ever-more-powerful hardware to consume.
Acton’s been pretty upfront about the importance of knowing one’s domain, tools, and techniques.
The “you’re the reason why” dunk from Acton was, as I understand it, a reply to someone who dared suggest to him that other domains of programming might not work the same way his does.
The “you’re the reason why” dunk from Acton was, as I understand it, a reply to someone who dared suggest to him that other domains of programming might not work the same way his does.
He was objecting to someone asserting that Acton was working in a very narrow, specialised field. That in most cases performance is not important. This is a very widespread belief, and a very false one. In reality performance is not a niche concern. When I type a character, I’d better not notice any delay. When I click on something I’d like a reaction within 100ms. When I drag something I want my smooth 60 FPS or more. When I boot a program it’d better take less than a second to start unless it has a very good reason to make me wait.
Acton was unkind. I feel for the poor guy. But the attitude of the audience member, when checked against reality, is utterly ridiculous, and deserves to be publicly ridiculed. People need to laugh at the idea that performance does not matter most of the time. And people who actually subscribed to this ridiculous belief should be allowed to pretend they didn’t really believe it.
Sufficient performance rarely requires actual optimisations, but it always matters.
He was objecting to someone asserting that Acton was working in a very narrow, specialised field. That in most cases performance is not important.
Well. This:
As I was listening to your talk, I realise that you are working in a different kind of environment than I work in. You are working in a very time constrained, hardware constrained environment.
[elided rant]
Our company’s primary constraint is engineering resources. We don’t worry so much about the time, because it’s all user interface stuff. We worry about how long it takes a programmer to develop a piece of code in a given length of time, so we don’t care about that stuff. If we can write a map in one sentence and get the value out, we don’t care really how long it takes, so long —
Then Mike Acton interrupted with
Okay, great, you don’t care how long it takes. Great. But people who don’t care how long it takes is also the reason why I have to wait 30 seconds for Word to boot for instance.
Meanwhile: I bet Mike Acton doesn’t work exclusively in hand-rolled assembly. I bet he probably uses languages that are higher-level than that. I bet he probably uses tools that automate things in ways that aren’t the best possible way he could have come up with manually. And in so doing he’s trading off some performance for some programmer convenience. Can I then retort to Mike Acton that people like him are the reason some app he didn’t even work on is slow? Because when we reflect on this, even the context of the quote becomes fundamentally dishonest – we all know that he accepts some kind of performance-versus-developer-convenience tradeoffs somewhere. Maybe not the same ones accepted by the person he was trying to dunk on, but I guarantee there are some. But haggling over which tradeoffs are acceptable doesn’t let him claim the moral high ground, so he has to dunk on the idea of the tradeoff itself.
So again: “People like you are the reason that it takes 30 seconds to open Word” is not a useful statement. It’s intended solely as a conversation-ending bludgeon to make the speaker look good and the interlocutor look wrong. There’s no nuance in it. There’s no acknowledgment of complexity or tradeoffs or real-world use cases. Ironically, in that sense it almost certainly violates some of his own “expectations” for “professionals”. And as I’ve explained, it’s inherently dishonest!
And in so doing he’s trading off some performance for some programmer convenience. Can I then retort to Mike Acton that people like him are the reason some app he didn’t even work on is slow?
According to his talks, he 100% uses higher-level languages where it makes sense–during the programming. The actual end-user code is kept as fast as he can manage. It’s not a sin to use, say, emacs instead of ed provided the inefficiency doesn’t impact the end-user.
the interlocutor look wrong.
The interlocutor was wrong, though. They said:
If we can write a map in one sentence and get the value out, we don’t care really how long it takes,
That’s wrong, right? At the very least, it’s incredibly selfish–“We don’t care how much user time and compute cycles we burn at runtime if it makes our job at development time easier”. That’s not a tenable position. If he’d qualified it as “in some percent of cases, it’s okay to have thus-and-such less performance at runtime due to development speed concerns”, that’d be one thing…but he made a blanket statement and got justifiably shut down.
(Also: you never did answer, in the other thread, if you have direct first-hand experience of this stuff beyond Python/Django. If you don’t, it makes your repeated ranting somewhat less credible–at least in my humble opinion. You might lack experience with environments where there really is a difference between costs incurred during development time vs compile time vs runtime.)
It’s not a sin to use, say, emacs instead of ed provided the inefficiency doesn’t impact the end-user.
And yet when someone came forward saying they had a case where inefficiency didn’t seem to be impacting the end-user, he didn’t sagely nod and agree that this can be the case. Instead he mean-spiritedly dunked on the person.
Also: you never did answer, in the other thread, if you have direct first-hand experience of this stuff beyond Python/Django.
Of what stuff? Have I written C? Yes. I don’t particularly like or enjoy it. Same with a variety of other languages; the non-Python language I’ve liked the most is C#.
Have I written things that weren’t web apps? Yes, though web apps are my main day-to-day focus at work.
If you don’t, it makes your repeated ranting somewhat less credible–at least in my humble opinion.
Your repeated attempts at insulting people as a way to avoid engaging with their arguments make your comments far less credible, and I neither feel nor need any humility in telling you that.
Anyway, I’ve also pointed out, at length, multiple times, why the games industry is very far from being a paragon of responsible correctness-and-performance-focused development, which would make your entire attempt to derail the argument moot.
So. Do you have an actual argument worthy of the name? Or are you just going to keep up with the gatekeeping and half-insulting insinuations?
And yet when someone came forward saying they had a case where inefficiency didn’t seem to be impacting the end-user, he didn’t sagely nod and agree that this can be the case.
Check your transcript, you’re putting words into the dude’s mouth. The dude said, specifically, that they didn’t care at all about UI performance. Not that the user didn’t care–that the developers didn’t care. He was right to be slapped down for that selfishness.
Your repeated attempts at insulting people as a way to avoid engaging with their arguments make your comments far less credible, and I neither feel nor need any humility in telling you that.
As I said in the other thread, it has some bearing here–you’ve answered the question posed, so I can discuss accordingly. If you get your feathers ruffled at somebody wondering about the perspective of somebody who only lists python and webshit on their profile in a discussion on optimizing compiled languages and game development, well, sorry I guess?
I’ve also pointed out, at length, multiple times, why the games industry is very far from being a paragon of responsible correctness-and-performance-focused development
I think you’ve added on the “correctness” bit there, and correctness is something that has a large enough wiggle room (do we mean mathematically sound? do we mean provable? do we mean no visible defects? do we mean no major defects?) that I don’t think your criticism is super helpful.
The performance bit, as you’ve gone into elsewhere, I understand as “Some game developers misuse tools like Unity to make slow and shitty games, so clearly no game developer cares about performance”. I don’t think that’s a fair argument, and I especially think it’s incorrect with regards specifically here to Acton who has built an entire career on caring deeply about it.
Also, you didn’t really answer my observation about development time inefficiency versus runtime inefficiency and how it applied in Acton’s case.
Anyways, this subthread has gotten long and I don’t think we’re getting much further with anything. Happy to continue the discussion over DMs.
If you get your feathers ruffled at somebody wondering about the perspective of somebody who only lists python and webshit on their profile
You’re the one who can’t even refer to web development without needing to use an insulting term for it.
a discussion on optimizing compiled languages and game development
Well, no. The claim generally being made here is that people like Acton uniquely “care about performance”, that they are representative of game developers in general, and thus that game developers “care about performance” while people working in other fields of programming do not. And thus it is justified for people like Acton to dunk on people who explain that the tradeoffs are different in their field (and yes, that’s what the person was trying to do – the only way to get to the extreme “doesn’t care” readings is to completely abandon the principle of charity in pursuit of building up a straw man).
Which is a laughable claim to make, as I’ve pointed out multiple times. Game dev has a long history of lackluster results, and of relying on the hardware upgrade treadmill to cover for performance issues – just ship it and tell ’em to upgrade to the latest console or the latest video card!
And it is true that the performance strategies are different in different fields, but Acton is the one who did a puffed-up “everyone who doesn’t do things my way should be fired” talk – see the prior thread I linked for details – which tried to impose his field’s performance strategies as universals.
Like I said last time around, if he walked into a meeting of one of my teams and started asking questions about memory layout of data structures, he wouldn’t be laughed out of the room (because that’s not how my teams do things), but he would be quietly pulled aside and asked to get up to speed on the domain at hand. And I’d expect exactly the same if I walked into a meeting of a game dev team and arrogantly demanded they list out all the SQL queries they perform and check whether they’re using indexes properly, or else be fired for incompetence.
[quote] Is how you chose to describe it last time we went round and round about this.
Ah, thanks for the link. My only choice there was the elision, it was otherwise an exact transcript.
“People like you are the reason that it takes 30 seconds to open Word” is […] intended solely as a conversation-ending bludgeon to make the speaker look good and the interlocutor look wrong.
I agree partially. It is a conversation-ending bludgeon, but it was not aimed directly at the interlocutor. Acton’s exact words were “But people who don’t care how long it takes is also the reason why I have to wait 30 seconds for Word to boot for instance.” He was ridiculing the mindset, not the interlocutor. Which by the way was less unkind than I remembered.
I know it rubs you the wrong way. I know it rubs many people the wrong way. I agree that by default we should be nice to each other. I do not however extend such niceties to beliefs and mindsets. And to be honest the mindset that by default performance does not matter only deserves conversation-ending bludgeons: performance matters and that’s the end of it.
And to be honest the mindset that by default performance does not matter
Except it’s a stretch to say that the “mindset” is “performance does not matter”. What the person was very clearly trying to say was that they work in a field where the tradeoffs around performance are different. Obviously if their software took, say, a week just to launch its GUI they’d go and fix that because it would have reached the point of being unacceptable for their case.
But you, and likely Acton, took this as an opportunity to instead knock down a straw man and pat yourselves on the back and collect kudos and upvotes for it. This is fundamentally dishonest and bad behavior, and I think you would not at all enjoy living in a world where everyone else behaved this way toward you, which is a sure sign you should stop behaving this way toward everyone else.
Other Lobsters are entitled to the original context, so here it is: Mike Acton was giving a keynote at CppCon. After the keynote it was time for questions, and one audience member came with this intervention. Here’s the full transcript:
As I was listening to your talk, I realise that you are working in a different kind of environment than I work in. You are working in a very time constrained, hardware constrained environment.
As we all know the hardware, when you specify the hardware as a platform, that’s true because that’s the API the hardware exposes, to a developer that is working inside the chip, on the embedded code, that’s the level which is the platform. Ideally I suspect that the level the platform should be the level in which you can abstract your problem, to the maximum efficiency. So I have a library for example, that allows me to write code for both Linux and Windows. I have no problem with that.
Our company’s primary constraint is engineering resources. We don’t worry so much about the time, because it’s all user interface stuff. We worry about how long it takes a programmer to develop a piece of code in a given length of time, so we don’t care about all that stuff. If we can write a map in one sentence and get the value out, we don’t care really how long it takes, as long as it’s… you know big O —
Acton then interrupts:
Okay, great, you don’t care how long it takes. Great. But people who don’t care how long it takes is also the reason why I have to wait 30 seconds for Word to boot for instance.
But, you know, whatever, that’s your constrain, I mean, I don’t feel that’s orthogonal to what we do. We worry about our resources. We have to get shit out, we have to build this stuff on time, we worry about that. What we focus on though is the most valuable thing that we can do, we focus on understanding the actual constraints of the problem, working to actually spend the time to make it valuable, ultimately for our players, or for whoever your user is. That’s where we want to bring the value.
And performance matters a lot to us, but I would say, any context, for me personally if I were to be dropped into the “business software development model” I don’t think my mindset would change very much, because performance matters to users as well, in whatever they’re doing.
Thanks for including the full transcript. I didn’t realize the questioner was a “Not a question, just a comment” guy and don’t think we should judge Mike Acton based on how he responded. Comment guys suck.
This is a guy who thought “Everyone Watching This Is Fired” was a great title for a talk. And not just that, a talk that not only insisted everyone do things his way, but insisted that his way is the exclusive definition of “professional”.
I worked with GUIs in a past life, and debugged performance problems. I’ve never seen a 30-second wait that was primarily due to user interface code, or even a five-second wait.
One can always discuss where in the call stack the root of a problem is. The first many-second delay I saw (“people learn not to click that button when there are many patients in the database”) was due to a database table being sorted as part of processing a button click. IMO sorting a table in UI code is wrong, but the performance problem happened because the sorting function was somewhere between O(n²) and O(n³): a fantastic, marvellous, awesome sorting algorithm. I place >90% of the blame on the sorting algorithm and <10% on the UI code that called the function that called the sorting marvel.
In my experience, delays that are primarily due to UI code are small. If the root reason is slow UI code, then the symptoms are things like content updating slowly enough that you can see jerking. Annoying, sure, but I agree that quick development cycles can justify that kind of problem.
The many-second delays that Acton ascribes to slow UI code are more typically due to UI code calling some sparsely documented function that turns out to have problems related to largish data sets. Another example I remember is a function that did O(something) RPCs and didn’t mention that in its documentation. Fine on the developers’ workstations, not so fine for production users with a large data set several ms away.
I’m sure Mike Acton conflated the two by mistake. They’re not the same though.
I realise that neither the audience member nor Mike Acton mentioned GUI specifically. Acton merely observes that because some people don’t care enough about performance, some programs take an unacceptably long time to boot. I’m not sure Acton conflated anything there.
I might have.
As for the typical causes of slowdowns, I think I largely agree with you. It’s probably not the GUI code specifically: Even Electron can offer decent speed for simple applications on modern computers. It’s just so… huge.
Acton is still taking the least charitable possible interpretation and dunking on that. And he’s still doing it dishonestly – as I have said, multiple times now, it’s an absolute certainty that he also trades off performance for developer convenience sometimes. And if I were to be as uncharitable to him as he is to everyone else I could just launch the same dunk at him, dismiss him as someone who doesn’t care about performance, say he should be fired, etc. etc.
And the never-ending console upgrade treadmill is slower than the never-ending PC video card upgrade treadmill, but it exists nonetheless and is driven by the needs of game developers for ever-more-powerful hardware to consume.
If you compare the graphics of newer games to older ones, generally the newer ones have much more detail. So this improved hardware gets put to good use. I’m sure there are plenty of games that waste resources, too, but generally speaking I’d say things really are improving over time.
I haven’t touched a new game in 10, 15 years or so, but I hear games like The Last of Us win critical acclaim, probably due to their immersiveness which is definitely (also) due to the high quality graphics. But that wasn’t really the point - the point was that the hardware specs are put to good use.
I dunno, I feel like games from ten years ago had graphics that were just fine already. And really even 15 years ago or more, probably. We’re mostly getting small evolutionary steps – slightly more detail, slightly better hair/water – at the cost of having to drop at least hundreds and sometimes thousands of dollars on new hardware every couple of years.
If I were the Mike Acton type, I could say a lot of really harsh judgmental things about this, and about the programmers who enable it.
These guys like Muratori and Acton are rockstar guru ninja wizard god-tier 100000000x programmers who will walk into the room and micro-optimize the memory layout of every object in your .NET web app.
Muratori is on the record saying, repeatedly, that he’s not a great optimiser. In his Refterm lecture he mentions that he rarely optimises his code (about once a year).
It’s just that, what goes into optimizing a SQL query is totally different than what goes into optimizing math calculations, and games have totally different execution / performance profiles than business apps.
The specifics differ (e.g., avoiding L2 cache misses vs. avoiding the N+1 problem), but the attitude of the programmers writing those programs matters also. If their view is one where clean code matters, beautiful abstractions are important, solving the general problem instead of the concrete instance is the goal, memory allocations are free, etc. they will often miss a lot of opportunities for better performance.
I will also add that code that performs well is not antithetical to code that is easy to read and modify. Fully optimized code might be, but we can reach very reasonable performance with simple code.
I think it’s subsequently been deleted, but I remember a tweet that showed a graph, clarity on the y axis, performance on the x axis.
Near the origin: initial code; poorly written, and slow.
Up and to the right: first passes of optimization often make the code nicer, better structured, remove duplication, pointless objects.
Back down and to the right, near the x-axis: chasing harder to find optimizations, adding complexity for the sake of performance. But much faster than the starting phase. “PM should start to worry here”
Sharply down and a little more to the right: “lovecraftian madness, terrifying code distorted for the sake of eking out the last drop of performance improvement.”
I would say that in most cases it still fundamentally boils down to how data is laid out, accessed, and processed. Whether it’s an SQL query or later processing of data loaded from an SQL query, focus on data layout and accesses is the core of a lot of performance.
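For instance, the same toy area computation laid out two ways (my sketch, not anyone’s production code):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Array-of-structs: the hot loop drags every object's cold fields
    // (here `name`) through the cache along with the three floats it needs.
    struct ShapeAoS {
        float width, height, coeff;
        char  name[64]; // cold data, but it shares the cache line
    };

    float TotalAreaAoS(const std::vector<ShapeAoS>& shapes) {
        float total = 0.0f;
        for (const auto& s : shapes) total += s.coeff * s.width * s.height;
        return total;
    }

    // Struct-of-arrays: the same loop now streams through densely packed
    // floats; every cache line fetched is entirely useful data.
    struct ShapesSoA {
        std::vector<float> width, height, coeff;
        std::vector<std::string> name; // cold data lives elsewhere
    };

    float TotalAreaSoA(const ShapesSoA& s) {
        float total = 0.0f;
        for (std::size_t i = 0; i < s.width.size(); ++i)
            total += s.coeff[i] * s.width[i] * s.height[i];
        return total;
    }

Same algorithm, same big O; the only thing that changed is which bytes sit next to each other.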
I’m normally a fan of Casey’s work, but yeah this one’s not great. The clean code vs fast code battle doesn’t really help the article, and since he “only” demonstrates a 15x speedup it doesn’t actually address the problem we face today.
If all the software I used was 15x slower than optimal, almost everything I did on my computer would complete instantly, but in reality I would guess the majority of the software I use ranges from 1000x to tens of millions of times too slow. Here are some examples I got annoyed by in the last few days:
Warm starting a terminal emulator that has high performance as a selling point takes 2 seconds (not including shell startup time! imagine if initialising an empty text box took 2s everywhere else!!)
For reference Terminal.app takes 3s cold/1s warm
Typing in the same terminal emulator on Linux has enough latency to reduce my typing accuracy
Cold starting lldb on a Metal hello world app takes 29 seconds to launch the application
Warm starting lldb on the Metal hello world app takes 6 seconds
Cold switching between categories in the macOS system preferences app takes 2-5s
Stepping over code that takes less than 1 nanosecond to execute in the VSCode debugger has perceptible latency
(These measurements were taken by counting seconds in my head because even with 50% error bars they’re still ridiculous, on a laptop whose CPU/RAM/SSD benchmark within 10x of the best desktop hardware available today)
That sounds ridiculously slow, and to me mostly macOS-related. I’m on a decent laptop running Ubuntu and don’t experience such crazy delays.
Which “fast” terminal emulator are you using? I’m using alacritty’s latest version, and can’t notice any delay in either startup or typing latency (and I’m usually quite sensitive to that). Even gnome-terminal starts reasonably fast, and it’s not known for being a speed beast.
For the lldb test, I get <1s both cold and warm, no noticeable difference between the two.
A nonzero amount of it is on macOS yes, e.g. the first time you run lldb/git/make/etc after a reboot it has to figure out which of the one version of lldb/git/make/etc you have installed it should use, which takes many seconds. But it is at least capable of warm booting non-trivial GUI apps in low-mid hundreds of ms so we can’t put all the blame there.
Which “fast” terminal emulator are you using?
It’s alacritty. I’ve tried it on all 3 major OS with mixed results. On Windows startup perf is good, typing latency is good but everything is good at 280Hz, and it’s so buggy as to be near unusable (which makes it the second best Windows terminal emulator out of all the ones I’ve tried). On macOS startup perf is bad but typing latency is fine. On Linux startup perf is fine but typing latency makes it unusable.
I agree that performance matters, but what’s going on here amounts to optimization, and optimization does not always lead to simpler code. In fact, there’s a statement in here:
And to do this, we used nothing other than one table lookup and a single line of code! It’s not only much faster, it’s also much less semantically complex. It’s less tokens, less operations, less lines of code.
I don’t actually agree that the solution there is semantically simpler. Fewer AST tokens and fewer lines of code are measures of syntax, not semantics. I view the semantics of this table version as more complex, with the definition of complex being “things that are intertwined.” The structure of each shape is now completely intertwined with this clever coefficient calculation. What happens if we want to calculate the area of a trapezoid? (1/2(a + b) * h). It doesn’t fit neatly into the CTable[Shape.Type]*Shape.Width*Shape.Height calculation now.
The purpose of separating code via polymorphism is that, instead of finding some clever common expression of a varied problem, you can just have each thing run its own calculations, separately. Sometimes separating calculations like this is more organized, and sure I guess that depends on your personal aesthetic. But, for the new trapezoid calculation, it’s as simple as defining the new calculation, with no coupling to the existing other shapes.
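To make that concrete, here’s roughly what the two versions look like (my own sketch, not the article’s exact listing):

    // Table version: one formula to rule them all.
    enum ShapeType { Square, Rectangle, Triangle, Circle, ShapeTypeCount };
    const float CTable[ShapeTypeCount] = { 1.0f, 1.0f, 0.5f, 3.14159f };
    struct ShapeU { ShapeType type; float width, height; };
    float Area(const ShapeU& s) { return CTable[s.type] * s.width * s.height; }

    // Polymorphic version: each shape owns its own formula, so the
    // trapezoid's 1/2(a + b) * h slots in without touching anything else.
    struct Shape { virtual float Area() const = 0; virtual ~Shape() = default; };
    struct Trapezoid : Shape {
        float a, b, h; // two parallel sides and the height
        float Area() const override { return 0.5f * (a + b) * h; }
    };

The trapezoid needs three inputs, so it can’t be expressed as coefficient * width * height; the table version has to grow a special case, while the polymorphic version doesn’t change at all.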
I get that a 10x performance improvement is presented here, and that is definitely worth listening to. But, that’s also what you get when you focus on one dimension and treat other dimensions as less important. Performance is important, but it’s not the only dimension.
Finally, I think the root of this issue is that “clean” code attempts to be more specification-like, and that obviously can come with a performance cost. I think it’s better to legitimately just separate the specification from the implementation, and then you get the best of both worlds. You have a spec that’s focused on clarity and connection to the problem domain, and you can optimize the implementation all you want and not care about clarity because you have a spec to fall back on. The issue is that we’re trying to jam all of these different dimensions into one artifact: a single program’s source code.
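Concretely, the kind of separation I mean (a sketch, reusing the shapes example):

    #include <cassert>
    #include <cmath>

    enum Type { Square, Rectangle, Triangle, Circle, TypeCount };
    struct Shape { Type type; float width, height; };
    const float Pi = 3.14159f;

    // Specification: grouped per shape, reads like the textbook.
    float AreaSpec(const Shape& s) {
        switch (s.type) {
            case Square:    return s.width * s.height;      // width == height
            case Rectangle: return s.width * s.height;
            case Triangle:  return 0.5f * s.width * s.height;
            case Circle:    return Pi * s.width * s.height; // width == height == radius
            default:        return 0.0f;
        }
    }

    // Implementation: optimized however we like, e.g. the table lookup.
    const float CTable[TypeCount] = { 1.0f, 1.0f, 0.5f, Pi };
    float AreaFast(const Shape& s) { return CTable[s.type] * s.width * s.height; }

    // The spec is the fallback: the implementation has to agree with it.
    void Check(const Shape& s) {
        assert(std::fabs(AreaFast(s) - AreaSpec(s)) < 1e-5f);
    }

You can then optimize AreaFast as hard as you want and never lose the readable statement of what the program is supposed to compute.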
I agree that performance matters, but what’s going on here amounts to optimization—
It does not (amount to optimisation). This is just avoiding making the program slow with unnecessary indirections & obliviousness to the particulars of the problem you’re solving. You can do that and keep the code very simple. And as far as I can tell he did.
I fail to see how this line of code has anything to do with the problem domain:
If you can’t understand a 6-line function I can’t help you.
But don’t be ridiculous, you do understand it, and you know how this line of code relates to the problem domain. It’s pretty obvious from the context surrounding it, you’re just choosing to play dumb and ignore it.
The problem domain is calculating the areas of shapes. I have never, ever heard of storing a list of coefficients together in any conversation about geometry. I didn’t say that I don’t understand the code, I said that it is not a faithful representation of the problem domain of geometry.
If you look up shape calculations in any math textbook, it would have the area calculations grouped together with each shape. In fact, that’s the organization of the first link I clicked when searching “math shape areas.” That is the clear domain-oriented grouping, whereas the code is grouped for efficient calculation. Aka it is optimized.
Other than that, you seem to be quite angry about this conversation. It helps to stay focused on the actual points of discussion, vs. having a conspiracy theory about my intent here. And it’s also an ok outcome for us to disagree.
It helps to stay focused on the actual points of discussion, vs. having a conspiracy theory about my intent here.
I have two choices here: either I believe you when you say you don’t see how this table relates to the problem domain… or I don’t. You either are dumb, or play dumb. If you are dumb (or untrained), I can’t help you, and the conversation stops. Re-watch the videos, do some more programming, go back to school, that kind of thing. If you play dumb we can still talk but then admit you were lying for whatever reason.
My hypothesis right now is that you were playing dumb for rhetoric effect. It kind of got to me.
I have never, ever heard of storing a list of coefficients together in any conversation about geometry.
I understand this line of code is not obvious at a glance. If this was real code and not a video, a comment would definitely help. Still, who cares that storing lists of coefficients doesn’t come up in conversations about geometry? This is not a conversation about geometry, this is a demonstration of how we might tell the computer to solve a specific problem, which happens to involve geometry. And this table-driven stuff is relatively common; it’s applied to all kinds of problems.
You dispute the simplicity of this code, but it looks like your real reason is a lack of familiarity. Programmers familiar with table-driven code would on the other hand feel this is very simple, almost obvious. But feelings are an unreliable metric. Code size, however, is more objective than we realise: it’s one of the best proxies for complexity we have: very cheap to measure, and strongly correlated with pretty much all the other complexity metrics (as well as cost, bug count…). The Making Software book mentions this.
There are exceptions of course, but I don’t think this is one of them. I can’t justify it any further, but I really think this code is as simple as its size suggests.
And it’s also an ok outcome for us to disagree.
It is. I just like to dig in and at least know why we disagree. Doesn’t work out often, though. :-/
I have two choices here: either I believe you when you say you don’t see how this table relates to the problem domain… or I don’t.
Let me try and come at this another way, because it seems like we have different definitions of the phrase “problem domain.” This is likely the root of our different mindsets. For me, the problem domain is completely independent of a computer. When we’re talking about shapes and their areas, the problem domain is geometry. Abstract geometry, i.e. only in a mathematical sense.
If you asked a mathematician to write some pseudocode for describing area calculations, they would write something like this:
shape Rectangle:
    height: Int
    width: Int
    Area = height * width
end

shape Triangle:
    height: Int
    width: Int
    Area = 0.5 * height * width
end
That is, they’d group the area calculations along with their shapes. And here’s a link that confirms my belief here - shape definitions grouped with their area calculation.
Are you familiar with Domain Driven Design? If not, the idea is that it’s important to capture business rules in the language of the problem domain, separating those rules from the concerns of computers as much as possible. It’s not a law, but it’s a point of view - you can disagree with that point of view, but I wouldn’t call this philosophy “dumb.”
This is not a conversation about geometry, this is a demonstration of how we might tell the computer to solve a specific problem, which happens to involve geometry.
You used the word “computer,” here. Are you starting to understand our differences better yet? I’m talking about how to represent the problem in a way that’s agnostic of the underlying machine. For you, they are intertwined. I align with the DDD philosophy in general, and also the old SICP quote of “Programs are meant to be read by humans and only incidentally for computers to execute.”
I understand what you’re saying here in that the table-based code is easy enough to understand. With a problem this small, the differences are almost meaningless. But we’re using this small example as a proxy for talking about software in general. Maybe my generalizing is causing you to think I legitimately don’t understand how the table-based code works. I understand how it functions - it’s trivial.
What I’m trying to do is use this example to show that I believe in separating specification of a problem from its implementation. In fact - how would it make you feel if I agreed that ultimately the table-based code is a good enough implementation, but it loses out on some clarity as a specification of the problem? Performance is very important in a final implementation, though I’m personally willing to sacrifice that to keep the code more specification-like. I understand some people aren’t, and wouldn’t call them “dumb.”
What I’m trying to do is use this example to show that I believe in separating specification of a problem from its implementation. In fact - how would it make you feel if I agreed that ultimately the table-based code is a good enough implementation, but it loses out on some clarity as a specification of the problem?
I do agree, actually: there are indeed better ways to make the code look like specification. (I don’t think the “clean code” version is that, but that’s a separate debate.)
The way I understand it, the Domain Driven approach is clearly aimed at non-programmers. People who don’t care about computers, and just want to express their problem in the language they’re most familiar with (and if the field is any good, that language will be close to ideal in simplicity and expressivity). In this toy example it would be important to group the definition of shapes and their area calculations, because that’s what domain experts do. I actually agree with that approach: giving domain experts an unambiguous way to express their wishes and have those realised in a short feedback loop or real time is extremely valuable.
Doing so however completely ignores the computational problems beyond big O (and sometimes even big O). And though in many cases the computer will be fast enough anyway, in many other cases it won’t be. That’s where programmers come in: people who can encode the solution in a way that works well enough on the computer: one that has few enough bugs, uses few enough resources, and runs fast enough.
To do that we often can’t ignore the hardware. There’s one or more CPUs, a cache hierarchy, disks, a network, various throughputs & latencies… and beyond being a fairly messy reality some of those limitations are fundamentally baked into the universe in the form of the speed of light or thermodynamics. See for instance Daniel J. Bernstein’s paper on parallel brute force attacks, which talks of hardware that doesn’t exist, but could.
So yeah, a programmer is likely to take your problem domain and transform it into an utterly unrecognisable program, using techniques only programmers know. Even if she writes her program for other humans to read, those humans are programmers too, and they ought to recognise the techniques used there. Thus, the cost in readability among programmers for not ignoring the performance characteristics of the machine may not be all that prohibitive, even though the poor end user is completely lost. But that’s why they called the programmer in the first place, isn’t it?
Another thing to keep in mind is that the faster & leaner our programs are, the more we can ultimately do: more programs, bigger data sets, more stringent usage patterns… so even if we’re not initially constrained by the hardware, as demands grow we inevitably become so.
This could explain why Muratori hates Python so much: it’s not a programmer’s tool. It ignores the hardware too much, sacrificing a ton of resources and time at the altar of productivity. And that’s fine for the Domain Driven approach, where one just wants to solve a problem in the most expedient way possible, using a notation that’s not too far from their domain.
This Domain Driven advantage is why I do use Python, even in cases where I think it’s way too slow (I hate waiting for it to generate my test vectors). Because sometimes what I really need is a modeller’s language.
I feel this article is just a backlash against the haters who keep telling the author that they’re writing terrible code, never mind the fact that the code is both correct and fast. I’m not in game development, but a backend dev for web applications, where I’ve had similar experiences with people telling me my SQL-generating code is hard to understand without offering a proper alternative that performs at an acceptable level. It’s rather aggravating, to be honest. Typically, you don’t want to write difficult code (of course code should be elegant!), but the various constraints can sort of “push” your code into a certain shape.
Perhaps it’s because I’ve also really come to appreciate other people’s code that might be difficult to understand initially, but once you understand it is actually quite hackable, like GC, compiler internals or regex engines. These things really resist being written in the “clean” (read: naive) way, because they have to take into account various lower-level machinery that seeps up into the higher levels (think for example data-oriented programming). That doesn’t mean that they have to be complicated, they’re just a little difficult to initially grok.
At the same time, many of the “clean code” adherents write horrible monstrosities of abstracted-out code that’s both hard to understand initially, hard to maintain, and has shitty performance. (</rant>)
but the various constraints can sort of “push” your code into a certain shape.
Why should there be backlash though, when we all know there are an enormous amount of constraints like this, that are mostly all conflicting? Software is hard. People try to make sense of the complexity with simple rules, but simple rules don’t work.
Isn’t the answer to say: “I’m ok with a performance penalty here, because…” or “performance is the most important thing here, because…” ?
Other folks have pointed to this, but the main difference is context–in a game, if you can’t keep to your frame budget (say, make 60 FPS on a modern PC, where nano/microseconds can add up) then that can lead to poor reviews, and significant loss of potential revenue. (At 60 FPS, the entire frame budget is about 16.7 ms.) Conversely, for a regular line of business application, unless you’re huge, the kind of performance loss he mentions may just mean you need a few more CPU cores. There are of course absurd cases (like using a cubic algorithm where a linear one would work).
As a broad statement, about seven years ago one developer day (as charged out by a consultancy) cost about the same as two m1.large EC2 instances for a month. So if you can save a significant amount of compute cost then that might be worth a few days of work. Conversely though, there’s the opportunity cost of what else they could be working on too. E.g. additional revenue from a new feature might well dwarf any cost savings on compute.
if you can’t keep to your frame budget (say, make 60 FPS on a modern PC, where nano/microseconds can add up) then that can lead to poor reviews, and significant loss of potential revenue
I’m old enough to remember the jokes about Crysis’ hardware requirements (and even older occurrences of games requiring top-of-the-line hardware that didn’t become full-blown memes).
I remember playing the original Kerbal Space Program happily on an old Macbook Air. Meanwhile, KSP2 does not look like a giant leap in quality, but has jaw-dropping hardware requirements.
And the literal best-selling video game in history – Minecraft – has a whole stack of community-maintained addons dedicated to making its performance acceptable.
And this is without getting into AAA games and their multi-gigabyte launch-day patches, etc.
So my response any time someone starts talking about games are some unique field where people not only care about, but have to care about performance and correctness, is unprintable.
Games are not special. Games are just as bad, when considered fairly, as every other kind of software out there. Game developers also are not special. Are there some who do aggressively care about performance and correctness? Sure. There are also people in other fields of programming like that. Is Casey Muratori or Mike Acton representative of the average game developer or even of the average game developer’s mindset? Pardon me while I catch my breath from laughing too hard.
… in the video, there’s text on his shirt, and it reads the correct way, and he writes in front of him, and it also reads the correct way, although he’s viewing the text from the opposite side from the viewer. If it’s correct for us, it’s backwards for him, and vice versa. Is he writing backwards, or did he get a backwards shirt specifically for this and flip the whole video?
Funny, I just assumed, because I’ve seen other people do it the backwards-writing way. Getting the shirt printed backwards just for this instead of wearing a solid one is dedication.
The polymorphic code would be competitive if the polymorphism were handled at the array level (instead of one array of objects we’d have one array of triangles, one of squares, etc). This avoids the vtable lookup (or amortizes it, depending on how you do it); is robust against adding shapes whose area calculation doesn’t fit the table lookup approach; and would avoid over-allocating for the simpler objects (now squares and circles are structs containing a single float).
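Something like this, if I’m reading the suggestion right (my sketch):

    #include <vector>

    struct Square    { float side; };
    struct Circle    { float radius; };
    struct Trapezoid { float a, b, h; };

    // One homogeneous array per shape: no per-object vtable pointer, no
    // indirect call in the hot loop, and each type carries exactly the
    // fields it needs (a square really is just one float).
    struct Shapes {
        std::vector<Square>    squares;
        std::vector<Circle>    circles;
        std::vector<Trapezoid> trapezoids; // doesn't fit the table? no problem
    };

    float TotalArea(const Shapes& s) {
        float total = 0.0f;
        for (const auto& q : s.squares)    total += q.side * q.side;
        for (const auto& c : s.circles)    total += 3.14159f * c.radius * c.radius;
        for (const auto& t : s.trapezoids) total += 0.5f * (t.a + t.b) * t.h;
        return total;
    }

The dispatch happens once per array instead of once per object.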
It’s also worth noting that the compiler can devirtualise the polymorphic code if it knows that the set of classes is closed, which is the assumption that the programmer is making with the other implementation.
If the superclass is in an anonymous namespace then it knows. I don’t know if it will lower to a switch in this case, the devirtualisation support in LLVM is still pretty new, but it’s quite plausible. Whether it’s a good idea is more complex, it depends on whether inlining exposes more opportunities for optimisation and on whether the costs of the methods are small relative to the cost of the call. This is often not the case for C++ code.
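For the curious, the closed-hierarchy case looks something like this (a sketch; whether a given compiler actually lowers the call to a switch or a direct call depends on the optimizer):

    #include <memory>
    #include <vector>

    namespace { // internal linkage: no derived class can exist outside this TU

    struct Shape {
        virtual float Area() const = 0;
        virtual ~Shape() = default;
    };

    struct Square final : Shape {
        float side;
        float Area() const override { return side * side; }
    };

    // Since the compiler can see the whole hierarchy, it may devirtualise
    // s->Area(): prove the concrete type, call Square::Area directly, inline.
    float TotalArea(const std::vector<std::unique_ptr<Shape>>& shapes) {
        float total = 0.0f;
        for (const auto& s : shapes) total += s->Area();
        return total;
    }

    } // namespace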
I see the goal of clean code as favoring the future, and of fast code as favoring the past.
I always hear a particular metric about coding: that it is 80% reading, 20% writing. So it is natural for me to pay attention to the former (even when I am the only member of the project, since I may have to read it again years later).
Another thing is, I have never seen simple code fulfill a complex requirement, ever. Even if the code seems simple, the complexity may just have been shifted into a different layer.
I always hear a particular metric about coding: that it is 80% reading, 20% writing. So it is natural for me to pay attention to the former (even when I am the only member of the project, since I may have to read it again years later).
It stood out to me when I read this (I know everyone says it)… what percentage of coding is it that we expect running to take? 80% reading, 20% writing, 0% running? I’m not arguing with you, I am just surprised, now that I think of it, that this saying completely ignores running.
That’s a really good point! After giving it a few thoughts, I think “coding” (either reading or writing) drains my cognitive capacities, so I want to keep it optimized. Running software (or compiling, rendering, testing, etc. for that matter) happens in my “idle” time, consuming the hardware’s resources instead of mine.
If it were only my own resources I wanted to spare with this approach, I’d call it selfish; but many of my teammates would also suffer from non-clean code, and I don’t want to hurt them for obvious reasons.
Devs are notoriously “picky” of what and how they are working with (looking at you, DX).
If I were developing the kinds of applications Casey was, I’d probably agree very strongly with these points.
I find in application development where you are working with large abstractions below you (browsers, frameworks), some version of “clean code” often leads you towards the fast paths.
For the few times you’re coding in the hot spots, yeah, throw away “design patterns” and make it fast, but it’s insane to me to think of these as mutually exclusive.
Another distinction I like to make is that “clean” library code, looks very different than “clean” feature code.
This is pretty awful advice. As David put it, he is beating the hell out of this strawman. The framing here is terrible. Treating “clean code advocates” as some boogeyman attempting to slow down your code is… weird, to say the least. Is he actually advocating that we not use polymorphism? Yes, switching to a table lookup is faster, but how often is that actually applicable to your situation? The shape example is obviously not representative of real uses of dynamic dispatch.
Software is not slow these days because everyone switched from lookup tables to dynamic dispatch. Do not teach newcomers this.
Wow, that poor straw man, he looks like he’s in a lot of pain right now.
I basically agree with you, but I’ll point out that this is a “weak man” argument, in that he’s criticizing an actually existing set of guidelines. They don’t do justice to the general viewpoint he’s arguing against, but they’re not a made up position–someone really was offering code that bad!
It’s a remarkable thing that a lot of “clean code” examples are just terrible (I don’t recognize these, but Bob Martin’s are quite awful). I think a significant problem is working with toy examples–Square and Circle and all that crap (there’s a decent chance that neither the readers nor the authors have ever implemented production code with a circle class).
Conversely, I’d like to see Muratori’s take on an average LOB application. In practice, you’ll get more than enough mileage out of avoiding n+1 queries in your DB that you might never need to worry about the CPU overhead of your clean code.
Edit: clarifying an acronym. LOB = Line of Business–an internal tool, a ticket tracker, an accounting app, etc. something that supports a company doing work, but is typically not at huge scale.
I have not enough upvotes to give.
It’s really not that performance isn’t important - it really really is! It’s just that, what goes into optimizing a SQL query is totally different than what goes into optimizing math calculations, and games have totally different execution / performance profiles than business apps. That context really matters.
But you don’t understand! These guys like Muratori and Acton are rockstar guru ninja wizard god-tier 100000000x programmers who will walk into the room and micro-optimize the memory layout of every object in your .NET web app.
And if you dare to point out that different domains of programming have different performance techniques and tradeoffs, well, they’ll just laugh and tell you that you’re the reason software is slow.
Just don’t mention that the games industry actually has a really terrible track record of shipping correct code that stays within the performance bounds of average end-user hardware. When you can just require the user to buy the latest top-of-the-line video card three times a year, performance is easy!
(this is only half joking – the “you’re the reason” comment by Acton gets approvingly cited in lots of these threads, for example)
You cite Acton who, working at Insomniac, primarily focused on console games–you know, where you have a fixed hardware budget for a long time and can’t at all expect upgrades every few months. So, bad example mate.
Acton’s been pretty upfront about the importance of knowing one’s domain, tools, and techniques. The “typical C++ bullshit” he’s usually on about is caused by developers assuming that the compiler will do all their thinking for them and cover up for a lack of basic reasoning about the work at hand.
And yet is held up again and again as an example of how the games industry is absolutely laser-focused on performance. Which is a load of BS. The games industry is just as happy as any other field of programming to consume all available cycles/memory, and to keep demanding more. And the never-ending console upgrade treadmill is slower than the never-ending PC video card upgrade treadmill, but it exists nonetheless and is driven by the needs of game developers for ever-more-powerful hardware to consume.
The “you’re the reason why” dunk from Acton was, as I understand it, a reply to someone who dared suggest to him that other domains of programming might not work the same way his does.
He was objecting to someone asserting that Acton was working in a very narrow, specialised field. That in most cases performance is not important. This is a very widespread belief, and a very false one. In reality performance is not a niche concern. When I type a character, I’d better not notice any delay. When I click on something I’d like a reaction before 100ms. When I drag something I want my smooth 60 FPS or more. When I boot a program I’d better take less than a second to boot unless it has a very good reason to make me wait.
Acton was unkind. I feel for the poor guy. But the attitude of the audience member, when checked against reality, is utterly ridiculous, and deserves to be publicly ridiculed. People need to laugh at the idea that performance does not matter most of the time. And people who actually subscribed to this ridiculous beliefs should be allowed to pretend they didn’t really believe it.
Sufficient performance rarely requires actual optimisations, but it always matters.
Well. This:
Is how you chose to describe it last time we went round and round about this. And, yeah, my takeaway from this was to form a negative opinion of Acton.
So I’ll just paste my own conclusion:
According to his talks, he 100% uses higher-level languages where it makes sense–during the programming. The actual end-user code is kept as fast as he can manage. It’s not a sin to use, say, emacs instead of ed provided the inefficiency doesn’t impact the end-user.
The interlocutor was wrong, though. They said:
That’s wrong, right? At the very least, it’s incredibly selfish–“We don’t care how much user time and compute cycles we burn at runtime if it makes our job at development time easier”. That’s not a tenable position. If he’d qualified it as “in some percent of cases, it’s okay to have thus-and-such less performance at runtime due to development speed concerns”, that’d be one thing…but he made a blanket statement and got justifiably shut down.
(Also: you never did answer, in the other thread, if you have direct first-hand experience of this stuff beyond Python/Django. If you don’t, it makes your repeated ranting somewhat less credible–at least in my humble opinion. You might lack experience with environments where there really is a difference between costs incurred during development time vs compile time vs runtime.)
And yet when someone came forward saying they had a case where inefficiency didn’t seem to be impacting the end-user, he didn’t sagely nod and agree that this can be the case. Instead he mean-spiritedly dunked on the person.
Of what stuff? Have I written C? Yes. I don’t particularly like or enjoy it. Same with a variety of other languages; the non-Python language I’ve liked the most is C#.
Have I written things that weren’t web apps? Yes, though web apps are my main day-to-day focus at work.
Your repeated attempts at insulting people as a way to avoid engaging with their arguments make your comments far less credible, and I neither feel nor need any humility in telling you that.
Anyway, I’ve also pointed out, at length, multiple times, why the games industry is very far from being a paragon of responsible correctness-and-performance-focused development, which would make your entire attempt to derail the argument moot.
So. Do you have an actual argument worthy of the name? Or are you just going to keep up with the gatekeeping and half-insulting insinuations?
Check your transcript, ypu’re putting words into the dude’s mouth. The dude said, specifically, that they didn’t care at all about UI performance. Not that the user didn’t care–that the developers didn’t care. He was right to be slapped down for that selfishness.
As I said in the other thread, it has some bearing here–you’ve answered the question posed, so I can discuss accordingly. If you get your feathers ruffled at somebody wondering about the perspective of somebody who only lists python and webshit on their profile in a discussion on optimizing compiled languages and game development, well, sorry I guess?
I think you’ve added on the “correctness” bit there, and correctness is something that has a large enough wiggle room (do we mean mathematically sound? do we mean provable? do we mean no visible defects? do we mean no major defects?) that I don’t think your criticism is super helpful.
The performance bit, as you’ve gone into elsewhere, I understand as “Some game developers misuse tools like Unity to make slow and shitty games, so clearly no game developer cares about performance”. I don’t think that’s a fair argument, and I especially think it’s incorrect with regard to Acton specifically, who has built an entire career on caring deeply about performance.
Also, you didn’t really answer my observation about development time inefficiency versus runtime inefficiency and how it applied in Acton’s case.
Anyways, this subthread has gotten long and I don’t think we’re much farther in anything. Happy to continue discussion over DMs.
You’re the one who can’t even refer to web development without needing to use an insulting term for it.
Well, no. The claim generally being made here is that people like Acton uniquely “care about performance”, that they are representative of game developers in general, and thus that game developers “care about performance” while people working in other fields of programming do not. And thus it is justified for people like Acton to dunk on people who explain that the tradeoffs are different in their field (and yes, that’s what the person was trying to do – the only way to get to the extreme “doesn’t care” readings is to completely abandon the principle of charity in pursuit of building up a straw man).
Which is a laughable claim to make, as I’ve pointed out multiple times. Game dev has a long history of lackluster results, and of relying on the hardware upgrade treadmill to cover for performance issues – just ship it and tell ’em to upgrade to the latest console or the latest video card!
And it is true that the performance strategies are different in different fields, but Acton is the one who did a puffed-up “everyone who doesn’t do things my way should be fired” talk – see the prior thread I linked for details – which tried to impose his field’s performance strategies as universals.
Like I said last time around, if he walked into a meeting of one of my teams and started asking questions about memory layout of data structures, he wouldn’t be laughed out of the room (because that’s not how my teams do things), but he would be quietly pulled aside and asked to get up to speed on the domain at hand. And I’d expect exactly the same if I walked into a meeting of a game dev team and arrogantly demanded they list out all the SQL queries they perform and check whether they’re using indexes properly, or else be fired for incompetence.
Ah, thanks for the link. My only edit there was the elision; it was otherwise an exact transcript.
I agree partially. It is a conversation-ending bludgeon, but it was not aimed directly at the interlocutor. Acton’s exact words were “But people who don’t care how long it takes is also the reason why I have to wait 30 seconds for Word to boot for instance.” He was ridiculing the mindset, not the interlocutor. Which by the way was less unkind than I remembered.
I know it rubs you the wrong way. I know it rubs many people the wrong way. I agree that by default we should be nice to each other. I do not however extend such niceties to beliefs and mindsets. And to be honest the mindset that by default performance does not matter only deserves conversation-ending bludgeons: performance matters and that’s the end of it.
Except it’s a stretch to say that the “mindset” is “performance does not matter”. What the person was very clearly trying to say was that they work in a field where the tradeoffs around performance are different. Obviously if their software took, say, a week just to launch its GUI they’d go and fix that because it would have reached the point of being unacceptable for their case.
But you, and likely Acton, took this as an opportunity to instead knock down a straw man and pat yourselves on the back and collect kudos and upvotes for it. This is fundamentally dishonest and bad behavior, and I think you would not at all enjoy living in a world where everyone else behaved this way toward you, which is a sure sign you should stop behaving this way toward everyone else.
You’re entitled to your interpretation.
Other Lobsters are entitled to the original context, so here it is: Mike Acton was giving a keynote at CppCon. After the keynote it was time for questions, and one audience member came with this intervention. Here’s the full transcript:
Acton then interrupts:
Thanks for including the full transcript. I didn’t realize the questioner was a “Not a question, just a comment” guy and don’t think we should judge Mike Acton based on how he responded. Comment guys suck.
This is a guy who thought “Everyone Watching This Is Fired” was a great title for a talk. And not just that, a talk that not only insisted everyone do things his way, but insisted that his way is the exclusive definition of “professional”.
So I’ll judge him all day long.
Thanks for quoting extensively.
I worked with GUIs in a past life, and debugged performance problems. I’ve never seen a 30-second wait that was primarily due to user interface code, or even a five-second wait.
One can always discuss where in the call stack the root of a problem is. The first many-second delay I saw (“people learn not to click that button when there are many patients in the database”) was due to a database table being sorted as part of processing a button click. IMO sorting a table in UI code is wrong, but the performance problem happened because the sorting function was somewhere between O(n²) and O(n³): a fantastic, marvellous, awesome sorting algorithm. I place >90% of the blame on the sorting algorithm and <10% on the UI code that called the function that called the sorting marvel.
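To make the shape of that bug concrete, here’s a minimal sketch (all names and details invented, not the actual code I debugged):

    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    using Row = std::vector<std::string>;

    // A comparison that re-scans the whole table to resolve its key:
    // O(n) work per call (invented stand-in for the real culprit).
    static bool CompareWithRescan(std::vector<Row> const &rows,
                                  std::size_t a, std::size_t b,
                                  std::size_t col)
    {
        for (Row const &r : rows) { (void)r; } // pretend key-resolution work
        return rows[a][col] < rows[b][col];
    }

    // Selection sort on top: O(n^2) comparisons x O(n) per comparison,
    // so the whole thing lands between O(n^2) and O(n^3).
    void SortRowsByColumn(std::vector<Row> &rows, std::size_t col)
    {
        for (std::size_t i = 0; i < rows.size(); i++)
            for (std::size_t j = i + 1; j < rows.size(); j++)
                if (CompareWithRescan(rows, j, i, col))
                    std::swap(rows[i], rows[j]);
    }

    // The UI code that "caused" the delay is one innocent-looking line:
    //   void OnColumnHeaderClick() { SortRowsByColumn(rows, clickedColumn); }

The handler is blameless on its face; the damage is two calls down.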
In my experience, delays that are primarily due to UI code are small. If the root cause is slow UI code, the symptoms are things like content updating slowly enough that you can see it jerk. Annoying, sure, but I agree that quick development cycles can justify that kind of problem.
The many-second delays that Acton ascribes to slow UI code are more typically due to UI code calling some sparsely documented function that turns out to have problems related to largish data sets. Another example I remember is a function that did O(something) RPCs and didn’t mention that in its documentation. Fine on the developers’ workstations, not so fine for production users with a large data set several ms away.
I’m sure Mike Acton conflated the two by mistake. They’re not the same though.
I realise that neither the audience member nor Mike Acton mentioned GUI specifically. Acton merely observes that because some people don’t care enough about performance, some programs take an unacceptably long time to boot. I’m not sure Acton conflated anything there.
I might have.
As for the typical causes of slowdowns, I think I largely agree with you. It’s probably not the GUI code specifically: Even Electron can offer decent speed for simple applications on modern computers. It’s just so… huge.
Oops, the commenter did briefly mention “it’s all user interface stuff”. I can’t remember my own transcript…
Acton is still taking the least charitable possible interpretation and dunking on that. And he’s still doing it dishonestly – as I have said, multiple times now, it’s an absolute certainty that he also trades off performance for developer convenience sometimes. And if I were to be as uncharitable to him as he is to everyone else I could just launch the same dunk at him, dismiss him as someone who doesn’t care about performance, say he should be fired, etc. etc.
As I said, you are entitled to your interpretation.
If you compare the graphics of newer games to older ones, generally the newer ones have much more detail. So this improved hardware gets put to good use. I’m sure there are plenty of games that waste resources, too, but generally speaking I’d say things really are improving over time.
Are the games getting more fun?
I haven’t touched a new game in 10, 15 years or so, but I hear games like The Last of Us win critical acclaim, probably due to their immersiveness which is definitely (also) due to the high quality graphics. But that wasn’t really the point - the point was that the hardware specs are put to good use.
I dunno, I feel like games from ten years ago had graphics that were just fine already. And really even 15 years ago or more, probably. We’re mostly getting small evolutionary steps – slightly more detail, slightly better hair/water – at the cost of having to drop at least hundreds and sometimes thousands of dollars on new hardware every couple of years.
If I were the Mike Acton type, I could say a lot of really harsh judgmental things about this, and about the programmers who enable it.
Muratori is on the record saying, repeatedly, that he’s not a great optimiser. In his Refterm lecture he mentions that he rarely optimises his code (about once a year).
The specifics differ (e.g., avoiding L2 cache misses vs. avoiding the N+1 problem), but the attitude of the programmers writing those programs also matters. If their view is one where clean code matters, beautiful abstractions are important, solving the general problem instead of the concrete instance is the goal, memory allocations are free, etc., they will often miss a lot of opportunities for better performance.
I will also add that code that performs well is not antithetical to code that is easy to read and modify. Fully optimized code might be, but we can reach very reasonable performance with simple code.
I think it’s subsequently been deleted, but I remember a tweet that showed a graph, clarity on the y axis, performance on the x axis.
I would say that in most cases it still fundamentally boils down to how data is laid out, accessed, and processed. Whether it’s an SQL query or the later processing of data loaded from one, a focus on data layout and access patterns is at the core of a lot of performance work.
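A minimal sketch of what I mean (invented names): the same loop over the same logical data touches far less memory depending purely on layout.

    #include <array>
    #include <cstdint>
    #include <vector>

    // Array-of-structs: summing Balance drags each whole ~64-byte record
    // through the cache even though we only need 8 bytes of it.
    struct AccountAoS {
        std::uint64_t Id;
        double        Balance;
        std::array<char, 48> Name;
    };

    double TotalAoS(std::vector<AccountAoS> const &accounts)
    {
        double total = 0;
        for (AccountAoS const &a : accounts) total += a.Balance;
        return total;
    }

    // Struct-of-arrays: the balances are contiguous, so the same loop
    // streams 8 bytes per element instead of the whole record.
    struct AccountsSoA {
        std::vector<std::uint64_t> Ids;
        std::vector<double>        Balances;
        std::vector<std::array<char, 48>> Names;
    };

    double TotalSoA(AccountsSoA const &accounts)
    {
        double total = 0;
        for (double b : accounts.Balances) total += b;
        return total;
    }

It’s the in-memory cousin of indexing your SQL properly: fetch only the bytes the query actually needs.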
I’m normally a fan of Casey’s work, but yeah this one’s not great. The clean code vs fast code battle doesn’t really help the article, and since he “only” demonstrates a 15x speedup it doesn’t actually address the problem we face today.
If all the software I used was 15x slower than optimal, almost everything I did on my computer would complete instantly, but in reality I would guess the majority of the software I use ranges from 1000x to tens of millions of times too slow. Here are some examples I got annoyed by in the last few days:
(These measurements were taken by counting seconds in my head because even with 50% error bars they’re still ridiculous, on a laptop whose CPU/RAM/SSD benchmark within 10x of the best desktop hardware available today)
That sounds ridiculously slow, and to me mostly related to macOS. I’m on a decent laptop running Ubuntu and do not experience such crazy delays.
Which “fast” terminal emulator are you using? I’m using alacritty’s latest version, and can’t notice any delay in either startup or typing latency (and I’m usually quite sensitive to that). Even gnome-terminal starts reasonably fast, and it’s not known for being a speed beast.
For the lldb test, I get <1s both cold and warm, no noticeable difference between the two.
A nonzero amount of it is on macOS yes, e.g. the first time you run lldb/git/make/etc after a reboot it has to figure out which of the one version of lldb/git/make/etc you have installed it should use, which takes many seconds. But it is at least capable of warm booting non-trivial GUI apps in low-mid hundreds of ms so we can’t put all the blame there.
It’s alacritty. I’ve tried it on all 3 major OS with mixed results. On Windows startup perf is good, typing latency is good but everything is good at 280Hz, and it’s so buggy as to be near unusable (which makes it the second best Windows terminal emulator out of all the ones I’ve tried). On macOS startup perf is bad but typing latency is fine. On Linux startup perf is fine but typing latency makes it unusable.
You can have both! Write clean code and optimize those parts that need it. You do measure performance right?
I agree that performance matters, but what’s going on here amounts to optimization, and optimization does not always lead to simpler code. In fact, there’s a statement in here:
I don’t actually agree that the solution there is semantically simpler. Fewer AST tokens and less code are measures of syntax, not semantics. I view the semantics of this table version as more complex, with the definition of complex being “things that are intertwined.” The structure of each shape is now completely intertwined in this clever coefficient calculation. What happens if we want to calculate the area of a trapezoid ((1/2)(a + b) * h)? It doesn’t fit neatly into the
CTable[Shape.Type]*Shape.Width*Shape.Height
calculation now. The purpose of separating code via polymorphism is that, instead of finding some clever common expression of a varied problem, you can just have each thing run its own calculations, separately. Sometimes separating calculations like this is more organized, and sure, I guess that depends on your personal aesthetic. But, for the new trapezoid calculation, it’s as simple as defining the new calculation, with no coupling to the existing other shapes.
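To make the objection concrete, here is roughly the table-driven version under discussion (my reconstruction; the names approximate the article’s listing) plus the trapezoid that refuses to fit:

    typedef float f32;

    static f32 const Pi32 = 3.14159265358979f;

    enum shape_type { Shape_Square, Shape_Rectangle, Shape_Triangle,
                      Shape_Circle, Shape_Count };
    struct shape_union { shape_type Type; f32 Width; f32 Height; };

    f32 const CTable[Shape_Count] = {1.0f, 1.0f, 0.5f, Pi32};

    f32 GetAreaUnion(shape_union Shape)
    {
        return CTable[Shape.Type]*Shape.Width*Shape.Height;
    }

    // The trapezoid objection: area = (1/2)(a + b)h needs a *third*
    // input, so no coefficient*Width*Height can express it. You'd have
    // to widen the union or special-case the lookup.
    f32 TrapezoidArea(f32 A, f32 B, f32 H)
    {
        return 0.5f*(A + B)*H;
    }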
I get that a 10x performance improvement is presented here, and that is definitely worth listening to. But, that’s also what you get when you focus on one dimension and treat other dimensions as less important. Performance is important, but it’s not the only dimension.
Finally, I think the root of this issue is that “clean” code attempts to be more specification-like, and that obviously can come with a performance cost. I think it’s better to legitimately just separate the specification from the implementation, and then you get the best of both worlds. You have a spec that’s focused on clarity and connection to the problem domain, and you can optimize the implementation all you want and not care about clarity because you have a spec to fall back on. The issue is that we’re trying to jam all of these different dimensions into one artifact: a single program’s source code.
It does not (amount to optimisation). This is just avoiding making the program slow with unnecessary indirections & obliviousness to the particulars of the problem you’re solving. You can do that and keep the code very simple. And as far as I can tell he did.
“Avoiding making the program slow” is now my new favorite definition of optimization.
I fail to see how this line of code has anything to do with the problem domain:
f32 const CTable[Shape_Count] = {1.0f, 1.0f, 0.5f, Pi32};
But, since simple has no definition, you are free to call that simple if you like.
If you can’t understand a 6-line function I can’t help you.
But don’t be ridiculous, you do understand it, and you know how this line of code relates to the problem domain. It’s pretty obvious from the context surrounding it, you’re just choosing to play dumb and ignore it.
The problem domain is calculating the areas of shapes. I have never, ever heard of storing a list of coefficients together in any conversation about geometry. I didn’t say that I don’t understand the code, I said that it is not a faithful representation of the problem domain of geometry.
If you look up shape calculations in any math textbook, it would have the area calculations grouped together with each shape. In fact, that’s the organization of the first link I clicked when searching “math shape areas.” That is the clear domain-oriented grouping, whereas the code is grouped for efficient calculation. Aka it is optimized.
Other than that, you seem to be quite angry about this conversation. It helps to stay focused on the actual points of discussion, vs. having a conspiracy theory about my intent here. And it’s also an ok outcome for us to disagree.
I have two choices here: either I believe you when you say you don’t see how this table relates to the problem domain… or I don’t. You either are dumb, or play dumb. If you are dumb (or untrained), I can’t help you, and the conversation stops. Re-watch the videos, do some more programming, go back to school, that kind of thing. If you play dumb we can still talk, but then admit you were lying for whatever reason.
My hypothesis right now is that you were playing dumb for rhetorical effect. It kind of got to me.
I understand this line of code is not obvious at a glance. If this were real code and not a video, a comment would definitely help. Still, who cares that storing lists of coefficients doesn’t come up in conversations about geometry? This is not a conversation about geometry; this is a demonstration of how we might tell the computer to solve a specific problem, which happens to involve geometry. And this table-driven stuff is relatively common; it’s applied to all kinds of problems.
You dispute the simplicity of this code, but it looks like your real reason is a lack of familiarity. Programmers familiar with table-driven code would, on the other hand, feel this is very simple, almost obvious. But feelings are an unreliable metric. Code size, however, is more objective than we realise: it’s one of the best proxies for complexity we have, very cheap to measure and strongly correlated with pretty much all the other complexity metrics (as well as cost, bug count…). The Making Software book mentions this.
There are exceptions of course, but I don’t think this is one of them. I can’t justify it any further, but I really think this code is as simple as its size suggests.
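For what it’s worth, here’s the same pattern well away from geometry (a throwaway example of mine, not from the video):

    // Table-driven lookup: days per month in a non-leap year. The
    // "domain" rendering would be a twelve-way switch; once you know
    // the idiom, the table is just as clear and considerably smaller.
    static int const DaysInMonth[12] =
        {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

    int GetDaysInMonth(int MonthIndex) // 0 = January ... 11 = December
    {
        return DaysInMonth[MonthIndex];
    }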
It is. I just like to dig in and at least know why we disagree. Doesn’t work out often, though. :-/
Let me try and come at this another way, because it seems like we have different definitions of the phrase “problem domain.” This is likely the root of our different mindsets. For me, the problem domain is completely independent of a computer. When we’re talking about shapes and their areas, the problem domain is geometry. Abstract geometry, i.e. only in a mathematical sense.
If you asked a mathematician to write some pseudocode for describing area calculations, they would write something like this:
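    shape Circle(radius r):             area = pi * r^2
    shape Rectangle(width w, height h): area = w * h
    shape Triangle(base b, height h):   area = (1/2) * b * h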
That is, they’d group the area calculations along with their shapes. And here’s a link that confirms my belief here - shape definitions grouped with their area calculation.
Are you familiar with Domain Driven Design? If not, the idea is that it’s important to capture business rules in the language of the problem domain, separating those rules from the concerns of computers as much as possible. It’s not a law, but it’s a point of view - you can disagree with that point of view, but I wouldn’t call this philosophy “dumb.”
You used the word “computer,” here. Are you starting to understand our differences better yet? I’m talking about how to represent the problem in a way that’s agnostic of the underlying machine. For you, they are intertwined. I align with the DDD philosophy in general, and also the old SICP quote of “Programs are meant to be read by humans and only incidentally for computers to execute.”
I understand what you’re saying here in that the table-based code is easy enough to understand. With a problem this small, the differences are almost meaningless. But we’re using this small example as a proxy for talking about software in general. Maybe my generalizing is causing you to think I legitimately don’t understand how the table-based code works. I understand how it functions - it’s trivial.
What I’m trying to do is use this example to show that I believe in separating specification of a problem from its implementation. In fact - how would it make you feel if I agreed that ultimately the table-based code is a good enough implementation, but it loses out on some clarity as a specification of the problem? Performance is very important in a final implementation, though I’m personally willing to sacrifice that to keep the code more specification-like. I understand some people aren’t, and wouldn’t call them “dumb.”
I do agree, actually: there are indeed better ways to make the code look like specification. (I don’t think the “clean code” version is that, but that’s a separate debate.)
The way I understand it, the Domain Driven approach is clearly aimed at non-programmers. People who don’t care about computers, and just want to express their problem in the language they’re most familiar with (and if the field is any good, that language will be close to ideal in simplicity and expressivity). In this toy example it would be important to group the definition of shapes and their area calculations, because that’s what domain experts do. I actually agree with that approach: giving domain experts an unambiguous way to express their wishes and have those realised in a short feedback loop or real time is extremely valuable.
Doing so however completely ignores the computational problems beyond big O (and sometimes even big O). And though in many cases the computer will be fast enough anyway, in many other cases it won’t be. That’s where programmers come in: people who can encode the solution in a way that will work well enough on the computer: one that has few enough bugs, uses few enough resources, and runs fast enough.
To do that we often can’t ignore the hardware. There’s one or more CPUs, a cache hierarchy, disks, a network, various throughputs & latencies… and beyond being a fairly messy reality, some of those limitations are fundamentally baked into the universe in the form of the speed of light or thermodynamics. See for instance Daniel J. Bernstein’s paper on parallel brute-force attacks, which talks of hardware that doesn’t exist, but could.
So yeah, a programmer is likely to take your problem domain and transform it into an utterly unrecognisable program, using techniques only programmers know. Even if she writes her program for other humans to read, those humans are programmers too, and they ought to recognise the techniques used there. Thus, the cost in readability among programmers for not ignoring the performance characteristics of the machine may not be all that prohibitive, even though the poor end user is completely lost. But that’s why they called in the programmer in the first place, isn’t it?
Another thing to keep in mind is that the faster & leaner our programs are, the more we can ultimately do: more programs, bigger data sets, more stringent usage patterns… so even if we’re not initially constrained by the hardware, as demands grow we inevitably become so.
This could explain why Muratori hates Python so much: it’s not a programmer’s tool. It ignores the hardware too much, sacrificing a ton of resources and time at the altar of productivity. And that’s fine for the Domain Driven approach, where one just wants to solve a problem in the most expedient way possible, using a notation that’s not too far from their domain.
This Domain Driven advantage is why I do use Python, even in cases where I think it’s way too slow (I hate waiting for it to generate my test vectors). Because sometimes what I really need is a modeller’s language.
I feel this article is just a backlash against the haters who keep telling the author that they’re writing terrible code, never mind the fact that the code is both correct and fast. I’m not in game development, but a backend dev for web applications, where I’ve had similar experiences with people telling me my SQL-generating code is hard to understand without offering a proper alternative that performs at an acceptable level. It’s rather aggravating, to be honest. Typically, you don’t want to write difficult code (of course code should be elegant!), but the various constraints can sort of “push” your code into a certain shape.
Perhaps it’s because I’ve also really come to appreciate other people’s code that might be difficult to understand initially, but once you understand it is actually quite hackable, like GC, compiler internals or regex engines. These things really resist being written in the “clean” (read: naive) way, because they have to take into account various lower-level machinery that seeps up into the higher levels (think for example data-oriented programming). That doesn’t mean they have to be complicated; they’re just a little difficult to grok initially.
At the same time, many of the “clean code” adherents write horrible monstrosities of abstracted-out code that are hard to understand initially, hard to maintain, and have shitty performance. (</rant>)

Why should there be backlash though, when we all know there are an enormous number of constraints like this, and they’re mostly all conflicting? Software is hard. People try to make sense of the complexity with simple rules, but simple rules don’t work.
Isn’t the answer to say: “I’m ok with a performance penalty here, because…” or “performance is the most important thing here, because…” ?
Of course it is! The backlash is against the haters who seem to believe every piece of code should be as readable as an example out of a textbook.
Other folks have pointed to this, but the main difference is context–in a game, if you can’t keep to your frame budget (say, 60 FPS on a modern PC, which leaves about 16.7 ms per frame, so nano/microseconds add up), that can lead to poor reviews and significant loss of potential revenue. Conversely, for a regular line-of-business application, unless you’re huge, the kind of performance loss he mentions may just mean you need a few more CPU cores. There are of course absurd cases (like using cubic algorithms where a linear one would work).
As a broad statement, about seven years ago one developer day (as charged out by a consultancy) cost about the same as two m1.large EC2 instances for a month. So if you can save a significant amount of compute cost, that might be worth a few days of work. Conversely though, there’s the opportunity cost of what else they could be working on: e.g., additional revenue from a new feature might well dwarf any cost savings on compute.
I’m old enough to remember the jokes about Crysis’ hardware requirements (and even older occurrences of games requiring top-of-the-line hardware that didn’t become full-blown memes).
I remember playing the original Kerbal Space Program happily on an old Macbook Air. Meanwhile, KSP2 does not look like a giant leap in quality, but has jaw-dropping hardware requirements.
And the literal best-selling video game in history – Minecraft – has a whole stack of community-maintained addons dedicated to making its performance acceptable.
And this is without getting into AAA games and their multi-gigabyte launch-day patches, etc.
So my response any time someone starts talking about games are some unique field where people not only care about, but have to care about performance and correctness, is unprintable.
Games are not special. Games are just as bad, when considered fairly, as every other kind of software out there. Game developers also are not special. Are there some who do aggressively care about performance and correctness? Sure. There are also people in other fields of programming like that. Is Casey Muratori or Mike Acton representative of the average game developer or even of the average game developer’s mindset? Pardon me while I catch my breath from laughing too hard.
… in the video, there’s text on his shirt, and it reads the correct way, and he writes in front of him, and that also reads the correct way, even though he’s viewing the text from the opposite side that we are. If it’s correct for us, it’s backwards for him, and vice versa. Is he writing backwards, or did he get a backwards shirt specifically for this and flip the whole video?
The shirt is printed backwards. He’s a righty, but appears to write with his left hand.
Yes, he’s writing backwards on a pane of glass.
No, he just flips the video (as if using a mirror)… and has a backwards-printed shirt.
Funny, I just assumed, because I’ve seen other people do it the backwards-writing way. Getting the shirt printed backwards just for this instead of wearing a solid one is dedication.
The polymorphic code would be competitive if the polymorphism was handled at the array level (instead of one array of objects we’d have one array of triangles, one of squares, etc.). This avoids the vtable lookup (or amortizes it, depending on how you do it); is robust against adding shapes whose area calculation doesn’t fit the table-lookup approach; and would avoid over-allocating on the simpler objects (now squares and circles are structs containing a single float).
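A quick sketch of what I mean (my own names, not code from the article):

    #include <vector>

    typedef float f32;
    static f32 const Pi32 = 3.14159265358979f;

    // One array per shape type: each element carries only the data it
    // needs, and there's no per-element vtable lookup in the hot loop.
    struct square    { f32 Side; };
    struct rectangle { f32 Width, Height; };
    struct circle    { f32 Radius; };

    struct shapes {
        std::vector<square>    Squares;
        std::vector<rectangle> Rectangles;
        std::vector<circle>    Circles;
    };

    f32 TotalArea(shapes const &S)
    {
        f32 Total = 0;
        for (square const &q : S.Squares)       Total += q.Side*q.Side;
        for (rectangle const &r : S.Rectangles) Total += r.Width*r.Height;
        for (circle const &c : S.Circles)       Total += Pi32*c.Radius*c.Radius;
        return Total;
    }

A shape whose area doesn’t fit the coefficient table is just another vector and another loop here.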
It’s also worth noting that the compiler can devirtualise the polymorphic code if it knows that the set of classes is closed, which is the assumption that the programmer is making with the other implementation.
How do you tell the C++ compiler that the set of classes is closed? Does it generate a switch in that case?
If the superclass is in an anonymous namespace then it knows. I don’t know if it will lower to a switch in this case, the devirtualisation support in LLVM is still pretty new, but it’s quite plausible. Whether it’s a good idea is more complex, it depends on whether inlining exposes more opportunities for optimisation and on whether the costs of the methods are small relative to the cost of the call. This is often not the case for C++ code.
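Something like this is the closed-set condition I mean (a sketch; whether a given compiler version actually devirtualises it depends on flags and heuristics):

    namespace { // internal linkage: no other translation unit can subclass

    struct shape_base {
        virtual float Area() const = 0;
        virtual ~shape_base() {}
    };

    struct square final : shape_base {
        float Side;
        float Area() const override { return Side*Side; }
    };

    } // namespace

    // With the hierarchy provably closed, the compiler may replace the
    // indirect call below with a type test plus direct, inlinable calls.
    float CallArea(shape_base const *s) { return s->Area(); }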
This has already been posted here: https://lobste.rs/s/7yd1id/clean_code_horrible_performance
I see the goal of clean code as favoring the future, and of fast code as favoring the past.
I always hear a particular metric about coding: that it’s 80% reading, 20% writing. So it is natural for me to pay attention to the former (even when I am the only member of the project, since I may have to read it again years later).
Another thing is, I have never seen simple code fulfill a complex requirement, ever. Even if the code seems simple, the complexity may have been shifted to a different layer.
It stood out to me when I read this (I know everyone says it)… what percentage of coding do we expect running to take? 80% reading, 20% writing, 0% running? I’m not arguing with you; I’m just surprised, now that I think of it, that this saying completely ignores running.
That’s a really good point! After giving it a few thoughts, I think “coding” (either reading or writing) drains my cognitive capacities, therefore I want to keep it optimized. Running software (or compiling, rendering, testing, etc. for that matter) is done in my “idle” time, consuming the hardware’s resources instead of mine.
If it were only my own resources I wanted to spare with this approach, I’d call it selfish, but many of my teammates would also suffer from non-clean code, and I don’t want to hurt them for obvious reasons.
Devs are notoriously “picky” about what they work with and how they work with it (looking at you, DX).
If I were developing the kinds of applications Casey was, I’d probably agree very strongly with these points.
I find in application development where you are working with large abstractions below you (browsers, frameworks), some version of “clean code” often leads you towards the fast paths.
For the few times you’re coding in the hot spots, yeah, throw away “design patterns” and make it fast, but it’s insane to me to think of these as mutually exclusive.
Another distinction I like to make is that “clean” library code, looks very different than “clean” feature code.
This is pretty awful advice. As David put it, he is beating the hell out of this strawman. The framing here is terrible. Treating “clean code advocates” as some boogeyman attempting to slow down your code is… weird, to say the least. Is he actually advocating we not use polymorphism? Yes, switching to a table lookup is faster, but how often is that actually applicable to your situation? The shape example is obviously not representative of real uses of dynamic dispatch.
Software is not slow these days because everyone switched from lookup tables to dynamic dispatch. Do not teach newcomers this.
Why is it slow then?