I think I may have another silvery-hued bullet of my own. It’s disappointing really, but to my dismay I find myself able to apply it time and again: have a taste for simplicity. I know it sounds like “just fucking git gud”, but it’s more specific than that. Time and again, I see code that is so needlessly complex that I can easily (and sometimes even do) rewrite it and divide its size by 3 to 10.
I get that people are sometimes under pressure and need to do things quickly. That’s not it. The code they wrote clearly took longer to write and test than the simplified version I can think of just by looking at the API they implement (and that’s before I even question the API itself; there are some jarring mistakes there too).
Yet I’m pretty sure I am not some hotshot who’s so much better than everyone else. I’ve talked with enough colleagues and interviewed enough juniors to disabuse me of that notion. Even the ones who committed the horrible code I’m complaining about are fairly smart. It’s something else.
Now “simplicity” is very vague, but there’s a more specific low-hanging fruit: modularity.
John Ousterhout speculates that problem decomposition is the single most important notion in all of computer programming. I basically agree, though I tend to think of it in terms of source code locality instead. Our modules should be deep, with small interfaces and significant functionality.
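To make “deep module” concrete, here is a minimal sketch in C (the arena_* names are invented for illustration, not taken from Ousterhout or anything discussed here): a three-function allocator whose interface says nothing about alignment, bookkeeping, or freeing individual objects, all of which stay hidden behind it.

```c
/* A sketch of a "deep" module: three functions in the interface, while the
 * implementation hides alignment, bookkeeping and cleanup.  Names are
 * made up for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    unsigned char *base;   /* single backing allocation            */
    size_t         used;   /* bytes handed out so far              */
    size_t         cap;    /* total capacity of the backing buffer */
} arena;

/* Create an arena with a fixed capacity.  Returns NULL on failure. */
static arena *arena_create(size_t cap)
{
    arena *a = malloc(sizeof *a);
    if (!a) return NULL;
    a->base = malloc(cap);
    if (!a->base) { free(a); return NULL; }
    a->used = 0;
    a->cap  = cap;
    return a;
}

/* Hand out n bytes, 16-byte aligned; callers never free individually. */
static void *arena_alloc(arena *a, size_t n)
{
    size_t aligned = (a->used + 15) & ~(size_t)15;
    if (aligned + n > a->cap) return NULL;   /* out of space */
    a->used = aligned + n;
    return a->base + aligned;
}

/* Release everything at once. */
static void arena_destroy(arena *a)
{
    if (a) { free(a->base); free(a); }
}

int main(void)
{
    arena *a = arena_create(1 << 16);
    char *greeting = arena_alloc(a, 32);
    strcpy(greeting, "deep module, small interface");
    puts(greeting);
    arena_destroy(a);   /* one call frees every allocation */
    return 0;
}
```

A shallow version of the same thing would expose the buffer, the offset, and the alignment rules to every caller; the deep version asks callers to learn three names and nothing else.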
I have a sneaking suspicion that most of Brooks’s colleagues actually had a taste for simplicity, and perhaps even most professionals of the time did. Casey Muratori once suggested that one reason is they didn’t have a choice: the machines they were working on were just too small and too slow to tolerate unnecessary complexity. Now that our machines have gigabytes of memory and even more effective operations per second, we have lost that feedback, and many of us failed to acquire the taste.
Hardware getting better allowed software to get worse in some ways. If there’s no silver bullet, here’s the next best thing: stop feeding the werewolf.
I have a sneaking suspicion that most of Brooks’s colleagues actually had a taste for simplicity, and perhaps even most professionals of the time did. Casey Muratori once suggested that one reason is they didn’t have a choice: the machines they were working on were just too small and too slow to tolerate unnecessary complexity.
This claim is at odds with, well, basically the entire thesis of The Mythical Man-Month. Here’s Brooks in the original preface:
The effort cannot be called wholly successful, however. Any OS/360 user is quickly aware of how much better it should be. The flaws in design and execution pervade especially the control program, as distinguished from the language compilers. Most of these flaws date from the 1964-65 period and hence must be laid to my charge. Furthermore, the product was late, it took more memory than planned, the costs were several times the estimate, and it did not perform very well until several releases after the first.
Stories like this are common in the literature and folklore of early computing. I’ve lost track of how many times I’ve heard stories of a wise senior programmer (who it is, and what the project was, tends to change with the telling) who allocated a buffer of memory at the start of a project, hid it, and then did nothing with it, because the team would then unknowingly be bound by a smaller memory budget, but one that could, importantly, be increased once they inevitably failed to stay within it. If everyone is scrupulously cherishing every byte and every cycle as a precious resource never to be wasted, the wise senior programmer would not need to resort to such tricks!
So in general, and on the strength of the available evidence, I think we must reject the nostalgic view that our predecessors were titans of frugal resource usage imposed on them by their machines; they were more or less the same as we are now, and many of the things they built were slow, resource-hungry, over-complex, and buggy, just as many of the things we build today are. The main source of this nostalgic myth, I believe, is a form of survivorship bias: the things that have come down to us today, that we are encouraged to study and learn from, are part of the small percentage of all software of that era which actually achieved goals of simplicity, frugality, etc., because nobody bothers to keep alive the study and enjoyment of the ones that didn’t.
Err… it feels like your two examples are actually making my case for me: even though Brooks was working on mainframes, those machines were much smaller than they are now, and the poor performance and excessive memory use were only noticed (and eventually corrected) because of this. Complexity, on the other hand, is something they may have had more room for than if they had been programming for a personal computer. Same thing for the “allocate a buffer then miraculously find memory before we ship the game” story: it’s about artificially constraining memory so that even if the project overflows the arbitrary limit, it doesn’t overflow the real one.
As for being late, I don’t think the size of the machine matters at all. I can see projects slipping even on a tiny microcontroller with a few KiB of memory.
And of course, I don’t think for a minute the people from that era were intrinsically better than we are. But they did live in a different environment, where selected into their position from different criteria, and had a different education. They’ve got to be different. In what ways I don’t know, and here I was focusing on a single aspect of their environment, and how it might have influenced them.
Every generation of programmers looks back at previous generations and thinks “Wow, they must have really cared about performance to come up with those amazing techniques for working in such limited hardware!”
But the current generation of programmers also always thinks “Wow, I sure am glad we have the near limitless resources of this generation of hardware!” And every generation has people who insist that the bounties yielded by Moore’s Law, which were carefully and respectfully made use of by previous generations (who had to care because the hardware forced it on them) are now being wasted by the current generation of profligate developers who have lost the ethos of caring about performance.
In other words, every generation sees their iteration of hardware as a liberation from the straitjacket the previous generation had to program in. But the previous generation always saw their hardware as a liberation from the straitjacket the generation before them had to program in. And on and on, probably all the way back to ENIAC.
Every generation of software expands to fill the corresponding generation of hardware. Thus it has always been. Thus it always shall be. Every generation of programmers is, on average, average. They tend not to feel the limits of their hardware very much, because to them the current generation is always a big improvement over the last. The software that survives to be studied and recommended for study across multiple generations, however, is anything but average, and using it (or the programmers whose work it was) as representative of its era or of the ethos or job-selection requirements of its era, will lead you astray.
I understand selection bias, but it’s not just that. I have used computers since I was 10, which gives me roughly 30 years’ worth of memories. I spent quite a bit of time playing with my Dad’s Atari ST, then saw the rise of the IBM PC clones once Windows 95 came out… you get the idea. I remember compiling Gentoo packages on my Core 2 Duo laptop, and it took ages; I remember boot times being quite a bit longer than they have been since the advent of solid state drives; and of course I’ve seen the ungodly progress we’ve made in real-time computer-generated graphics.
At the same time though, many programs that gained comparatively little functionality are just as slow now as they were 20 years ago. My phone slows down just because I update it. I’ve installed pretty much nothing, I regularly wipe old data, but no, memory gets tighter and tighter, applications laggier and laggier… even going back to the home screen, which used to be instantaneous, now often takes more than 10 seconds.
I would somewhat understand if programs got slower and more wasteful as hardware got faster and bigger. It’s a reasonable trade-off to make, up to a point. My problem is when that slowdown outpaces the unreal speed at which hardware improved over the 30 years I could personally observe. I believe that in many cases we are way past the point where wasting more computational resources helps us deliver a product faster.
We started out with this:
Casey Muratori once suggested that one reason is they didn’t have a choice: the machines they were working on were just too small and too slow to tolerate unnecessary complexity.
But there never was a time when this was the case. In every era, programmers are limited by the hardware they work with, but in every era they approach it not as “I must be frugal and responsible, since I have only a few kilobytes of memory”, but as “Wow, I have kilobytes of memory? I can do so much more than before!”
At the same time though, many programs that gained comparatively little functionality are just as slow now as they were 20 years ago.
I’ve mentioned this example before, I think to you specifically, but: I like to shop in a local Japanese market. Yet I don’t speak or read Japanese. 20 years ago I’d have been out of luck. Today, I can pull my phone out of my pocket, point it at a label or a package, and it does live text recognition and live translation of that text. I literally am blown away every single time. The devices I have today can do so much more than the ones I had twenty years ago that I simply cannot for the life of me fathom the stance that they’ve gained “comparatively little functionality”.
My phone slows down just because I update it.
Back in the day “updates” came on a stack of floppy disks and most people didn’t bother with them. Which is one way to avoid problems from updates!
And then when we started getting updates downloadable over the internet, people started yelling at companies to extend support back further and further, claiming that anything else is “planned obsolescence”. Meanwhile, companies that make software don’t want to be frozen to the capabilities of the oldest supported hardware. So they kludge it, a time-honored tradition reflected in just how old that bit of jargon is, and the result is stuff that runs slower on old hardware.
But the idea that we are in some sort of era where people are uniquely wasteful of hardware resources, or uniquely uncaring about performance, is just completely factually wrong. The average programmer and the average software of 10, 20, 30, 40 years ago were not noticeably better or more frugal or more careful of performance relative to the hardware they had than programmers and software today.
To sum up my thesis:
Programming was different because the machines were different.
Some skills (not all) that were required then are useful now, and will be for the foreseeable future.
Since those skills are no longer mandatory, they occur less frequently.
Software is more wasteful than it would have been if those skills had been perfectly retained.
in every era they approach it not as “I must be frugal and responsible, since I have only a few kilobytes of memory”, but as “Wow, I have kilobytes of memory? I can do so much more than before!”
The actual mindset they had matters much less than what the computers allowed them to do.
But the idea that we are in some sort of era where people are uniquely wasteful of hardware resources, or uniquely uncaring about performance, is just completely factually wrong.
I remain unconvinced. The only indisputable fact I see right now is how much more powerful our computers are. We’ve gone from kHz to GHz in a few decades; that’s 6 orders of magnitude. Such a difference in degree, even a gradual one, is bound to introduce differences in kind.
I also never claimed older programmers cared more about performance (or, on machines with mere KiB of memory, simplicity). Like everyone else they likely cared first and foremost about making it work. But when your computer is small enough or slow enough, the more wasteful approaches we can afford today simply did not work. The minimum performance bar was just higher, and they met it because they simply had no choice.
Where I disagree with you is that I don’t think there ever was any sort of conscious/deliberate “skill” of carefully and frugally making use of limited resources. There was just software that didn’t do as much as today.
Like, there’s a reason why all those classic games have super-simple low-res graphics and tiny color palettes and had to have extremely simple movement options, etc. And it wasn’t because the programmers who made them had lost some skill of getting more out of the hardware.
But when your computer is small enough or slow enough, the more wasteful approaches we can afford today simply did not work. The minimum performance bar was just higher, and they met it because they simply had no choice.
The “minimum performance bar” was not higher relative to the hardware of the era. Every era has had some software that was fast for the time and some that was slow for the time and a lot that was just average.
And relative to what the hardware of the time was capable of (which is why I emphasized that in my last comment) lots of software of previous eras really was slow and bloated. Really. Yes, really. There was no Apollo-13-style “poor performance is not an option” stuff going on. There was lots of really awful crappy terrible slow software being written. Tons of memory and tons of CPU cycles wasted.
Please, stop promoting the myth that there ever was anything more to it than that.
I think the point here is that there are relatively “simple” programs which were feature complete (in the eye of the beholder). Programs with similar functionality are nowadays as slow as or slower than those programs were in the early days. That makes no intuitive sense to the user - if the same old program were run today, it would be super fast.
It would make logical sense that a newly built program which does the exact same thing nowadays would be much faster, except that’s not typically the case. Newer programming environments offer conveniences to the programmer which are invisible to the user, but do have an additional performance impact.
For example, if the “fast” program back in the day was hand-written in assembly or C, the same program nowadays might be written in Java or Python or what have you. Nobody in their right mind would hand-write large programs in assembly if they have the choice. C is also quickly falling out of fashion.
As another example, a DOS program had the CPU all to itself. A similar program running under Windows or even Linux would have the OS and background processes to contend with, so it would necessarily feel slower.
Does it make sense? On the whole, I am not sure. We (think we) need more and more software, so it does make sense that we’re able to produce it faster and more efficiently. What user really wants to go back to running everything single-tasking in DOS? What programmer really wants to go back to writing everything by hand in ASM/C (including device drivers)?
Where I disagree with you is that I don’t think there ever was any sort of conscious/deliberate “skill” of carefully and frugally making use of limited resources.
I did not mean to say it was conscious or deliberate.
There was just software that didn’t do as much as today.
Mostly, yes. Computer games especially, with how they compete for triangles and pixels. But we also have software that doesn’t do much more today than it did 20 years ago or so, and somehow manages not to be any faster. It’s starting to get seriously dated, but Jonathan Blow’s Photoshop example was striking.
A likely important factor is how competitive a given sector is. Games are highly competitive, and an excellent game that lags will be played less than an excellent game that does not. At the other end of the spectrum, I suspect Photoshop is almost monopolistic, with a huge captive audience they’d need to seriously piss off before they all move to Gimp or Krita.
Games are highly competitive, and an excellent game that lags will be played less than an excellent game that does not.
The best-selling video game of all time, notorious for the sheer amount of time its players sink into it, is also infamous for its terrible performance, to such a degree that many guides recommend, as one of the first things you do, installing a mod pack whose sole purpose is to try to make the performance into something reasonable.
That game is Minecraft.
And it is not alone, nor anywhere near; the games industry is well known for shipping things that are broken, buggy, slow, resource-hogging and/or all of the above. So much so that it’s spawned endless memes.
And I’m fairly certain this has all been pointed out to you in prior iterations of this debate.
I’m aware that gameplay, time of publishing, and marketing, affect game sales. I reckon they do significantly reduce the selection pressure on criteria such as graphics quality, loading times, input lag, and frame rate.
They do not eliminate that pressure though. Not as effectively as a near-monopoly or vendor locking would.
Your argument would require that we somehow see a higher standard of performance coming from game developers.
The empirical reality is we don’t. Game dev is not some special realm of Performance-Carers. Nor were programmers of previous eras particularly concerned – relative to the capabilities of their hardware, they wrote plenty of software that was as slow as the things you complain about today.
You really, really need to learn to accept this, ditch the mythologizing, and move on.
You keep painting my comments as if I was saying some groups of people were more virtuous than others. I keep insisting that different groups of people are subject to different external constraints.
Let’s try an analogy with cars. If fuel prices shoot through the roof people will quickly start to pay real close attention to fuel efficiency before buying a new car, creating a market pressure that will force manufacturers to either produce more efficient cars, go out of business… or form a cartel.
You keep painting my comments as if I was saying some groups of people were more virtuous than others. I keep insisting that different groups of people are subject to different external constraints.
Because the underlying message always is that certain groups “care” about “performance”. This isn’t the first time we’ve gone round and round on this.
And again, the simple empirical fact is that in every era there are people like you who complain about a lost art of performance and simplicity. What looks to us, now, like constraints that developers of the past must have had to come up with explicit strategies for, were to them at the time rarely perceived as constraints at all, because they felt liberated from how constrained the hardware of their recent past was.
If fuel prices shoot through the roof people will quickly start to pay real close attention to fuel efficiency before buying a new car, creating a market pressure that will force manufacturers to either produce more efficient cars, go out of business… or form a cartel.
For like the fifth time now across these threads: the empirical reality of the game dev industry is that they do not seem to feel performance is a constraint upon them. Games with horrid performance and resource usage are put out all the time and succeed in the market. Minecraft, which did so long before it was bought out, is a great example of this and also of the fact that no “cartel” is necessary to artificially protect games that have poor performance.
Because the underlying message always is that certain groups “care” about “performance”. This isn’t the first time we’ve gone round and round on this.
Not my message. Perhaps initially, but I got your point. Please don’t put words in my mouth.
no “cartel” is necessary to artificially protect games that have poor performance.
Of course not. I wouldn’t dare even suggest such a thing.
For like the fifth time now across these threads: the empirical reality of the game dev industry is that they do not seem to feel performance is a constraint upon them.
That really depends on the game.
Factorio’s devs reported spending a ton of time on explicit performance optimisations so the factory could grow to ungodly proportions.
Jonathan Blow reported that since he wanted unlimited rewind for his puny 2D platform game, he had to spend time making sure the inevitable actual limit was high enough not to be perceived as such by the player (he settled on 45 minutes).
I can look at Star Citizen and see players complaining about assets failing to load in time, so they end up falling through the entire map (an example of poor performance affecting correctness).
In competitive multiplayer FPS, even a single dropped frame could make the difference between a head shot and a miss, not to mention network problems. Could they keep the player count high if the game routinely lagged?
Almost every 3D game I have played on a PC in the last 20 years had graphics settings, and in most cases cranking them to the max tanked the frame rate to a perceptible degree. Not only did I as a player feel the difference, but most importantly the game devs gave me the option… as if performance were an actual constraint they had to address.
Of course, you could come up with 10 times as many examples where performance matters so little you’d have to go out of your way to make it lag (typical of most Skinner boxes for palmtops). There’s no unified “game industry”, just like I’m increasingly realising there’s no “embedded industry”: it’s a collection of sub-niches, each with their own constraints.
Still, a significant number of those niches absolutely feel performance constraints.
The fact that you’re not even able to talk about this without needing to resort to loaded/judgmental language is a pretty big deal. Oh, see, the True Game Devs™ really do feel the constraint and really do care about performance – it’s those developers of “palmtop” “skinnerboxes” who are being inappropriately held up as counterexamples.
Meanwhile the catastrophic-failure-of-the-week in game dev is Cities: Skylines II, which was apparently close to functionally unplayable on launch day due to abysmal performance. That’s not some rando mobile gacha “skinnerbox”, that’s a big-time game from a big-time publisher.
It really is time to just let it go and admit that in every era and in every field of programming the average programmer is average, and there was no special era and is no special field in which the Performance-Carers were or are uniquely clustered.
Photoshop is dealing with constraints that a game (even an AAA game) doesn’t have - like backwards compatibility, and safeguarding the user’s input against data loss and corruption. It’s not a complete explanation for PS’ perceived performance issues, but stuffing everything into the framework of “how fast can you sling stuff onto the screen” is one-dimensional at best.
Photoshop is dealing with constraints that a game (even an AAA game) doesn’t have - like backwards compatibility, and safeguarding the user’s input against data loss and corruption.
Still, I don’t see the link between that, and the nearly 100-fold increase in boot times.
Why, during 20 years of hardware improvement, did Photoshop keep requiring 7 seconds to boot?
Why does the modern (2017) Photoshop version’s main menu take a full second to drop down, even the second time around?
Backward compatibility doesn’t explain anything here: we’re loading a modern image format, not a Photoshop work file. And sure this image has to be turned into a Photoshop specific internal representation, but that internal representation doesn’t need to follow any kind of backward compatibility. That’s a problem when (auto) saving the works, not for loading it from a JPEG. Likewise, what data loss and corruption can happen during boot?
Having a lot of features is not a reason to be slow. I can perhaps forgive video games for their even slower boot times (The Witness, Factorio), but those load much more than code into memory; they also have a ton of assets they chose to load right away instead of streaming them during the game. What’s the Photoshop equivalent?
That’s a question for many programs by the way: how much data does a program need to load to memory before it can reach an operational state? My guess is, if it takes over 10 times the equivalent fread(3) call, there’s a huge margin for improvement.
I was recently playing an AAA game which had horrendous load times on my platform (Xbox One), so much so that I usually started the load, then did stuff like emptying the dishwasher while it reached the state where I could start playing. Once it was loaded though, the experience was fast and bug-free.
I’m not a heavy user of Adobe’s products, but it’s possible that their users are fine with a long start-up time as long as the flow within the program is smooth and non-laggy. You start up the application in the morning while fixing coffee, then work through the day.
I was recently playing an AAA game which had horrendous load times on my platform (Xbox One), […]
Strange, I’ve heard game consoles usually have certification processes that are supposed to limit load times. Seems I’m not up to date on that.
I’m not a heavy user of Adobe’s products, but it’s possible that their users are fine with a long start-up time as long as the flow within the program is smooth and non-laggy.
Except that in Blow’s demonstration in 2017, it was not. The main drop-down menu took a full second to appear, and not just the first time, so whatever it did wasn’t even cached. It’s but one example, and I expect other parts of Photoshop were snappy and reactive, but if something as basic as the main menu lags that much, we’re not off to a good start.
Now “simplicity” is very vague, but there’s a more specific low-hanging fruit: modularity.
I think this is one of the places where I lean towards a stronger form of the weak linguistic relativity hypothesis for programming languages. I spent a lot of time writing Objective-C and I found that, in that language, 90% of what I wrote ended up being things that could be pulled into different projects or repackaged in libraries. This is not true of any of the code that I wrote before learning Objective-C and, especially, the OpenStep libraries, in spite of the fact that Objective-C was something like the 10th language that I learned. Having started writing code in that style, I tend to bring some of the same concepts (modularity, loose design-time coupling) to other languages.
It’s worth noting that, at around the same time as inventing Objective-C, Brad Cox wrote an essay (that I seem unable to find today) with the title ‘What if there is a silver bullet and the competition gets it first?’ where he strongly advocated for modularity as a key design idea. He proposed designing two categories of languages:
Those for building components
Those for integrating components.
His idea for Objective-C was as a language for packaging C libraries up into a way that Smalltalk-like languages could use them easily. These days, a lot of interesting projects are using Rust as the language for writing components and Python as the language for integrating them.
I think there is a case for figuring out the right amount of breaking-things-down, but I also think we are nowhere close to figuring out what that right amount is or how to go about it. And systems and languages that prize “modularity” seem to turn into intractable messes of abstraction more often than they turn into paragons of easy-to-understand simplicity.
The complexity is usually imposed by the customer. I can’t count the number of times I had a great idea for speeding up and simplifying some code, only to discover some pesky requirement was cutting off that particular approach.
That’s why code design should be a two-way communication between the business and the implementor. There are cases where the business side over-specify something for no good reason, making efficient implementation impossible. But of course, there are legitimate cases where it’s just a hard requirement.
There’ve been many cases of a “must have” hard requirement which we implemented and then it turned out nobody used it. Customers often don’t know what they need either. Or sometimes a feature is about prestige, like the boss’s spouse would love this so there’s no way it’s going to be axed regardless of how little sense it makes.
The primary barrier to writing large-scale software is complexity. I see developers focus on everything else, like efficiency and local readability and local robustness, which, yes, are important too. But they don’t focus on the interfaces between things, or even seem to really grok what an interface between components is, so the user of any component needs to understand the details of that component to use it. And they don’t realize that there are a dozen ways of writing any chunk of functionality, and some will be less than half the size and complexity of others. And then there’s 100,000 lines of code, and, lo and behold, it doesn’t work quite right in the edge cases, and it takes days to debug things.
This week famous YouTuber Hank Green talked about his bout with cancer (link in the dooblydoo). He discussed how people are frustrated that we haven’t cured cancer; how cancer isn’t one thing, but actually many distinct things; and how we have actually made progress on individual types of cancer. He says we likely won’t ever cure “cancer” because cancer is a category of things, not a single thing.
Similarly, building software isn’t a singular activity. It’s actually a lot of different activities. Finding a silver bullet that improves the efficiency of all programming endeavors 10x is either very difficult or near impossible. Rather than chasing efficiency across the board, we likely need to focus on particular domains in programming. To that end, I was happy that @ubernostrum mentioned tools like Ruby on Rails and Django.
Ultimately, I am reminded of Proebsting’s Law:
Compiler Advances Double Computing Power Every 18 Years
If you want 10x improvements, you will need to analyze your domain. Otherwise, the improvements seem to come fairly slowly.
Whenever talk of silver bullets comes up, I feel like people discuss it as though most people were operating in a Pareto-optimal fashion. Rather, we are all probably operating at some level of inefficiency. To me that means the focus should shift towards distributing previously realized gains. Rather than thinking about a technology that would 10x someone operating at optimal efficiency, we should instead look towards people operating far below the optimal curve and bring them closer to that edge. This means better education, better documentation, better tooling.
But just like I said in the first section, it’s easy to talk in a monolithic abstract, but we’re really talking about many subdomains. Nothing is ever easy.
Consider this passage from the original essay:
How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.
One activity that can result in a high degree of inefficiency is communication. The number of two-way communication pathways in a group of n people is n(n-1)/2, which grows quadratically: a team of 10 already has 45 of them. This strikes me as an important source of accidental complexity in large projects. Crucially, this is not technological, but organizational. This provides important context to many engineers’ career paths. Staff+ engineers spend more and more time communicating because this is where the real gains are in large organizations.
I have started thinking more about knowledge as a graph. Communication on topics that require a few node jumps in the graph tend to be more difficult. Discussing a topic one node away can lead to breakthroughs. Ensuring a team has context on the relevant nodes in the graph can lead to fruitful conversations. Failing to do so leads to misunderstandings and frustration.
Unfortunately, accurately judging people’s “knowledge graphs” is very difficult. YMMV
To wrap up a long post:
software is many things, not one thing
the big advances will likely come within particular domains
the biggest gains are likely to be made by distributing existing technology or practices to more practitioners
communication is probably the biggest source of inefficiency that applies across the board
Compiler Advances Double Computing Power Every 18 Years
It’s worth noting (and directly supports your point) that this is a decidedly non-linear process. For example, autovectorisation with SSE gave a factor of four speed up for some code, no benefit at all for a lot of things, and a slowdown for a few things. Vectorisation has improved quite a lot, but the best-case speed up is still a factor of four (sometimes a bit more from reduced register pressure) for 128-bit vectors with 32-bit scalars. Similarly, improvements in alias analysis allow kinds of code motion that have a big impact on some code structures and no benefit on others.
The message bus feels like an implementation detail of a deeper thing, which is observables. Just like MVC is what happens when you insist that you should be able to add and remove and adjust different views without having to teach every other view about the changes, I feel like the author is asking for the same decoupling where I can hook into activity in my system.
But then you run into 1) consistency/transactions and 2) how do you avoid wiring up loops in your system of observables that can send your system into oscillating or exploding states.
Another way of going after this is in the relational model. Like observables in MVC decouple changes in views from each other, relations decouple aspects of entities from each other. I can in theory write triggers on one table that cause these kind of secondary effects. This can solve the consistency/transaction issue (though it introduces all kinds of fun new problems of teams clobbering each other), but doesn’t really help with oscillation.
Re: 2, one answer is to focus on the goal of achieving state synchronization rather than transmitting state change. Doing so yields eventually-consistent protocols as a corollary. You still have to be careful which parties in your system are authoritative for any given bit of state, but at least consistency and eventual termination are taken care of.
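Here is a minimal sketch of that idea (the node/node_set names are made up, not from any library discussed here): each party stores the state it has and only propagates when the incoming state actually differs, so even a cycle of observers settles at a fixed point instead of oscillating.

```c
/* Sketch: propagate *state*, not *change events*.  Each node stores its
 * current value and only notifies its downstream node when the value
 * actually changes, so a cycle of nodes converges instead of ping-ponging. */
#include <stdio.h>

typedef struct node {
    const char  *name;
    int          value;
    struct node *downstream;   /* whom to keep in sync */
} node;

static void node_set(node *n, int value)
{
    if (n->value == value)               /* already in sync: stop here */
        return;
    n->value = value;
    printf("%s -> %d\n", n->name, value);
    if (n->downstream)
        node_set(n->downstream, value);  /* push the state onward */
}

int main(void)
{
    node a = {"a", 0, NULL};
    node b = {"b", 0, NULL};
    a.downstream = &b;
    b.downstream = &a;   /* deliberately wire a cycle */

    node_set(&a, 42);    /* terminates: each node updates at most once */
    return 0;
}
```

A real system still needs an authoritative side (or versioning) to resolve conflicting writes, as noted above, but termination falls out of the “stop when already in sync” check.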
[ETA the “deeper thing” you mention is on the money IMO. I’ve been following that line of thinking for a while now, and it’s been reasonably rewarding so far. I’m still cautiously optimistic about my own vaguely silverish bullets…]
So what would solve the problem? Decoupling those things from each other. There are multiple ways to do this, but one I suggested at the time, because I’ve had success with it, was to introduce some sort of message bus, and just have the Order adjust its own fields and then emit an “order cancelled” message. Then all the other components — shipments and payments and so on — can watch for that message and react appropriately.
How do you handle concurrency/liveness bugs if updates aren’t all synchronous?
Depends on whether it’s something that actually matters.
I’ve worked in highly-regulated fields where you just had to hold your nose and write a “god method” that knew about everything and did everything because the rules required that you do a dozen things all in one atomic operation.
I’ve also worked in fields where the requirements were a lot more lax. Sometimes it’s “just make sure things are consistent by the time a user will notice they aren’t”, for example, and that time window can be surprisingly large :)
More seriously, though, you don’t have to do a message bus as an external/asynchronous thing – you can make events that are synchronous/blocking and run in the same process. Which lets you experiment and get some of the benefits of the decoupling without having to introduce the new problem set of going all-in on event-driven microservice buzzword buzzword. Then you can gradually adopt fully async stuff if and when you feel a need for it.
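As a minimal sketch of that synchronous, in-process variant (the topic names, payloads, and subscribe/publish helpers are invented for illustration, not taken from the post): subscribers register a callback for a topic, and publishing simply walks the list in the caller’s own stack.

```c
/* Sketch of a synchronous, in-process "message bus": publishing an event
 * just calls every handler registered for that topic, in the caller's own
 * call stack.  Topic names and payloads are invented for illustration. */
#include <stdio.h>
#include <string.h>

#define MAX_SUBS 16

typedef void (*handler)(const char *payload);

static struct { const char *topic; handler fn; } subs[MAX_SUBS];
static int sub_count;

static void subscribe(const char *topic, handler fn)
{
    if (sub_count < MAX_SUBS) {
        subs[sub_count].topic = topic;
        subs[sub_count].fn    = fn;
        sub_count++;
    }
}

static void publish(const char *topic, const char *payload)
{
    for (int i = 0; i < sub_count; i++)
        if (strcmp(subs[i].topic, topic) == 0)
            subs[i].fn(payload);      /* runs synchronously, same process */
}

/* Downstream components react without the Order code knowing about them. */
static void release_shipment(const char *order_id)
{
    printf("shipments: releasing hold on %s\n", order_id);
}

static void refund_payment(const char *order_id)
{
    printf("payments: refunding %s\n", order_id);
}

int main(void)
{
    subscribe("order.cancelled", release_shipment);
    subscribe("order.cancelled", refund_payment);

    /* The Order only updates itself and announces what happened. */
    publish("order.cancelled", "order-1234");
    return 0;
}
```

The Order code only announces “order.cancelled”; shipments and payments react without it knowing they exist, and nothing here is asynchronous yet.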
And sort of going in another direction from my other reply, I think honestly it probably does good to get people thinking about distributed-system problems as early as possible, since very often we’re building and using distributed systems even if we don’t realize it – anything that has both a DB server and a cache, for example, or that has to talk to any sort of stateful external API.
Plus there are times when it’s nice to have the flexibility of how to handle failure. Running with the example from the post, very often if there’s a bunch of downstream stuff that has to happen any time you place or cancel or modify an Order, it’s better to have the ability to display a success message to the end user even if one of those downstream things failed, because that should be your problem to solve, not theirs (and in the synchronous all-or-nothing world it gets dumped out as their problem via an “Oops, something went wrong” error message). And of course you can also do patterns that let you notify the user of a problem happening downstream of what they thought of as their action, but being able to hide the internal implementation details from them is a nice option to have.
I think Brooks’ separation of software creation into activities that are “essential” or “accidental” is somewhat difficult to understand. I instead prefer to say that we focus almost entirely on the processes, practices and technologies impacting the non-functional requirements of a software endeavor. Success in better modeling a complex system or problem in terms of its functional requirements simply seems to be a too hard problem, so let’s spend all effort on technologies, languages, infrastructure and tools. Anything beyond (ever more decomposable) use cases or stories implemented using functional decomposition and data modeling is ever considered. Instead all attention is on that silver colored killer tech, language or paradigm that is the answer.
I think Brooks’ separation of software creation into activities that are “essential” or “accidental” is somewhat difficult to understand.
I’ve never found it particularly difficult to get, but I also come from the background of a degree in philosophy, so it may just be that I was already used to that sort of distinction expressed with exactly that terminology.
I think I may have another silvery hued bullet of my own. It’s disappointing really, but to my dismay I find myself able to apply it time and again: have a taste for simplicity. I know it sounds like “just fucking git gud”, but it’s more specific than that. Time and again, I see code that is so needlessly complex that I easily can (and sometimes even do) rewrite it and divide its size by 3 to 10.
I get that people are sometimes under pressure and need to do things quick. That’s not it. The code they wrote clearly took longer to write and test than the simplified version I can think of just looking at the API they implement (and that’s before I even question the API itself, there are some jarring mistakes there too).
Yet I’m pretty sure I am not some hot shot that’s so much better than everyone else. I’ve discussed with enough colleagues and interviewed enough juniors to disabuse me of that notion. Even the ones who committed the horrible code I’m complaining about are fairly smart. It’s something else.
Now “simplicity” is very vague, but there’s a more specific low-hanging fruit: modularity.
John Ousterhout speculates that problem decomposition is the single most important notion in all of computer programming. I basically agree, though I tend to think of it in terms of source code locality instead. Our modules should be deep, with small interfaces and significant functionality.
I have a sneaking suspicion that most of Brook’s colleagues actually had a taste for simplicity, and perhaps even most professionals of the time did. Casey Muratori once suggested that one reason is they didn’t have a choice, the machines they were working on were just too small and too slow to tolerate unnecessary complexity. Now that our machines have gigabytes of memory and even more effective operations per seconds we lost that feedback, and many of us failed to acquire the taste.
Hardware getting better allowed software to get worse in some ways. If there’s no silver bullet, here’s the next best thing: stop feeding the werewolf.
This claim is at odds with, well, basically the entire thesis of The Mythical Man-Month. Here’s Brooks in the original preface:
Stories like this are common in the literature and folklore of early computing. I’ve lost track of how many times I’ve heard stories of a wise senior programmer (who it is, and what the project was, tends to change with the telling) who allocated a buffer of memory at the start of a project, hid it, and then did nothing with it, because the team would then unknowingly be bound by a smaller memory budget, but one that could, importantly, be increased once they inevitably failed to stay within it. If everyone is scrupulously cherishing every byte and every cycle as a precious resource never to be wasted, the wise senior programmer would not need to resort to such tricks!
So in general and on the strength of available evidence, I think we must reject the nostalgic view that our predecessors were titans of frugal resource usage imposed on them by their machines; they were more or less the same as we are now, and many of the things they built were slow, resource-hungry, over-complex, and buggy just as many of the things we build today are. The main source of this nostalgic myth, I believe, is a form of survivorship bias: the things that have come down to us today to that we are encouraged to study and learn from are part of the small percentage of all software of that era which actually achieved goals of simplicity, frugality, etc., because nobody bothers to keep alive the study and enjoyment of the ones that didn’t.
Err… it feels like your two examples are actually making my case for me: even though Brooks was working on mainframes, those machines were much smaller than they are now, and the poor performance and too big a size were only noticed (and eventually corrected) because of this. Complexity on the other hand they may have had more room that if they were programming for a personal computer. Same thing for the “allocate a buffer then miraculously find memory before we ship the game” story, it’s about artificially constraining memory so even if the projects overflow the arbitrary limits it doesn’t overflow the real one.
As for being late, I don’t think the size of the machine matters at all. I can see projects slipping even on a tiny microcontroller with KiB of memory.
And of course, I don’t think for a minute the people from that era were intrinsically better than we are. But they did live in a different environment, where selected into their position from different criteria, and had a different education. They’ve got to be different. In what ways I don’t know, and here I was focusing on a single aspect of their environment, and how it might have influenced them.
Every generation of programmers looks back at previous generations and thinks “Wow, they must have really cared about performance to come up with those amazing techniques for working in such limited hardware!”
But the current generation of programmers also always thinks “Wow, I sure am glad we have the near limitless resources of this generation of hardware!” And every generation has people who insist that the bounties yielded by Moore’s Law, which were carefully and respectfully made use of by previous generations (who had to care because the hardware forced it on them) are now being wasted by the current generation of profligate developers who have lost the ethos of caring about performance.
In other words, every generation sees their iteration of hardware as a liberation from the straitjacket the previous generation had to program in. But the previous generation always saw their hardware as a liberation from the straitjacket the generation before them had to program in. And on and on, probably all the way back to ENIAC.
Every generation of software expands to fill the corresponding generation of hardware. Thus it has always been. Thus it always shall be. Every generation of programmers is, on average, average. They tend not to feel the limits of their hardware very much, because to them the current generation is always a big improvement over the last. The software that survives to be studied and recommended for study across multiple generations, however, is anything but average, and using it (or the programmers whose work it was) as representative of its era or of the ethos or job-selection requirements of its era, will lead you astray.
I understand selection bias, but it’s not just that. I have used computers since I was 10, which gave me roughly 30 years worth of memory. I’ve played quite a bit of time with my Dad’s Atari ST, then saw the rise of the IBM PC clones since Windows 95 came out… you get the idea. I remember compiling Gentoo packets on my Core2 Duo laptop, and it took ages, I remember boot times being quite a bit longer than they are since the advent of solid state drives, and of course I’ve seen the ungodly progress we’ve made in real time computer generated graphics.
At the same time though, many programs that gained comparatively little functionality, are just as slow now as they were 20 years ago. My phone slows down just because I update it. I’ve installed pretty much nothing, I regularly wipe old data, but no, memory gets tighter and tighter, applications laggier and laggier… even going back to the home screen, that used to be instantaneous, now often takes more than 10 seconds.
I would somewhat understand if programs went slower and more wasteful as hardware gets faster and bigger. It’s a reasonable trade-off to make, to a point. My problem is when that slowdown outpaces the unreal speed at which hardware improved over the last 30 years I could personally observe. I believe that in many cases, we are way past the point where wasting more computational resources helps us deliver a product faster.
We started out with this:
But there never was a time when this was the case. In every era, programmers are limited by the hardware they work with, but in every era they approach it not as “I must be frugal and responsible, since I have only a few kilobytes of memory”, but as “Wow, I have kilobytes of memory? I can do so much more than before!”
I’ve mentioned this example before, I think to you specifically, but: I like to shop in a local Japanese market. Yet I don’t speak or read Japanese. 20 years ago I’d have been out of luck. Today, I can pull my phone out of my pocket, point it at a label or a package, and it does live text recognition and live translation of that text. I literally am blown away every single time. The devices I have today can do so much more than the ones I had twenty years ago that I simply cannot for the life of me fathom the stance that they’ve gained “comparatively little functionality”.
Back in the day “updates” came on a stack of floppy disks and most people didn’t bother with them. Which is one way to avoid problems from updates!
And then when we started getting updates downloadable over the internet, people started yelling at companies to extend support back further and further, claiming that anything else is “planned obsolescence”. Meanwhile, companies that make software don’t want to be frozen to the capabilities of the oldest supported hardware. So they kludge it, a time-honored tradition reflected in just how old that bit of jargon is, and the result is stuff that runs slower on old hardware.
But the idea that we are in some sort of era where people are uniquely wasteful of hardware resources, or uniquely uncaring about performance, is just completely factually wrong. The average programmer and the average software of 10, 20, 30, 40 years ago were not noticeably better or more frugal or more careful of performance relative to the hardware they had than programmers and software today.
To sum up my thesis:
The actual mindset they had matters much less than what the computers allowed them to do.
I remain unconvinced. The only indisputable fact I see right now is how much more powerful our computers are. We’ve gone from KHz to MHz in a couple decades, that’s 6 orders of magnitude. Such a difference in degree, even gradual, is bound to introduce differences in kind.
I also never pretended older programmers cared more about performance (and, for the KiB computers and less, simplicity). Like everyone else they likely cared first and foremost about making it work. But when your computer is small enough or slow enough, the more wasteful approaches we can afford today simply did not work. The minimum performance bar was just higher, and they met it because they simply had no choice.
Where I disagree with you is that I don’t think there ever was any sort of conscious/deliberate “skill” of carefully and frugally making use of limited resources. There was just software that didn’t do as much as today.
Like, there’s a reason why all those classic games have super-simple low-res graphics and tiny color palettes and had to have extremely simple movement options, etc. And it wasn’t because the programmers who made them had lost some skill of getting more out of the hardware.
The “minimum performance bar” was not higher relative to the hardware of the era. Every era has had some software that was fast for the time and some that was slow for the time and a lot that was just average.
And relative to what the hardware of the time was capable of (which is why I emphasized that in my last comment) lots of software of previous eras really was slow and bloated. Really. Yes, really. There was no Apollo-13-style “poor performance is not an option” stuff going on. There was lots of really awful crappy terrible slow software being written. Tons of memory and tons of CPU cycles wasted.
Please, stop promoting the myth that there ever was anything more to it than that.
I think the point here is that there are relatively “simple” programs which were feature complete (in the eye of the beholder). Programs with similar functionality are nowadays as slow or slower than those programs in the early days. That makes no intuitive sense to the user - if the same old program would be run today, it would be super fast.
It would make logical sense that a newly built program which does the exact same thing nowadays would be much faster, except that’s not typically the case. Newer programming environments offer conveniences to the programmer which are invisible to the user, but do have an additional performance impact.
For example, if the “fast” program back in the day was hand-written in assembly or C, the same program nowadays might be written in Java or Python or what have you. Nobody in their right mind would hand-write large programs in assembly if they have the choice. C is also quickly falling out of fashion.
As another example, a DOS program had the CPU all to itself. A similar program running under Windows or even Linux would have the OS and background processes to contend with, so it would necessarily feel slower.
Does it make sense? On the whole, I am not sure. We (think we) need more and more software, so it does makes sense that we’re able to produce it faster and more efficiently. What user really wants to go back to running everything single-tasking in DOS? What programmer really wants to go back to writing everything by hand in ASM/C (including device drivers)?
I did not mean to say it was conscious or deliberate.
Mostly, yes. Computer games especially, with how they compete for triangles and pixels. But we also have software that doesn’t do much more today than it did 20 years ago or so, and somehow manages to not be any faster. It starts being seriously dated, but Jonathan Blow’s Photoshop example was striking.
A likely important factor is how competitive a given sector is. Games are highly competitive, and an excellent game that lags will be played less than an excellent game that does not. At the other end of the spectrum I suspect Photoshop is almost monopolistic, with a huge captive audience they’d need to seriously piss of before they all move to Gimp or Krita.
The best-selling video game of all time, notorious for the sheer amount of time its players sink into it, is also infamous for its terrible performance, to such a degree that many guides recommend, as one of the first things you do, installing a mod pack whose sole purpose is to try to make the performance into something reasonable.
That game is Minecraft.
And it is not alone, nor anywhere near; the games industry is well known for shipping things that are broken, buggy, slow, resource-hogging and/or all of the above. So much so that it’s spawned endless memes.
And I’m fairly certain this has all been pointed out to you in prior iterations of this debate.
I’m aware that gameplay, time of publishing, and marketing, affect game sales. I reckon they do significantly reduce the selection pressure on criteria such as graphics quality, loading times, input lag, and frame rate.
They do not eliminate that pressure though. Not as effectively as a near-monopoly or vendor locking would.
Your argument would require that we somehow see a higher standard of performance coming from game developers.
The empirical reality is we don’t. Game dev is not some special realm of Performance-Carers. Nor were programmers of previous eras particularly concerned – relative to the capabilities of their hardware, they wrote plenty of software that was as slow as the things you complain about today.
You really, really need to learn to accept this, ditch the mythologizing, and move on.
You keep painting my comments as if I was saying some groups of people were more virtuous than others. I keep insisting that different groups of people are subject to different external constraints.
Let’s try an analogy with cars. If fuel prices shoot through the roof people will quickly start to pay real close attention to fuel efficiency before buying a new car, creating a market pressure that will force manufacturers to either produce more efficient cars, go out of business… or form a cartel.
Because the underlying message always is that certain groups “care” about “performance”. This isn’t the first time we’ve gone round and round on this.
And again, the simple empirical fact is that in every era there are people like you who complain about a lost art of performance and simplicity. What looks to us, now, like constraints that developers of the past must have had to come up with explicit strategies for, were to them at the time rarely perceived as constraints at all, because they felt liberated from how constrained the hardware of their recent past was.
For like the fifth time now across these threads: the empirical reality of the game dev industry is that they do not seem to feel performance is a constraint upon them. Games with horrid performance and resource usage are put out all the time and succeed in the market. Minecraft, which did so long before it was bought out, is a great example of this and also of the fact that no “cartel” is necessary to artificially protect games that have poor performance.
Not my message. Perhaps initially, but I got your point. Please don’t put words in my mouth.
Of course not. I wouldn’t dare even suggest such a thing.
That really depends on the game.
Of course, you could come up with 10 times as many examples where performance matters so little you’d go to have out of your way to make it lag (typical of most skinner boxes for palmtops). There’s no unified “game industry”, just like I’m increasingly realising there’s no “embedded industry”: it’s a collection of sub-niches, each with their own constraints.
Still, a significant number of those niches absolutely feel performance constraints.
The fact that you’re not even able to talk about this without needing to resort to loaded/judgmental language is a pretty big deal. Oh, see, the True Game Devs™ really do feel the constraint and really do care about performance – it’s those developers of “palmtop” “skinnerboxes” who are being inappropriately held up as counterexamples.
Meanwhile the catastrophic-failure-of-the-week in game dev is Cities: Skylines II, which was apparently close to functionally unplayable on launch day due to abysmal performance. That’s not some rando mobile gacha “skinnerbox”, that’s a big-time game from a big-time publisher.
It really is time to just let it go and admit that in every era and in every field of programming the average programmer is average, and there was no special era and is no special field in which the Performance-Carers were or are uniquely clustered.
Photoshop is dealing with constraints that a game (even an AAA game) doesn’t have - like backwards compatibility, and safeguarding the user’s input against data loss and corruption. It’s not a complete explanation for PS’ perceived performance issues, but stuffing everything into the framework of “how fast can you sling stuff onto the screen” is one-dimensional at best.
Still, I don’t see the link between that and the nearly 100-fold increase in boot times.
Backward compatibility doesn’t explain anything here: we’re loading a modern image format, not a Photoshop work file. Sure, that image has to be turned into a Photoshop-specific internal representation, but that internal representation doesn’t need to follow any kind of backward compatibility; that’s a concern when (auto)saving the work, not when loading it from a JPEG. Likewise, what data loss or corruption can happen during boot?
Having a lot of features is not a reason to be slow. I can perhaps forgive video games their even slower boot times (The Witness, Factorio), but those load much more than code into memory: they also have a ton of assets they chose to load right away instead of streaming during the game. What’s the Photoshop equivalent?
That’s a question for many programs, by the way: how much data does a program need to load into memory before it can reach an operational state? My guess is that if it takes over 10 times as long as the equivalent fread(3) call, there’s a huge margin for improvement.
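As a rough sketch of the baseline I have in mind (the file name and chunk size are made up, and the timing uses POSIX clock_gettime), something like this measures how long it takes to merely pull the bytes off disk:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const char *path = "huge_image.jpg";    /* stand-in for whatever the app loads */
    FILE *f = fopen(path, "rb");
    if (!f) { perror("fopen"); return 1; }

    enum { CHUNK = 1 << 20 };               /* read 1 MiB per fread call */
    char *buf = malloc(CHUNK);
    if (!buf) { fclose(f); return 1; }

    size_t total = 0, n;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while ((n = fread(buf, 1, CHUNK, f)) > 0)   /* just read, do nothing else */
        total += n;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("read %zu bytes in %.3f s\n", total, secs);

    free(buf);
    fclose(f);
    return 0;
}
```

If the program then takes more than 10 times that number to reach an editable state, the bottleneck is in the code, not on the disk.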
I was recently playing an AAA game which had horrendous load times on my platform (Xbox One), so much so that I usually started the load, then did stuff like emptying the dishwasher while it reached the state where I could start playing. Once it was loaded though, the experience was fast and bug-free.
I’m not a heavy user of Adobe’s products, but it’s possible that their users are fine with a long start-up time as long as the flow within the program is smooth and non-laggy. You start up the application in the morning while fixing coffee, then work through the day.
Strange, I’ve heard game consoles usually have certification processes that are supposed to limit load times. Seems I’m not up to date on that.
Except in Blow’s demonstration in 2017, it was not. The main drop-down menu took a full second to appear, and not just the first time, so whatever it did wasn’t even cached. It’s but one example, and I expect other parts of Photoshop were snappy and reactive, but if something as basic as the main menu lags that much, we’re not off to a good start.
I think this is one of the places where I lean towards a stronger form of the weak linguistic-relativity hypothesis for programming languages. I spent a lot of time writing Objective-C, and I found that, in that language, 90% of what I wrote ended up being things that could be pulled into different projects or repackaged as libraries. This is not true of any of the code I wrote before learning Objective-C and, especially, the OpenStep libraries, in spite of the fact that Objective-C was something like the 10th language I learned. Having started writing code in that style, I tend to bring some of the same concepts (modularity, loose design-time coupling) to other languages.
It’s worth noting that, at around the same time as inventing Objective-C, Brad Cox wrote an essay (that I seem unable to find today) with the title ‘What if there is a silver bullet and the competition gets it first?’, in which he strongly advocated for modularity as a key design idea. He proposed designing two categories of languages: lower-level ones for implementing reusable components, and higher-level ones for wiring those components together.
His idea for Objective-C was as a language for packaging C libraries up into a way that Smalltalk-like languages could use them easily. These days, a lot of interesting projects are using Rust as the language for writing components and Python as the language for integrating them.
I think there is a case for figuring out the right amount of breaking-things-down, but I also think we are nowhere close to figuring out what that right amount is or how to go about it. And systems and languages that prize “modularity” seem to turn into intractable messes of abstraction more often than they turn into paragons of easy-to-understand simplicity.
The complexity is usually imposed by the customer. I can’t count the number of times I had a great idea for speeding up and simplifying some code, only to discover some pesky requirement was cutting off that particular approach.
That’s why code design should be a two-way communication between the business and the implementor. There are cases where the business side over-specifies something for no good reason, making an efficient implementation impossible. But of course, there are legitimate cases where it’s just a hard requirement.
There’ve been many cases of a “must have” hard requirement which we implemented and then it turned out nobody used it. Customers often don’t know what they need either. Or sometimes a feature is about prestige, like the boss’s spouse would love this so there’s no way it’s going to be axed regardless of how little sense it makes.
Yes! A thousand times this!
The primary barrier to writing large-scale software is complexity. I see developers focus on everything else, like efficiency and local readability and local robustness, which, yes, are important too. But they don’t focus on the interfaces between things, and some don’t even seem to really grok what an interface between components is, so the user of any component needs to understand that component’s details to use it. They don’t realize that there are a dozen ways of writing any chunk of functionality, and that some will be less than half the size and complexity of others. And then there’s 100,000 lines of code, and, lo and behold, it doesn’t work quite right in the edge cases, and it takes days to debug things.
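To make “interface between components” concrete, here’s a toy sketch (the names are invented, not from any real codebase): the header below is the entire contract, and a caller never needs to know how the thing behind it works.

```c
#include <stddef.h>

/* Opaque handle: the struct's fields are only visible inside the component. */
typedef struct Cache Cache;

Cache *cache_open(const char *path);                                    /* NULL on failure */
int    cache_get(Cache *c, const char *key, void *buf, size_t buflen);  /* 0 on success */
int    cache_put(Cache *c, const char *key, const void *val, size_t len);
void   cache_close(Cache *c);
```

Everything about eviction, file formats, and locking lives behind that boundary; when the boundary leaks, every caller ends up re-learning those details, and that’s where the 100,000-line debugging sessions come from.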
This week famous YouTuber Hank Green talked about his bout with cancer (link in the doobldedoo). He discussed how people are frustrated that we haven’t cured cancer; how cancer isn’t one thing, but actually many distinct things; and how we have actually made progress on individual types of cancer. He says we likely won’t ever cure “cancer” because cancer is a category of things, not a single thing.
Similarly, building software isn’t a singular activity. It’s actually a lot of different activities. Finding a silver bullet that improves the efficiency of all programming endeavors 10x is either very difficult or near impossible. Rather than chasing efficiency across the board, we likely need to focus on particular domains in programming. To that end, I was happy that @ubernostrum mentioned tools like Ruby on Rails and Django.
Ultimately, I am reminded of Proebsting’s Law:
Compiler advances double computing power every 18 years.
If you want 10x improvements, you will need to analyze your domain. Otherwise, the improvements seem to come fairly slowly.
Whenever talk of silver bullets comes up, I feel like people discuss it as though most people were operating in a Pareto-optimal fashion. Rather, we are all probably operating at some level of inefficiency. To me that means the focus should shift towards distributing previously realized gains. Rather than thinking about a technology that would 10x someone operating at optimal efficiency, we should instead look towards people operating far below the optimal curve and bring them closer to the edge. This means better education, better documentation, better tooling.
But just like I said in the first section, it’s easy to talk in a monolithic abstract, but we’re really talking about many subdomains. Nothing is ever easy.
Consider this passage from the original essay:
One activity that can result in a high degree of inefficiency is communication. The number of two-way communication pathways in a group of n people is n(n-1)/2, which grows quadratically. This strikes me as an important source of accidental complexity in large projects. Crucially, this is not technological, but organizational. This provides important context to many engineers’ career paths. Staff+ engineers spend more and more time communicating because this is where the real gains are in large organizations.
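For concreteness (this is just the standard pairwise count, not a figure from the essay):

$$\binom{n}{2} = \frac{n(n-1)}{2}, \qquad n = 10 \Rightarrow 45 \text{ pathways}, \qquad n = 50 \Rightarrow 1225 \text{ pathways}$$

So a team that merely quintuples in size multiplies its coordination surface by roughly 27 times.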
I have started thinking more about knowledge as a graph. Communication on topics that require a few node jumps in the graph tends to be more difficult. Discussing a topic one node away can lead to breakthroughs. Ensuring a team has context on the relevant nodes in the graph can lead to fruitful conversations. Failing to do so leads to misunderstandings and frustration.
Unfortunately, accurately judging peoples’ “knowledge graphs” is very difficult. YMMV
To wrap up a long post:
It’s worth noting (and directly supports your point) that this is a decidedly non-linear process. For example, autovectorisation with SSE gave a factor of four speed up for some code, no benefit at all for a lot of things, and a slowdown for a few things. Vectorisation has improved quite a lot, but the best-case speed up is still a factor of four (sometimes a bit more from reduced register pressure) for 128-bit vectors with 32-bit scalars. Similarly, improvements in alias analysis allow kinds of code motion that have a big impact on some code structures and no benefit on others.
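A hedged sketch of the two buckets (the function names are invented, and whether any particular compiler vectorises either loop depends on flags and target):

```c
#include <stddef.h>

/* Usually autovectorisable: independent iterations, and the restrict
 * qualifiers hand the compiler the aliasing guarantees it needs.
 * With 128-bit SSE registers that's 4 floats per instruction. */
void scale_add(float *restrict dst, const float *restrict a,
               const float *restrict b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = 2.0f * a[i] + b[i];
}

/* Usually not vectorised: each iteration depends on the previous one
 * (a loop-carried dependency), so there is nothing to run in parallel
 * without changing the floating-point result. */
float running_blend(const float *a, size_t n)
{
    float acc = 1.0f;
    for (size_t i = 0; i < n; i++)
        acc = acc * a[i] + 1.0f;   /* acc feeds the next iteration */
    return acc;
}
```

The restrict qualifiers do by hand the job that improved alias analysis does automatically: without some proof that dst doesn’t overlap a or b, the compiler either inserts runtime checks or gives up on the transformation.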
The message bus feels like an implementation detail of a deeper thing, which is observables. Just like MVC is what happens when you insist that you should be able to add and remove and adjust different views without having to teach every other view about the changes, I feel like the author is asking for the same decoupling where I can hook into activity in my system.
But then you run into 1) consistency/transactions and 2) how do you avoid wiring up loops in your system of observables that can send your system into oscillating or exploding states.
Another way of going after this is in the relational model. Like observables in MVC decouple changes in views from each other, relations decouple aspects of entities from each other. I can in theory write triggers on one table that cause these kind of secondary effects. This can solve the consistency/transaction issue (though it introduces all kinds of fun new problems of teams clobbering each other), but doesn’t really help with oscillation.
Re: 2, one answer is to focus on the goal of achieving state synchronization rather than transmitting state change. Doing so yields eventually-consistent protocols as a corollary. You still have to be careful which parties in your system are authoritative for any given bit of state, but at least consistency and eventual termination are taken care of.
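A minimal sketch of that idea (my own toy example, not something from the thread): a last-writer-wins register where each party merges whole states rather than applying change events. Because the merge is idempotent and commutative, duplicated or re-ordered messages still converge.

```c
#include <stdint.h>
#include <string.h>

/* A last-writer-wins register: the full state is (timestamp, node, value). */
typedef struct {
    uint64_t ts;        /* logical or wall-clock timestamp of the last write */
    uint32_t node;      /* tie-breaker so equal timestamps still converge */
    char     value[64];
} LwwRegister;

/* Merging remote state is idempotent and commutative: hearing the same
 * state twice, or hearing states out of order, yields the same result. */
void lww_merge(LwwRegister *local, const LwwRegister *remote)
{
    if (remote->ts > local->ts ||
        (remote->ts == local->ts && remote->node > local->node))
        *local = *remote;
}

void lww_set(LwwRegister *r, uint64_t now, uint32_t node, const char *v)
{
    r->ts = now;
    r->node = node;
    strncpy(r->value, v, sizeof r->value - 1);
    r->value[sizeof r->value - 1] = '\0';
}
```

The “which party is authoritative” caveat shows up as the timestamp and tie-breaker policy: the system converges, but it converges to whatever that policy says wins.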
[ETA the “deeper thing” you mention is on the money IMO. I’ve been following that line of thinking for a while now, and it’s been reasonably rewarding so far. I’m still cautiously optimistic about my own vaguely silverish bullets…]
How do you handle concurrency/liveness bugs if updates aren’t all synchronous?
Depends on whether it’s something that actually matters.
I’ve worked in highly-regulated fields where you just had to hold your nose and write a “god method” that knew about everything and did everything because the rules required that you do a dozen things all in one atomic operation.
I’ve also worked in fields where the requirements were a lot more lax. Sometimes it’s “just make sure things are consistent by the time a user will notice they aren’t”, for example, and that time window can be surprisingly large :)
More seriously, though, you don’t have to do a message bus as an external/asynchronous thing – you can make events that are synchronous/blocking and run in the same process. That lets you experiment and get some of the benefits of the decoupling without having to introduce the new problem set of going all-in event-driven microservice buzzword buzzword. Then you can gradually adopt fully async stuff if and when you feel a need for it.
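A sketch of what that in-process version can look like (all names invented): subscribers register plain function pointers, and publish is just a loop of ordinary blocking calls, so every handler has finished before it returns.

```c
#include <stdio.h>
#include <string.h>

#define MAX_SUBS 16

typedef void (*Handler)(const void *payload);

typedef struct { const char *topic; Handler fn; } Subscription;

static Subscription subs[MAX_SUBS];
static int sub_count = 0;

/* Register interest in a topic. */
static void subscribe(const char *topic, Handler fn)
{
    if (sub_count < MAX_SUBS) {
        subs[sub_count].topic = topic;
        subs[sub_count].fn = fn;
        sub_count++;
    }
}

/* Publish is synchronous: every subscriber has run before this returns. */
static void publish(const char *topic, const void *payload)
{
    for (int i = 0; i < sub_count; i++)
        if (strcmp(subs[i].topic, topic) == 0)
            subs[i].fn(payload);
}

/* Example subscribers for a hypothetical "order.placed" event. */
static void send_confirmation(const void *payload) { printf("email for order %s\n", (const char *)payload); }
static void update_inventory(const void *payload)  { printf("inventory for order %s\n", (const char *)payload); }

int main(void)
{
    subscribe("order.placed", send_confirmation);
    subscribe("order.placed", update_inventory);
    publish("order.placed", "1234");
    return 0;
}
```

Swapping publish() for something genuinely asynchronous later only changes that one loop; the subscribers don’t need to know.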
And sort of going in another direction from my other reply, I think honestly it probably does good to get people thinking about distributed-system problems as early as possible, since very often we’re building and using distributed systems even if we don’t realize it – anything that has both a DB server and a cache, for example, or that has to talk to any sort of stateful external API.
Plus there are times when it’s nice to have the flexibility of how to handle failure. Running with the example from the post, very often if there’s a bunch of downstream stuff that has to happen any time you place or cancel or modify an Order, it’s better to have the ability to display a success message to the end user even if one of those downstream things failed, because that should be your problem to solve, not theirs (and in the synchronous all-or-nothing world it gets dumped out as their problem via an “Oops, something went wrong” error message). And of course you can also do patterns that let you notify the user of a problem happening downstream of what they thought of as their action, but being able to hide the internal implementation details from them is a nice option to have.

I think Brooks’ separation of software creation into activities that are “essential” or “accidental” is somewhat difficult to understand. I instead prefer to say that we focus almost entirely on the processes, practices and technologies impacting the non-functional requirements of a software endeavor. Success in better modeling a complex system or problem in terms of its functional requirements simply seems to be too hard a problem, so let’s spend all effort on technologies, languages, infrastructure and tools. Nothing beyond (ever more decomposable) use cases or stories implemented using functional decomposition and data modeling is ever considered. Instead, all attention is on that silver-colored killer tech, language or paradigm that is the answer.
I’ve never found it particularly difficult to get, but I also come from the background of a degree in philosophy, so it may just be that I was already used to that sort of distinction expressed with exactly that terminology.