One gripe I have with how we talk about “no silver bullet”:
> There is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity.
That doesn’t happen in any field. When order-of-magnitude improvements happen, it’s due to a bunch of different technological innovations that were all in progress for a long time, often interplaying with each other. Even Brooks himself admits this!
> A disciplined, consistent effort to develop, propagate, and exploit them should indeed yield an order-of-magnitude improvement. There is no royal road, but there is a road.
I forgot about that second part, thanks for the reminder.
I also have an analogy of my own: while we may not have a silver bullet, we can most probably gather enough silver dust to forge one bullet or three. Yes, I think that up to three orders of magnitude of improvement, compared to current practices, is attainable at least in principle. For instance:
The STEPS project demonstrated the ability to make a home computing system in 20K lines of code, including compilation toolchain, rendering & compositing, desktop publishing, network communication, drawing, spreadsheets… all while traditional systems (GNU/Linux, Windows, MacOS, and their associated software) approach or exceed 200 million lines of code to do the same, or 4 orders of magnitude more. Now STEPS does not include the kernel, but…
Casey Muratori pointed out in The Thirty-Million-Lines Problem that while kernels currently (well, in 2015) weighed in at around 15 million lines (only a fraction of the aforementioned 200M lines), if hardware vendors got their act together (or, I would add, were forced to by regulation) and agreed on reasonable standard interfaces (ISAs) for their hardware and published the damn specs (all those webcams, wifi modules, USB thingies, and of course those fucking graphics cards), we could go back to having small kernels that would only require 20K lines of code or so.
Casey Muratori also pointed out that performance really got the short end of the stick, with programs that routinely run 3 orders of magnitude slower than they could, to the point that users can actually notice it, despite the ludicrous performance of our computers (and I would include the weakest smartphone produced in 2023 in that list). Not only that, but there is low-hanging fruit we could pick if we knew how. (A note on refterm being 3 orders of magnitude faster than the Windows terminal: a Windows terminal contributor confirmed to me that those speeds are achievable by the Windows terminal itself; it’s just a ton of work, and they’re working towards approaching it.)
Back before my day, the Oberon system, by Niklaus Wirth (inventor of the Pascal, Modula, and Oberon languages), was used at his university to do actual secretarial and research work for a few years, and the entirety of the OS required no more than 10K lines of code. With a language arguably much less expressive than STEPS’ languages, on a computer orders of magnitude weaker. This does include the kernel, and the hardware description itself takes up about 1K lines of Verilog.
@jerodsanto, instead of repeating yet again that there is no silver bullet in a way that is most likely to have us abandon all hope, I believe it would help more to highlight that massive improvements, while far from free, are absolutely possible.
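For scale, here is a quick sketch of the arithmetic behind the orders-of-magnitude comparisons above, using the round numbers quoted in this comment (ballpark figures, not precise counts):

    # Rough scale of the line-count comparison; the numbers are the ones quoted above.
    import math

    steps_loc = 20_000              # STEPS: whole personal-computing system, kernel excluded
    oberon_loc = 10_000             # Oberon: whole OS, kernel included
    conventional_loc = 200_000_000  # a mainstream OS + application stack, per the comment

    ratio = conventional_loc / steps_loc
    print(ratio, math.log10(ratio))          # 10000.0 4.0 -> the "4 orders of magnitude"
    print(conventional_loc / oberon_loc)     # 20000.0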
> Casey Muratori pointed out in The Thirty-Million-Lines Problem that while kernels currently (well, in 2015) weighed in at around 15 million lines (only a fraction of the aforementioned 200M lines), if hardware vendors got their act together (or, I would add, were forced to by regulation) and agreed on reasonable standard interfaces (ISAs) for their hardware and published the damn specs (all those webcams, wifi modules, USB thingies, and of course those fucking graphics cards), we could go back to having small kernels that would only require 20K lines of code or so.
That presupposes that an operating system kernel needs to know everything about the computer’s hardware and serve as an intermediary for all communication with it. That isn’t true; the kernel could limit itself to scheduling and orchestrating access. Then the diverse hardware could be handled by a similarly diverse set of separate daemons.
That’s a better approach than making a monolith that has to handle everything and then complaining that everything is too much.
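A minimal sketch of that split, for illustration only (the device name, message format, and queue-based IPC below are invented for the example, not any real kernel API): the “kernel” merely schedules and routes requests, while the device-specific knowledge lives in a separate daemon process.

    # Toy illustration: a "kernel" that only routes, plus a per-device daemon.
    from multiprocessing import Process, Queue

    def wifi_daemon(requests: Queue, replies: Queue):
        # Stand-in for a user-space driver: the only place that "knows" the device.
        while True:
            msg = requests.get()
            if msg is None:          # shutdown sentinel
                break
            replies.put(f"wifi: sent {len(msg)} bytes")

    def kernel(user_msgs, requests: Queue, replies: Queue):
        # The kernel never touches the device; it only forwards requests and replies.
        for msg in user_msgs:
            requests.put(msg)
            print(replies.get())
        requests.put(None)

    if __name__ == "__main__":
        req, rep = Queue(), Queue()
        daemon = Process(target=wifi_daemon, args=(req, rep))
        daemon.start()
        kernel([b"hello", b"world"], req, rep)
        daemon.join()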
I reckon your suggestion has benefits. Done well, I expect it will increase reliability and security, probably without even sacrificing performance: computers are so fast nowadays that having components inches apart makes them distributed systems to begin with.
But it wouldn’t really help the point I was making, unfortunately. We need those drivers one way or another. To send you this comment I need, at some point, to talk to the Wi-Fi module in my laptop. If there are a gazillion different Wi-Fi modules out there, we’ll collectively need a gazillion drivers. So what happens in practice is that hardware vendors, instead of giving us the specs, write a (proprietary) driver for the single most popular OS (or two if we’re lucky), and then let the other OSes fend for themselves. With a standard ISA for all Wi-Fi modules, we could have one driver per OS, done.
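Back-of-the-envelope for that claim (the counts below are invented round numbers, not market data): without a shared interface, driver count scales with chipsets × OSes; with one, it scales with OSes alone.

    wifi_chipsets = 200     # hypothetical number of distinct Wi-Fi modules
    operating_systems = 5   # hypothetical number of OSes worth supporting

    drivers_without_standard = wifi_chipsets * operating_systems  # one per (chipset, OS) pair
    drivers_with_standard_isa = operating_systems                 # one per OS, shared interface

    print(drivers_without_standard, drivers_with_standard_isa)    # 1000 vs 5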
> Casey Muratori pointed out in The Thirty-Million-Lines Problem that while kernels currently (well, in 2015) weighed in at around 15 million lines (only a fraction of the aforementioned 200M lines), if hardware vendors got their act together (or, I would add, were forced to by regulation) and agreed on reasonable standard interfaces (ISAs) for their hardware and published the damn specs (all those webcams, wifi modules, USB thingies, and of course those fucking graphics cards), we could go back to having small kernels that would only require 20K lines of code or so.
They’re not going to. They are simply never going to do that. Take it as a constraint and move on.
More to the point, this reeks of “We could have a Good, Moral, Simple System if All You Zombies acted right and accepted something that was easier for us to code!” The most successful variant of that basic philosophy was the original 128K Macintosh, and even Apple eventually had to bow to the real world and add enough functionality to keep the IBM PC and clones from steamrollering them utterly.
There’s room to simplify and improve, but I’m allergic to being strait-jacketed into someone else’s idea of a Moral System just so an implementer can brag about how few lines of code they needed.
> They’re not going to. They are simply never going to do that. Take it as a constraint and move on.
I guess they’re not. Though one argument Casey made there was that it might actually benefit the first hardware company that does it a great deal. One reason they don’t is that it’s different from the status quo, and different is risky.
> There’s room to simplify and improve, but I’m allergic to being strait-jacketed into someone else’s idea of a Moral System just so an implementer can brag about how few lines of code they needed.
Not sure exactly what straitjacket you’re talking about, or how moral it really is. ISAs aren’t all that constraining; they didn’t stop Intel and AMD from updating their processors (they do pay the x86 decoding tax, though). Also, the problem is less the complexity of any given ISA, and more the fact that there are so many different ISAs out there. A big driver of the near-insurmountable amount of code in kernels is that you need so many different drivers.
There is one thing, however, that I would have no qualms writing into law: when computer or electronic hardware is sold, all interfaces must be documented. The entire ISA, all the ways one might modify the operation of the device, everything one might need to use the hardware, up to and including writing a state-of-the-art driver. No specs, no sale. Hardware vendors do write their own drivers; it’s not like they don’t already have their internal specs.
Hardware vendors are occasionally forced by regulation to agree on standard interfaces. It’s wrong to assume that corporations always get their way and are never democratically regulated.
I agree that we’re leaving tons of performance on the table, but I don’t agree with the implication that it’s some kind of moral failing. People optimize towards a goal. When they reach it, they stop optimizing. A purpose-built system will always be simpler and faster than a general-purpose system, but general-purpose systems can solve problems not anticipated by people higher up the stack. It’s all a trade-off. As Moore’s Law dies off, we’ll see people start optimizing again, because they can’t just wait around until hardware gets better, and we know the kinds of things we’re building towards now, so we can be a little less general and a little more purpose-built.
Actually I’ll do more than imply the moral failing, I’ll outright state it: it is at some point a moral failing, especially when we have a non-trivial number of users who would like less drain on their battery, or would like to wait less on their software, or would like to suffer fewer bugs and inconveniences. What we might see as wasting a few seconds a day accumulates to much, much more when we have many users.
That, and more performance would allow us to get away with weaker, more economical, more durable, more ecological hardware. The computer and electronics industry is one of the most polluting out there; reducing that pollution (for instance by not expecting users to upgrade their hardware all the time) would be nice.
We might be trading off other things for that performance, but to be honest, even there I’m not sure. Writing reasonably performant software doesn’t require much more effort, if any, than writing crap. I see it every day on the job: we can barely achieve simplicity. If we did, everything would be easier in the long run, including performance.
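To put rough numbers on the accumulation argument above (the user count and per-user waste are hypothetical round figures, not measurements):

    users = 10_000_000                   # hypothetical install base
    wasted_seconds_per_user_per_day = 5  # hypothetical per-user daily waste

    total_seconds = users * wasted_seconds_per_user_per_day
    human_years_per_day = total_seconds / (365 * 24 * 3600)
    print(f"{human_years_per_day:.1f} human-years of waiting, every single day")  # ~1.6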
> Actually I’ll do more than imply the moral failing, I’ll outright state it
Then perhaps Casey (and you) could direct some ire at the field of game dev. I know Casey and his fans like to promote game dev as the one true field where We Care About Performance™, but as I always point out that’s just not the truth.
And to prove it’s not the truth, I generally just go over to reddit, pull whatever the latest big game release is, and look at a few threads. Apparently right now it’s The Lord of the Rings: Gollum, which benchmarks atrociously even on ultra-high-end gaming hardware, and based on comment threads isn’t really any better on console (which is where the motte-and-bailey argument about game dev often retreats to), suffering degraded graphics in order to keep up any semblance of performance. One thread I glanced at included the savage comment “The Lord of the Rings books had better graphics”.
None of the people involved in pumping out these games seem to think that performance is a moral imperative, or that it’s morally wrong to waste cycles or memory. Yet so often it’s all the rest of us who are supposed to take lessons from them.
So I’ll outright state it: if I took to heart the way game dev does performance, I’d get fired. Casey and friends should spend some time on the beam in their own eyes before complaining about the mote in mine.
> I know Casey and his fans like to promote game dev as the one true field where We Care About Performance™
His fans, perhaps. Casey himself, I actually don’t know. He happens to work in the game industry, but he’s on the record about being in it for the technical challenges, not really the games themselves. And if we’re going to scold him for not practicing what he preaches, we should take a look at what he actually did: the Granny animation system, whatever he did for The Witness… Similarly, I would look at what Mike Acton participated in when he worked at Insomniac Games, or what he managed to contribute to Unity.
About games wasting way too many cycles, I know the list is long. Off the top of my head, I personally played Supreme Commander, which I think demanded way more from the hardware than it had any right to, and Elite Dangerous (in VR), whose Odyssey update had unplayable performance for a while, and is still reputedly quite crap.
I would also like to know why Factorio and The Witness take so long to boot. Factorio’s devs care a great deal about performance (and improved it by a couple of orders of magnitude over the years), and Jonathan Blow ranted in 2017 about Photoshop boot times, so I guess there must be some explanation? Especially from Blow: I’d like to know why his boot times are somehow acceptable when Photoshop’s are not.
No sources on me rn, but iirc computer hardware is actually low on the pollution totem pole. Things like concrete manufacturing, steel manufacturing, agriculture, and cars swamp everything else.
I can concede that. Computer stuff being really polluting is something I heard somewhere, I don’t remember where, and trusted at the time. Maybe I shouldn’t have. I also recall some charts and numbers on greenhouse gas emissions (from Jean-Marc Jancovici), and for those no single sector seemed to dominate any other: we need to reduce everything all around.
I do suspect, however, that the mindset responsible for computer-related waste is also responsible for much of the waste pretty much everywhere else as well.
Where could one read about the STEPS project? The name is quite ungoogleable.
Oh, sorry, here it is:
It’s such a high bar. “Oh, this tool only doubles your productivity? That’s not an order of magnitude, so not a silver bullet.”
People really miss the point of the paper. Brooks’ argument was that programmers could make an order of magnitude difference and tools wouldn’t. People joke about 10x programmers but then quote the tool part of the paper.
https://brianmckenna.org/blog/softwerewolves
Are the 10x developers the werewolves, or… where do werewolves come in? I’m confused.
Edit: Oh, silver bullets… I’m still confused, though. The werewolves are… complexity in software?
you have to understand, at the time the book was written, science fiction and fantasy fandom was not yet mainstream, and a realist, detail-oriented taxonomy of every fictional concept was not as far along. it feels silly saying it that way but… I get the feeling it might be hard to imagine what the world was like prior to the cultural victory of a certain segment of fandom, so I’m doing my best to describe what has changed.
this is all by way of saying that it wasn’t intended to be a fully worked-out metaphor. silver bullets are a thing that sounds cool. nobody cared that it’s specifically werewolves they’re usually discussed in relation to. that’s all there was to it.
and yes, software complexity was the obstacle being discussed
It’s in the original paper.
> Of all the monsters who fill the nightmares of our folklore, none terrify more than werewolves, because they transform unexpectedly from the familiar into horrors. For these, we seek bullets of silver that can magically lay them to rest.
> The familiar software project has something of this character (at least as seen by the non-technical manager), usually innocent and straightforward, but capable of becoming a monster of missed schedules, blown budgets, and flawed products. So we hear desperate cries for a silver bullet, something to make software costs drop as rapidly as computer hardware costs do.
Ah, so I was confused because I hadn’t read the original paper being commented on and was only replying to the URL of @puffnfresh’s blog post. Serves me right, but I apologize for wasting y’all’s time with my confusion!
That always has been my assumption.
No, but there are significant advances when you don’t need to control everything.
E.g. Rails vs Flask has to be a 10x difference in productivity if you can adopt and abide by Rails conventions.
Rails isn’t a silver bullet in the extreme sense of the term, but it is a 10x improvement for a large portion of applications (mostly form-based, working within average performance requirements, with a relational data model).
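As a sketch of the “plumbing you write yourself” side of that comparison (a minimal Flask example of my own; the Post resource and its fields are invented): routing, validation, and persistence are all hand-rolled here, whereas the Rails claim is that conventions and scaffolding generate most of this for you.

    from flask import Flask, jsonify, request
    import sqlite3

    app = Flask(__name__)

    def db():
        conn = sqlite3.connect("posts.db")
        conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)")
        return conn

    @app.route("/posts", methods=["POST"])
    def create_post():
        data = request.get_json(silent=True) or {}
        title = data.get("title", "")
        if not title:
            return jsonify(error="title required"), 400   # hand-written validation
        conn = db()
        cur = conn.execute("INSERT INTO posts (title) VALUES (?)", (title,))
        conn.commit()
        return jsonify(id=cur.lastrowid, title=title), 201

    @app.route("/posts")
    def list_posts():
        rows = db().execute("SELECT id, title FROM posts").fetchall()
        return jsonify([{"id": i, "title": t} for i, t in rows])

    if __name__ == "__main__":
        app.run()  # then POST/GET http://127.0.0.1:5000/posts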
I believe this is a category error.
If you’ll allow me to flail around a bit: I believe that, given the right kind of coding style, software development can be a form of Curry–Howard correspondence, where the coder’s understanding of the problem domain, reflected in the code, provides a rigorous, testable definition of that domain.
This means that programming can be a form of applied philosophy. Begin with people bullshitting, transition to semi-formal colloquial phrasing, then eventually complete the transition to a testable and reproducible form of domain exploration, i.e., science.
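As a toy illustration of the Curry–Howard idea above (my own example, written in Lean): a proposition is a type, and a program inhabiting that type is a proof of it.

    -- Proposition: if A holds and A implies B, then B holds.
    -- The "proof" is literally a small program of the corresponding type.
    theorem modus_ponens (A B : Prop) (a : A) (h : A → B) : B :=
      h a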
Because of a lot of things, but especially the social-cognitive-performative nature of language, we’re never going to have a system of creating new sciences. If that were ever to happen, new forms of science discovery would be effectively closed off to humans. Such a tron-like world doesn’t track with me.
tl;dr - round hole, meet square peg. Qualities of hammers are not important in this discussion.