Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence plan to throw one away; you will, anyhow.
– Fred Brooks, The Mythical Man Month
I can attest to the advice in this article being extremely good, almost unreasonably so. Pretty much all of the best software I’ve written has been a ‘version 2’ after spending days/weeks/months exploring the design space and then throwing it all away to start again. A few personal examples of this:
Veloren’s first engine was our first attempt at writing a game engine in Rust. It superficially worked, but suffered from many missteps and was plagued by instability, deadlocks, latency, and an abysmal concurrency model. After 9 months of development we ditched it and started from scratch. We took all of the lessons we learned writing the first one and we’ve never looked back. The new engine scales extremely well (better than almost every other voxel game out there, thanks to its highly parallel design built on top of an ECS and careful attention to data access patterns), is easy to work on, is conceptually simpler, is much more versatile, and uses substantially fewer resources.
chumsky, my parser combinator library, had a relatively mundane and hacky design up until I decided to rewrite it from scratch about a year ago. I took everything I learned from the first implementation and fixed everything I could, including completely redesigning the recovery and error prioritisation system. It’s now much more powerful, can parse a far wider set of grammars, and is extremely fast (our JSON parser benchmark can often outpace hand-optimised JSON parsers).
Tao, my functional programming language (and compiler), went through several revisions that allowed me to explore the best way to design the various intermediate representations and the type solver. The type solver (which supports HM-style inference, generalised algebraic effects, generics, typeclasses, associated types, and much more) is without a shadow of a doubt the single most complex piece of software I’ve ever written, and writing it without it collapsing under its own complexity was only possible because I’d already taken several shots at implementing it before, before consciously starting afresh.
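(For the curious: the heart of an HM-style solver is unification. The following is a minimal, hypothetical sketch of that step in Rust, written for illustration only; it is not Tao’s actual implementation, and all names are invented.)

```rust
// Minimal sketch of HM-style unification (illustrative, not Tao's code).
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum Type {
    Var(usize),                // an unknown to be solved, e.g. `?0`
    Prim(&'static str),        // a primitive such as `Int` or `Bool`
    Fun(Box<Type>, Box<Type>), // a function type `a -> b`
}

// The solver state: what we have learned about each type variable so far.
type Subst = HashMap<usize, Type>;

// Chase variable bindings until we reach something concrete or unbound.
fn resolve(ty: &Type, subst: &Subst) -> Type {
    match ty {
        Type::Var(v) => match subst.get(v) {
            Some(bound) => resolve(bound, subst),
            None => ty.clone(),
        },
        _ => ty.clone(),
    }
}

// Make two types equal by extending the substitution, or report an error.
fn unify(a: &Type, b: &Type, subst: &mut Subst) -> Result<(), String> {
    match (resolve(a, subst), resolve(b, subst)) {
        (Type::Var(v), t) | (t, Type::Var(v)) => {
            if t != Type::Var(v) {
                // A real solver would also perform an occurs check here
                // to reject infinite types like `?0 = ?0 -> Int`.
                subst.insert(v, t);
            }
            Ok(())
        }
        (Type::Prim(x), Type::Prim(y)) if x == y => Ok(()),
        (Type::Fun(a1, r1), Type::Fun(a2, r2)) => {
            unify(&a1, &a2, subst)?;
            unify(&r1, &r2, subst)
        }
        (x, y) => Err(format!("type mismatch: {:?} vs {:?}", x, y)),
    }
}
```

Everything beyond this (generalisation, typeclasses, effect rows) is additional bookkeeping layered over the same core, which is part of why a design like this benefits from several attempts.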
I’d argue that conscious prototyping is not simply a nice-to-have but an essential step in the development of any non-trivial software system, and that most systemic development failures have their origins in a lack of prototyping, leaving the development team unaware of the shape of the problem space.
I can’t help but point out that all of your examples are Rust projects. I tend to think that language choice has a big impact on the feasibility of incrementally improving the architecture.
Back when I programmed mostly in Java and C++ (ages ago), I found it extremely hard to make incremental changes to the architecture of my programs. I think that was mostly because these languages forced me to bend over backwards: in the case of Java, due to how inflexible the language is, and in the case of C++, because the concern of manual memory management permeated every design decision. The trouble with bending over backwards is that you aren’t left with much room to bend any further, so any foundational architectural change means mostly a complete rewrite. I suspect that Rust might suffer from the same thing I experienced with C++.
As a counterpoint, I’ve been finding it easy to evolve the program architecture with my shifting understanding since I started writing Haskell full time a few years ago. And that’s been the case even in a half-decade-old code base written by a combination of junior programmers and short term contractors.
All of that said, no language can save you from backwards compatibility baggage. Your API, user-observable program semantics, and old user data lying around all accumulate this baggage over the years, and even Haskell programs grind to a halt trying to juggle all of it. The trouble is, even a total rewrite can’t save you from backwards compatibility…
I don’t think this really has much to do with the language. I tend to write Rust in a heavily functional style anyway: there’s not much Rust I write that couldn’t be trivially transpiled to Haskell. When I talk about complexity and understanding the design space, I’m not talking about trivial syntactic choices, or even the choice of abstractions available within the language. I’m talking about the fundamental architecture of the program: which systems belong where, how data structures are manipulated and abstracted, how the data I care about is represented and partitioned so as to minimise the complexity of the program as it grows, which aspects of the problem space matter, and how the program might evolve as it moves to cover more use-cases. Those factors are largely independent of the language, and even more so for Rust and Haskell, which have extremely similar feature sets.
> Back when I programmed mostly in Java and C++ (ages ago), I found it extremely hard to make incremental changes to the architecture of my programs. I think that was mostly because these languages forced me to bend over backwards…
> I suspect that Rust might suffer from the same thing I experienced with C++.
> As a counterpoint, I’ve been finding it easy to evolve the program architecture with my shifting understanding since I started writing Haskell full time a few years ago. And that’s been the case even in a half-decade-old code base written by a combination of junior programmers and short term contractors.
I’ve found that a strong type system is paramount to evolving the program architecture. Haskell is one of the best examples, and Rust’s type system isn’t quite as powerful but is near enough for most use-cases. A rewrite of the system, or a portion of it, with a type system to guide you is the key to safely evolving the code. Having used Rust professionally for 4 years now, it is far closer to working with Haskell than with C++ or Java in terms of incremental changes, thanks to the type system.
When using a strong type system, I imagine “evolving the program architecture” is simply “rewriting massive swathes of code to satisfy the type checker”. It’s essentially throwing out mostly everything minus some boilerplate.
The type system is the scaffolding in strongly typed languages. It allows you to refactor safely, because the type checks are thousands of tests that you don’t have to write (and are far more likely to be correct).
I think a lot of the hesitation around strongly typed languages comes from unfamiliarity with strong types (they’re a powerful tool), but…
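To make the scaffolding point concrete, here is a small invented Rust example: adding a variant to an enum turns every exhaustive match in the codebase into a compile error until it is handled, which is exactly the guided-refactor experience described above.

```rust
// Invented example: the compiler as refactoring scaffolding.
enum PaymentMethod {
    Card { last4: String },
    BankTransfer { iban: String },
    Voucher { code: String }, // added during a refactor...
}

fn describe(method: &PaymentMethod) -> String {
    // ...which makes this match (and every other exhaustive match on
    // PaymentMethod in the codebase) fail to compile until the new case
    // is handled. Each such error is a test you didn't have to write.
    match method {
        PaymentMethod::Card { last4 } => format!("card ending in {last4}"),
        PaymentMethod::BankTransfer { iban } => format!("transfer from {iban}"),
        PaymentMethod::Voucher { code } => format!("voucher {code}"),
    }
}
```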
> simply “rewriting massive swathes of code to satisfy the type checker”. It’s essentially throwing out mostly everything minus some boilerplate.
Couldn’t be further from the truth imo. Maybe for someone new, but with an experienced person on the team this wouldn’t happen.
If you have to re-architect your strongly typed program, it WILL cause you to rewrite LARGE portions of the program. I’m not sure why you say that “couldn’t be further from the truth”. Are you assuming I have no extensive experience with type systems?
> If you have to re-architect your strongly typed program, it WILL cause you to rewrite LARGE portions of the program. I’m not sure why you say that “couldn’t be further from the truth”.
As with many things, it depends on the context. A re-architecture that involves changing a core invariant relied upon by the entire codebase? Yes, that will probably require rewriting a lot of code. But this is the case in any language (strongly typed or not).
In my experience, dynamically typed codebases have to rely entirely on tests to provide the scaffolding for any refactor, and tests vary in completeness and can themselves be buggy. Strongly typed codebases get to rely on the type system for that scaffolding, which, when done right, is the equivalent of thousands of tests that are far more likely to be correct (barring compiler bugs). This is night and day when it comes to a large-scale refactor: being able to lean heavily on the type system to guide you can make the difference between a refactor that Just Works and one with a few more bugs to iron out.
At the end of the day it all comes down to tradeoffs. Dynamic languages let you get away with hacky workarounds, whereas a strongly typed language might make that harder (e.g. by requiring you to punch a hole in the types, which can require advanced knowledge of the type system). But I take issue with the blanket statement that strongly typed languages require a significant amount of rewriting for any refactor: that is the complete opposite of the experience I’ve had (5 years of working with Rust professionally, and a mix of C++/Python/JS for years before that).
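(For readers wondering what “punching a hole in the types” might look like: in Rust, one sanctioned escape hatch is std::any::Any, which erases the static type and forces a dynamic-language-style runtime check. A contrived sketch:)

```rust
use std::any::Any;

// A deliberately punched hole: the static type is erased behind `dyn Any`,
// so the program must recover it at runtime, as a dynamic language would.
fn process(value: Box<dyn Any>) {
    if let Some(n) = value.downcast_ref::<i32>() {
        println!("got an integer: {n}");
    } else if let Some(s) = value.downcast_ref::<String>() {
        println!("got a string: {s}");
    } else {
        println!("got something unexpected");
    }
}

fn main() {
    process(Box::new(42i32));
    process(Box::new("hello".to_string()));
}
```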
> Are you assuming I have no extensive experience with type systems?
I did not assume that originally, and was speaking from my own experience. But given your language, at this point yes I do assume you don’t have extensive experience with type systems :)
The point of my original comment is that no matter the language, you will essentially have to throw everything out.
It’s too bad you have to make assumptions; your arguments would have a bit more weight without them. Really you’re just writing walls of text that say “type system make refactor easy”. The topic is throwing out projects. A re-architecture is going to ripple type system changes everywhere, to the point that you should throw everything out.
Haha, I was just being cheeky; I don’t know you or your experience, and my arguments stand alone. We are both responding to a sub-thread about evolving the architecture, which, I agree, is tangential to the original article. But if you’re arguing that the original article says to throw out everything from a PoC and that this is the same for any language, then yes, of course I agree with that. That is kind of the point.
But I just want to note that that is not what we were originally talking about on this thread (I made a comment about evolving architecture being easier with strong typing).
This is terrible advice. Why throw it away if it sells? Your time to market just doubled for no reason other than perfectionism. Besides, large parts of a prototype can be fully reused in the release: things like writing hardware registers or interacting with D-Bus will always look the same. Keep it modular, keep the good parts, and rewrite the bad parts. Put care into the boundaries of your systems and design good interfaces, because those are harder to change. The internals can always be cleaned up in an update.
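For illustration, here is a hedged Rust sketch of the kind of boundary being advocated: a small trait as the stable interface, so that a quick prototype implementation behind it can be rewritten wholesale without touching callers. All names here are invented.

```rust
#[derive(Debug)]
struct BusError(String);

// The boundary: a deliberately small, stable interface. Callers depend
// on this trait, never on the concrete implementation behind it.
trait RegisterBus {
    fn read(&self, addr: u32) -> Result<u32, BusError>;
    fn write(&mut self, addr: u32, value: u32) -> Result<(), BusError>;
}

// The prototype's quick-and-dirty implementation lives behind the
// boundary and can be replaced wholesale later...
struct StubBus;

impl RegisterBus for StubBus {
    fn read(&self, _addr: u32) -> Result<u32, BusError> {
        Ok(0) // good enough for a prototype
    }
    fn write(&mut self, _addr: u32, _value: u32) -> Result<(), BusError> {
        Ok(())
    }
}

// ...while code written against the interface survives the rewrite.
fn reset_device(bus: &mut impl RegisterBus) -> Result<(), BusError> {
    bus.write(0x0000_0010, 0x1) // hypothetical reset register
}
```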
You endorse an antipattern I’ve experienced a few times which I call “productionizing the prototype.” The main issue with this pattern is not evident until years later, when the prototype’s lack of design has become apparent; the prototype tends to sprawl, spreading out and accruing responsibilities, turning into a monolith.
At multiple startups, I’ve seen this be the only mistake committed by leadership, and it was still grievous enough to alter their funding plans.
Completely agree, and I’ve seen the exact same issues. Building on top of a foundation which isn’t solid will slowly grind everything to a halt (and burn out engineers). It’s a difficult balance between perfectionism and pragmatism, and it can be tricky to have a proper discussion about it.
There is no difference between product and prototype. There is only code. Some code is useful in the field, some is not. Some code that is useful in the field may be the result of prototype work and can be shipped as-is. Some prototype code that is not useful in the field is, unfortunately, shipped in a release anyway. Yes, this should be avoided. There’s no need to be overly dogmatic; the reality is more fine-grained, yet simpler, than the picture you’re painting.
> the prototype tends to sprawl, spreading out and accruing responsibilities, turning into a monolith.
Like Linux, Windows, Facebook, etc.? Seems to work well enough then. I deeply care about good design and correctness, but let’s be honest here: It’s impossible to ship a product that’s not deeply flawed. With finite resources, all you can do is make good tradeoffs and optimize for the qualities your users care about. What are you going to cut when the release date approaches? You will face this decision at some point if you want to release anything.
I would really warn against this mindset. You seem to imply that modularising a codebase is simply a natural and easy part of development, but nothing could be further from the truth.
The boundaries that you choose for modules are constraining points that impose limitations on the long-term growth of the program. Modularity does not come for free: you’re always giving something up when you choose to partition your code. Taken to the extreme, bad assumptions about modularisation early in the life of a codebase can cripple it down the line.
This isn’t just about being a good or bad developer either: bad assumptions about the appropriate use of abstraction and modularisation absolutely plague the software industry, even for very large commercial or open-source codebases.
By developing rapid throwaway prototypes, as advised in the article, you give yourself the best opportunity to discover the most appropriate module boundaries as quickly as possible, before committing to them in the real implementation.
Many developers, when writing something they understand to be a prototype, will take shortcuts which are appropriate for a prototype, but not appropriate for something which is customer-facing. One common example is error handling. A prototype is a proof-of-concept, not a product.
> Many developers, when writing something they understand to be a prototype, will take shortcuts which are appropriate for a prototype, but not appropriate for something which is customer-facing. One common example is error handling.
Which leads to the follow-on question: Are you more likely to introduce bugs by going through a prototype and adding proper error handling, or by rewriting it from scratch to have good error handling?
In my experience, it’s quite easy to mark places in the source code of a prototype where you’ve made hacks in the interest of expediency and then to audit these locations and fix them long before you want to actually ship the thing. This has the advantage that you start from something that has a useful subset of the final feature set and so you can start writing end-to-end tests early and extend them as you improve the quality of the code.
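A sketch of what that marking discipline can look like (the tag here is just one possible convention): every shortcut carries a greppable marker, so the pre-release audit is a text search, e.g. grep -rn "HACK(proto)" src/, rather than an archaeology project.

```rust
// Invented example of greppable hack markers in a prototype.
struct Config { verbose: bool }

fn parse_config(text: &str) -> Option<Config> {
    Some(Config { verbose: text.contains("verbose") })
}

fn load_config(path: &str) -> Config {
    // HACK(proto): assumes the file exists and parses; fine for the demo,
    // must become real error handling before anything ships.
    let text = std::fs::read_to_string(path)
        .expect("HACK(proto): no error handling yet");
    parse_config(&text)
        .expect("HACK(proto): config parsing can't fail yet")
}
```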
My consistent experience has been that when someone writes a bit of code, they’re working from a perspective that is effectively bimodal: either they’re doing a quick-and-dirty hack job to prove a point, or they’re writing a production-quality artifact. That perspective is usually not just reflected by specific concerns like error handling, it’s usually deeply embedded in the design of the program, in a way that makes it very difficult to incrementally “fix” a prototype and get something on the other end that was anywhere near as good as “doing it right” in the first place. YMMV.
Yeah. The article is pretty clear about what a prototype is. It’s not expected that you will prototype changing the CSS to turn a button blue. A prototype is for when you are making a new product or feature that you’re not sure how to make. The point is to learn by doing, so that you have a blueprint for the real job.
Your comment could probably be worded with a less controversial start, but I get what you mean, and somewhat agree. This is what I call “business programming”, and despite others’ negative sentiment towards it, this idea is what drives industry and pays everyone’s bills.
The reality is that there are natural deadlines, which we call opportunities, that a business needs to leverage in order to survive. A business doesn’t survive by its code looking nice. It survives by its code being able to do the job in reasonable capacity, bugs and all.
Now, code that is well done (in every sense of the word) will be easier to extend and support, and will live longer, at a fraction of the cost. FOSS and hobby projects are able to do this because they aren’t businesses, and as we know they tend to have much, MUCH higher code quality!
I think too many people confuse ideals with reality.
Does anyone have experience with this kind of advice being even so much as considered in a professional context? I can’t imagine anyone I’ve ever worked for taking such a suggestion seriously (à la the sibling post by BenjaminRi).
I can certainly attest to it not being considered, except when the first version is so clearly problematic that even VPs feel the heat. (And even so, a surprising number of programs I’ve worked on had names that ended in “_v2”. Never worked on a “_v3” though.)
I can’t stop thinking about how I wish we had done this with my current project at work. We struggled to get the whole thing working, and after it was done I had a redesign planned out and half implemented, but we had to scrap it because we wouldn’t have had time to complete and test it before the end of the project.
One and a half years later we got another contract to extend the project. I asked my team lead to allocate some time to rewrite it based on what I had already done, but he turned it down because “we’re not getting paid to work on this part”. I was not involved in the first couple of months of development other than sketching a rough plan of the architecture, and, lo and behold, we’re touching and rewriting a lot of the old code to fit the requirements of the extension.
On top of that, the new parts are async while the old ones are synchronous, and we’re using tokio’s block_in_place way more than necessary; the massive “main loop” module that I had drastically shrunk down in the redesign keeps growing; and we adopted a new design for modules to allow some dependency injection for testing, which is causing a lot of pain where we have to bridge the new parts to the old…
A lot of things have gone slightly badly because of the combination of factors we’re working under, and even though I can clearly see the reasons why you’d want to scrap this production prototype and instead use it to build a test suite, we have to ship, and the upper ranks don’t seem to see that far ahead most of the time. Or maybe they do, and think that it’s fine as it is and the client will have to deal with it. Maybe it’s my fault too for not speaking up loudly enough, I don’t know. We’re all too young and inexperienced on this team. It is what it is.
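For context, this is roughly what that sync/async bridging looks like with tokio (a generic sketch, not the project’s actual code): block_in_place tells the runtime that the current worker thread is about to block, which is correct but becomes a smell when it shows up everywhere.

```rust
// Sketch of bridging old synchronous code into a new async codebase.
// Note: block_in_place requires tokio's multi-threaded runtime.
fn legacy_sync_query() -> String {
    // Stand-in for the old synchronous path: blocking I/O, heavy work, etc.
    std::thread::sleep(std::time::Duration::from_millis(100));
    "result".to_string()
}

async fn handler() -> String {
    // Moves this worker out of the async scheduler while the blocking call
    // runs. Correct, but every call site like this is a seam where the old
    // architecture leaks into the new one.
    tokio::task::block_in_place(legacy_sync_query)
}

#[tokio::main(flavor = "multi_thread")]
async fn main() {
    println!("{}", handler().await);
}
```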
I get this advice; I also remember it from The Mythical Man Month. It’s fine advice at a company working on huge, long projects, but as an indie developer working on much smaller projects and systems that I can keep 100% in my head and that comprise < 10 KLOC, I like to write it until it works and then move on and finish the project as quickly as possible.
As someone who has read some of your blog, I would say your style is more to prototype by doing a new project. Like you’re working on Daily Driver and then you just release Sparrow Solitaire or something else instead, then you feed what you learned from that into the next thing. It ends up being like doing a prototype, but the result is a game and not just a blueprint. PS I started reading your blog after buying Sparrow Solitaire. Cool game, keep up the good work!
Thanks, I’ll try to! Yours is a good summary of my output.
I rewrite little, mostly for optimisations. Time spent creating perfectly architected code that will never be seen by anybody other than me would be time I’d rather spend on the next project.
For me this article makes two main points. I agree with one, but for the second I think YMMV.
The author says “throw away your first draft”, but one of the critical useful pieces here is “make a first draft”, and with that I 100% agree. I find that the sooner you get some data flowing end to end, the sooner you find those unknown unknowns. I struggle to write good software without doing this. In nerd terms, I’d say that the languages I generally use don’t have interface types that describe requirements well enough, and the knee-jerk reaction of turning the requirements into a formal spec often doesn’t actually work. So at the risk of repeating myself: build an end-to-end prototype, or as close as you can get, as soon as possible.
I don’t really have the same experience about needing to throw it away. I have the feeling the arguments presented might not apply to all developers.
I can also attest to the advice, though in my case I often make even more iterations of a project, doing something akin to a GC algorithm: I create a blank project and move over the parts that are good enough, e.g. some helper method with a small scope, or a whole component that has a well-defined task/boundary.
With that said, I would be interested in better support for incremental improvements in programming languages. Unison comes to mind as better than the status quo, at least based on its description, though I haven’t tried it out yet. When prototyping, or doing an n+1th version, I do want to have a close-to-working codebase, even if it’s low-quality/naive, but I want to “write over” it with better code, selectively keeping some older and some newer bits.
So, when asked to write up a project plan for a new piece of software, the first thing I do is write a quick and dirty prototype in whatever set of languages I find comfortable and that has libraries to make the job easier. Then, once I have the prototype “working”, I write up the design, submit the project plan, and start actually coding. Without this process I will always get hung up on something unforeseen. It takes me longer to submit a plan, but the plan is much more reliable.
I don’t actually throw away my code, but I certainly don’t re-use it for anything except as a reference. It’ll probably be done entirely differently the next time anyway.
I often write the same program in a couple of languages, for exactly this reason. It’s also why I often write a (semi-)formal specification: I can underspecify the things I’m not interested in and focus on the idea that is new to me. It’s less about prototyping and more about not being able to gloss over the details, which are often messy.
I have been doing this unconsciously for a long time.
I write code, don’t finish it, come back, waste a month trying to understand it, then rewrite it from scratch.
Writing is rewriting.