If I had a nickel for every time someone threw a “good practice” principle at me, without understanding it, while trying to argue for doing something stupid, I wouldn’t be rich, but I could probably buy something cool.
The goal for most ‘best practice’ rules is to make you think before breaking them. No process is universally right in all situations, but best practices encapsulate things that are good ideas most of the time. They’re mental shortcuts so that you can say ‘I have a choice of A or B, A is considered best practice and so I would need a compelling reason to do B. Do I have one?’ If the answer is ‘no’, then you should do A. Without the rule, A and B would be treated as equally valid.
As you say, a lot of problems come from treating them as universal rules to be applied without thinking. My experience is that this is a bigger problem in teams that lack senior people and is a big problem in particular with the Silicon Valley mindset that avoids listening to anyone over 30. Knowing when and why to break rules is something that comes with experience, either from breaking them at the wrong time and suffering or from having a good mentor who helps you understand the consequences of different choices.
I don’t know that this point really saves Best Practices in general. Partly because they’re often just some bullshit someone came up with—justification optional, evidence nearly universally absent. Or, as in the cases of “don’t repeat yourself” and “do only one thing”, they’re implicitly set against a common-sense level of opposing force: don’t repeat yourself, but don’t rewrite "ho ho ho" to ("ho " * 3)[:-1]. So DRY should really be “repeat yourself when common sense dictates”, which is probably redundant advice for anyone likely to follow it.
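For the avoidance of doubt, the two forms in that toy example really are equivalent; here’s a quick Python check (the variable names are mine, not the article’s):

plain = "ho ho ho"
deduplicated = ("ho " * 3)[:-1]  # "ho ho ho " with the trailing space sliced off
assert plain == deduplicated     # same string, but only one of them is readable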
“The more modern type of reformer goes gaily up to it and says, ‘I don’t see the use of this; let us clear it away.’ To which the more intelligent type of reformer will do well to answer: ‘If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.’”
https://en.m.wikipedia.org/wiki/G._K._Chesterton#Chesterton's_fence
If I had to pull out one line that captures it all neatly:
The problem with following advice to the letter is that it rarely works in practice. The problem with following it at all costs is that eventually we cannot afford to do so.
Delightful writeup, no notes.
I think all of these need to be seen in context. There are exceptions to these rules, and the “never rewrite anything” rule, for example, is just silly when you’re making an exploratory prototype.
And that advice is usually not really meant to say “just don’t do it”, but rather: know the cost of it, know that it doesn’t magically solve all problems, know that code has ugly parts because it has to interact with reality and deal with edge cases, and so on.
I also think that rewriting your own code is different from rewriting someone else’s. Rewriting your own code is more likely to have learnings you can draw from: there might be parts you know you could do better, but that are too fundamental to fix without a rewrite.
The most important thing, however, is that a rewrite is likely to be a lot bigger than expected. So many rewrites fail because of that: people forget about the bad parts of a problem space, wrongly assume that a new technology magically solves the problems without downsides, overestimate the amount of time they have, and so on.
That said, I actually think writing one or more exploratory prototypes before starting the actual project, when feasible, can be an excellent idea. It means you can really explore, understand the problem space, and try silly ideas without feeling guilty. It can lead to a production system with fewer bad inherent design choices that cause problems later on.
I think the biggest hindrance is on the time/money side, but in some situations that might not be the limiting factor: personal side projects, projects you treat as learning exercises, recreational coding, and so on.
So what I am trying to say is that one should know where that advice comes from and not treat it as dogma. These are not rules but advice. If you wonder whether you should rewrite your current project, the answer is likely no; if you are unsure, don’t do it. If you know why you are doing it anyway, you won’t be asking that question.
It’s a bit like the “don’t use feature X” rules. Usually those features are there, and haven’t been removed, for a reason, and if you fully understand that reason you will know when not to listen to the advice. It means you know why you would tell people not to use that feature.
The three statements are also aimed at live production code with a company depending on it working. If you are in a different context, be it that you are just starting out or that it’s a hobby, you can essentially ignore them. With some exceptions, like: don’t just hop to the next big thing you find on Lobste.rs or HN. I’ve seen way too many people never progress in their learning because they keep falling for advertisements, or think they have to learn Rust when they are front-end web developers who came across WebAssembly, and things like that. Just stick with something for a year, long enough to really get it and to understand how it fails and what its drawbacks are. Usually new frameworks, languages, etc. are just new trade-offs. When starting out that’s really tough to realize, and I’ve certainly been there. A lot of that advice is about making sure you finish projects rather than open new construction sites.
In practice, Model-View-Controller resembles a monolith with two distinct subsystems—one for the database code, another for the UI, both nestled inside the controller.
Noooo….
Not “both nestled inside the controller”. I mean people definitely appear to do it this way, and then they complain about how MVC sucks and how we need various complex alternatives.
The Model is independent. It is your “headless app”. Think hexagonal/ports-and-adapters. The UI sits on top of the model: it queries the model to display the model’s data to the user, and it manipulates the model on behalf of the user.
Controllers are adapters from input methods (mouse, keyboard) to the Views, which then talk to the model.
A Controller provides means for user input by presenting the user with menus or other means of giving commands and data. The controller receives such user input, translates it into the appropriate messages and pass these messages onto one or more of the views.
“Pass these messages onto one or more of the views”.
That’s all the controllers do. In most of today’s (and yesterday’s) UI toolkits, that is handled generically in the toolkit. So you should rarely if ever create a Controller.
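To make that shape concrete, here is a minimal sketch in Python with made-up names (a toy counter, not anyone’s actual code): the Model is a headless app that knows nothing about the UI, the View queries and manipulates the Model on the user’s behalf, and the Controller only translates raw input into messages for the View.

# The Model: a "headless app" that knows nothing about any UI.
class CounterModel:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

    def current(self):
        return self.value

# The View sits on top of the Model: it queries the Model to display its data
# and manipulates the Model on behalf of the user.
class CounterView:
    def __init__(self, model):
        self.model = model

    def render(self):
        print(f"count = {self.model.current()}")

    def user_requested_increment(self):
        self.model.increment()
        self.render()

# The Controller is just an adapter from raw input to messages for the View.
class KeyboardController:
    def __init__(self, view):
        self.view = view

    def handle_key(self, key):
        if key == "+":
            self.view.user_requested_increment()

model = CounterModel()                 # usable and testable with no UI at all
view = CounterView(model)
controller = KeyboardController(view)
controller.handle_key("+")             # prints "count = 1"

Nothing here forces the database code or the UI into the controller; in most toolkits the KeyboardController part is exactly what the framework already handles generically for you.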