I’ve found myself saying similar things in the past, but this left a bad taste in my mouth. Not because the ideas are really all that wrong, but because I’ve often found them to be used as an excuse for not thinking ahead.
There’s a lot of disagreement about planning for change in software development, and I certainly don’t have the answers. No one really knows, and I suspect the answer is something wishy-washy like “it depends”. But if my experience is any indicator, not doing it at all leads to a whole lot of work. The really hard part is predicting when that work will have to be done, so it seems easier to just plough ahead.
The real tension the article gets at is flexibility. How much flexibility is too much? That’s something I wrestle with all the time. Going too far means you build something very general with a wide range of behaviour that may be really cool but is hard to get right; not going far enough means things are simpler, but if you build on that and find out you missed something fundamental, you’re probably destined for an unmaintainable disaster.
Furthermore, “just code it up and ship it” isn’t always feasible. In the world of embedded systems and certain critical software, a fast release cycle might be six months. You likely can’t afford to just push it out there without a decent maintenance plan lest you paint yourself into a corner.
My current view on software development is something like, “Think. Not too much. Mostly about maintenance.” (With apologies to Michael Pollan.)
A good balance might be, “allow operator changes as needed.” I think the focus on “if you build your system in a sensible way, and make the codebase easy to change, it will dramatically lower the cost of changing it” is sound, but we also have CMSes for a reason. Nobody is disrupting WordPress by realizing that all you have to do to make it better is require redeploying your WordPress app every time you want to make a blog post.
The key to “futureproofing” seems to be, “make your codebase a joy to work in” for changes we can’t anticipate, and “provide flexibility” for changes we can anticipate (and in particular, changes that solve problems we have today).
“make your codebase a joy to work in”
I like this sentiment. I also like what @dwc said about simplicity and orthogonality.
I think this post is presenting a caricature of “future proof” to make its argument. That isn’t to say that the code she describes doesn’t exist, but that isn’t because future proofing is bad; it’s because most code is bad. And her alternative, I think, is not necessarily better (depending on how you interpret it).
Specifically, I don’t know anyone who really thinks “future proof” means the code will magically adjust to things in the future. Instead, I think reasonable people look at future proofing as reducing assumptions and dependencies so that changing things in the future is not hugely painful. That can take many forms, but it doesn’t mean you shouldn’t do it at all.
Also, the hate on interfaces is unfounded, IMO. Interfaces are great. Not everything needs to be an interface, of course, but generally more things need to be interfaces than people make them. And the idea that you shouldn’t have an interface because you only have one implementer in your code is generally wrong, because there should be two implementers: the tests. This is the real value of interfaces, it makes testing so much easier.
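To make the “tests are the second implementer” point concrete, here’s a minimal sketch in Python using `abc`. All the names here (`EmailSender`, `notify_user`, etc.) are hypothetical, invented just for illustration:

```python
from abc import ABC, abstractmethod

class EmailSender(ABC):
    """The interface: production code depends only on this."""
    @abstractmethod
    def send(self, to: str, body: str) -> None: ...

class SmtpEmailSender(EmailSender):
    """Implementer #1: the real thing (network details elided)."""
    def send(self, to: str, body: str) -> None:
        raise NotImplementedError("would talk to an SMTP server here")

class FakeEmailSender(EmailSender):
    """Implementer #2: the tests. Records calls instead of sending."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

def notify_user(sender: EmailSender, user: str) -> None:
    """Code under test: depends on the interface, never the implementation."""
    sender.send(user, "Your report is ready.")

# In a test, swap in the fake and assert on what was "sent":
fake = FakeEmailSender()
notify_user(fake, "alice@example.com")
assert fake.sent == [("alice@example.com", "Your report is ready.")]
```

Even though production code only ever has one “real” implementer, the fake is a full second one, which is exactly why the single-implementer argument against interfaces falls flat.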
I just saw on twitter that the author is complaining about no-one disagreeing with her on interfaces—so she might be willing to engage in a discussion :-)
Personally, I found the article focusing far too much on configuration, when my experience always takes “future proof” to mean flexibility. I write a REST API endpoint that’s read-only, but I do so in an architecture that makes it easy to add write access later. I build things modularly so pieces can be replaced (this is akin to the interfaces bit mentioned above) without bringing down the house. I make things as declarative and functional as I can, even within objects, so I know behaviors are consistent and all the touch points will respond how I need them to later. You get the idea.
That’s not magic, that’s good planning. It’s raised floor panels so you can rewire things later when offices move. It’s zoned air conditioning so that when we move servers into room Y we can turn up the AC in there without freezing the sales team, or so that when we put fewer people into the training rooms we can turn up the heat to keep them comfy.
Plan for change; don’t let change wash over you like some mystical fog.
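The read-only-now, writable-later idea can be sketched like this: the endpoint depends on a narrow read interface, so write access later is a new interface and a new handler, not a rewrite. This is a hypothetical sketch; none of these names come from the article:

```python
from abc import ABC, abstractmethod

class ArticleReader(ABC):
    """The narrow interface the read-only endpoint depends on today."""
    @abstractmethod
    def get(self, article_id: int) -> dict: ...

class ArticleWriter(ABC):
    """Added later, when write access is finally needed."""
    @abstractmethod
    def save(self, article: dict) -> int: ...

class InMemoryArticles(ArticleReader, ArticleWriter):
    """One backing store can implement both roles."""
    def __init__(self) -> None:
        self._rows: dict[int, dict] = {}
        self._next_id = 1

    def get(self, article_id: int) -> dict:
        return self._rows[article_id]

    def save(self, article: dict) -> int:
        article_id = self._next_id
        self._next_id += 1
        self._rows[article_id] = {**article, "id": article_id}
        return article_id

def handle_get(reader: ArticleReader, article_id: int) -> dict:
    """The read-only endpoint: it never sees ArticleWriter, so nothing
    about it changes when writes arrive."""
    return reader.get(article_id)

store = InMemoryArticles()
new_id = store.save({"title": "Future proofing"})
assert handle_get(store, new_id)["title"] == "Future proofing"
```

The flexibility lives in the seam, not in configuration: the write path arrives as new code behind a new interface, and the existing read path is untouched.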
I am intrigued by this proposition:
If we visualize the effect of users’ actions (highlighting potential issues), and if we make it easy for the users to correct their mistakes, we’ve reduced the impact of error so much that we might not have to worry about prevention. This approach is much more future safe. If we allow users to make mistakes – they will still be able to do their job even if the rules change.
Particularly in light of this anecdote:
In Norway our parliament voted to change our criminal laws in 2005. But they have only now (2015) been put into full effect, because the police’s computer systems prevented them from applying the new rules.
PS: “kranglefant” in this blog’s url is a Norwegian word that translates to “wrangler” in English. This could explain the style of the prose :-)
Oddly enough, when I read that bit, I found the idea of letting the users correct their mistakes to be precisely a form of future proofing! It’s a way to account for future changes in requirements.
Also, the problem with the Norwegian police’s computer systems sounds like a lack of future proofing. It might be that the reason the system can’t be upgraded is that it’s mainly based on a bunch of hard-coded logic. Based on the prescription from the OP, all one needs to do is open the files and change them. Then deploy it. I know, it’s crazy! Maybe they should try it. (Yes, that’s unfair, but only as much as the oversimplification it mirrors. It’s likely the case that the changes to the system are pretty drastic, so not much could be done in any case. I’d wager, knowing nothing beforehand, that the huge delay is about backward compatibility and contracting issues.)