I think choosing not to make life difficult for those who come after us is a professional trait. That may include sticking to a reduced, but standardized, tool set.
After the development phase, software projects often go into maintenance mode, where a rotating cast of temp contractors is brought in to make necessary tweaks. The time you save by building a gloriously elegant automaton must be weighed against the cumulative time all of them must spend deciphering how the system works.
I think what’s needed is not just regulation, but effectively implemented regulation.
I’ve worked in several regulated industries or on systems where there are industry standards like PCI that need to be followed, and things are really no better. In fact, regulations can sometimes cause more problems - the rigorous testing/validation requirements mean that once a system is in production, it’s not patched, because of the onerous testing requirement (testing that should, ideally, be automated but just isn’t in most organisations).
Yes, that comes down to organisational practices, but we really should be in a better place in 2016 - Sarbanes-Oxley has helped in a lot of areas with things like segregation of duties, proper record keeping, etc., but it’s only a drop in the ocean.
Do you agree that limiting our tools to reduce churn is a good approach? Why or why not?
All other things equal, yes. Maciej Ceglowski [0]:
I believe that relying on very basic and well-understood technologies at the architectural level forces you to save all your cleverness and new ideas for the actual app, where it can make a difference to users.
I think many developers (myself included) are easily seduced by new technology and are willing to burn a lot of time rigging it together just for the joy of tinkering. So nowadays we see a lot of fairly uninteresting web apps with very technically sweet implementations. In designing Pinboard, I tried to steer clear of this temptation by picking very familiar, vanilla tools wherever possible so I would have no excuse for architectural wank.
I complain about frontend engineers and their magpie tendencies, but backend engineers have the same affliction, and its name is Architectural Wank. This theme of brutally limiting your solution space for non-core problems is elaborated on further in “Choose Boring Technology” [1]:
Let’s say every company gets about three innovation tokens. You can spend these however you want, but the supply is fixed for a long while. You might get a few more after you achieve a certain level of stability and maturity, but the general tendency is to overestimate the contents of your wallet. Clearly this model is approximate, but I think it helps.
If you choose to write your website in NodeJS, you just spent one of your innovation tokens. If you choose to use MongoDB, you just spent one of your innovation tokens. If you choose to use service discovery tech that’s existed for a year or less, you just spent one of your innovation tokens. If you choose to write your own database, oh god, you’re in trouble.
“All other things equal” is one hell of a caveat, though :)
I’m a huge fan of the healthy skepticism both Dan McKinley and Maciej exhibit when it comes to technology decisions. When something passes the high bar for making a technology change, though, make that change! Inertia is not a strategy.
2.a: Take diversity seriously. Don’t act like raging testosterone poisoned homophobic ethnophobic nits just because we’ve been able to get away with it in the past.
2.b: Work to cleanly separate requirements and the best tools to satisfy them in the least amount of time from our desire to play with new toys all the time.
2.c: Stop putting $OTHER language down all the time because we see it as old/lame/too much boilerplate/badly designed. If people are doing real useful work in it, clearly it has value. Full stop.
Those would be a good start.
3: See 2.b - I think saying “Let’s limit our tools” is too broad a statement to be useful. Let’s work to keep our passions within due bounds and make cold, hard, clinical decisions about the tools we use. If we want to run off and play with FORTH because it’s back in vogue, that’s totally cool (there’s all kinds of evidence that this is good for programmers in any number of ways), but writing the next big project at work in it is perhaps a mistake.
What do you think it would mean to be “professional” in software engineering?
Our stupid divisions and churn come partly from employers and partly from our own crab mentality as engineers.
They come from employers insofar as most people in hiring positions have no idea how to hire engineers nor a good sense of how easily we can pick up new technologies, so they force us into tribal boxes like “data scientist” and “Java programmer”. They force us into identifying with technological choices that ought to be far more prosaic (“I’m a Spaces programmer; fuck all you Tabs mouth-breathers”). This is amplified by our own tribalism as well as our desire to escape our low status in the corporate world coupled with a complete inability to pull it off– that is, crab mentality.
What do you think we have to do to achieve (1)?
I’ve written at length on this and I don’t think my opinions are secret. :)
Do you agree that limiting our tools to reduce churn is a good approach? Why or why not?
I’m getting tired of the faddishness of the industry, but I don’t think that trashing all new ideas just because they’re “churn” is a good idea either. New ideas that are genuinely better should replace the old ones. The problem is that our industry is full of low-skill young programmers and technologies/management styles designed around their limitations, and it’s producing a lot of churn that isn’t progress but just new random junk to learn that really doesn’t add new capabilities for a serious programmer.
I’m getting tired of the faddishness of the industry, but I don’t think that trashing all new ideas just because they’re “churn” is a good idea either
I agree, I completely agree. I absolutely understand that it is foolish to adopt new tech before it has developed good tooling (and developed, as someone pointed out in a comments section somewhere, a robust bevy of answers on Stack Overflow). You’re just making your developers’ lives harder. Still, trashing new ideas is also silly, for a very good reason.
I think that the argument ignores genuine advances in technology. In the article, Java is likened to a screwdriver. Sure, throwing away a screwdriver for a hammer is nonsensical tribalism, but throwing away a screwdriver for a power drill isn’t. There will be times when I want to explicitly write to buffers – I’ll use C or C++ as needed. But why would I otherwise pick a language that segfaults, when advances in language design and compiler theory have yielded Rust, which may well do the same thing*?
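To make the Rust point concrete, here’s a minimal sketch (my own illustration, not from the article) of the guarantee in question: where C will happily perform an out-of-bounds read (undefined behavior, often a segfault), Rust’s checked accessor returns an `Option`, and the borrow checker rejects dangling references before the program ever runs.

```rust
fn main() {
    let v = vec![10, 20, 30];

    // In C, reading past the end of an array is undefined behavior
    // (a segfault if you're lucky, silent corruption if you're not).
    // Rust's checked accessor returns Option instead.
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(10), None);

    // And the borrow checker rejects dangling references at compile
    // time, so this class of bug never reaches production:
    //
    // let r;
    // {
    //     let x = 5;
    //     r = &x; // error[E0597]: `x` does not live long enough
    // }
    // println!("{}", r);
}
```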
It might cost more in the short term to tear down the wooden bridge and build a concrete bridge. Heck it might cost more in the long term to do so, if concrete is more expensive to maintain (I acknowledge my analogy is getting a tad overwrought.) But aren’t better guarantees about the software you produce worth it?
For the record, I’m not trying to speak as a Rust evangelist here – it’s just a topic I know about that fits the argument. It’s new, it’s still developing its tooling, but it clearly represents progress in programming language theory.
For another example, imagine if the people in the argument used vim. Vim is robust and powerful – but many people consider it a poor choice of tool for Java development. How would I convince this person to switch from vim to IntelliJ? Isn’t IntelliJ just another example of churn? It’s a new shiny tool, right? Thoughtful consideration of new stuff is required to distinguish between “churn” and “hey, maybe we can move on from the dark ages.”
I don’t want to be accused of talking past the author. I think that the author would agree with an underlying point – that whichever language, IDE, framework you choose, you should choose with a good understanding of what your tool can do, and what the alternatives are.
*I mean, it might not do the same thing – you might want blazing speed or something else that C provides that Rust does not yet. So, yeah, choose your tools wisely.
I have nothing to back this up, so obviously it’s totally legit.
I think that we (as the human race) only achieve things after there is a baseline knowledge across the entire community for any given topic. When something fairly new comes along (computing or EE, for example), we make huge strides in advancing the field, then things dwindle off and we “churn” until the base knowledge of the population includes whatever tech was just introduced. Once the knowledge becomes ubiquitous, people start to see it (the technology) differently and are able to reason about it differently. This allows for more advancements. /me mumbles something about not being able to solve a problem with the same type of thinking…
I guess a TL;DR kinda example is the relationship between a parent and a child:
parent: Don’t touch the stove, it’s hot!
child: /touch TSSSSSSYEAHOUCH!
parent: Told ya so!
Sure the kid would have saved time and advanced the human race by taking the parental advice… but then they wouldn’t actually know!
This article also made me wonder what the man-hours put into software development vs. WoW look like.
I have tremendous respect for Uncle Bob and this article is no exception, but man, he needs a proofreader :)
Also unfortunately, he comes off like a grumpy Java programmer who’s forlorn about the fact that his tool set of choice isn’t getting the buzz he’d like.
From my perspective the solution to this very real problem is simple. Play with and embrace the new where it makes sense but don’t reject the old just because it’s no longer shiny.
I’ve given this a lot of thought recently. It may come across as cynical, but I’ve come to believe that the reason our industry seems to progress so slowly is that it’s not our industry, so it doesn’t go in the direction we want it to go.
When it comes to persuading a large group of people to write software for you, the money you pay them is but one of the tools at your disposal. Another really powerful one is the ability to give said group of people the autonomy to entertain themselves, solving puzzles of their own, learning (the hard way) all the lessons of the past.
I think that’s a big part of why younger workers are preferred as well: the software comes out cheaper not so much due to lower salaries (though it certainly helps), but because less experienced developers care more about their own amusement than about the thoroughness of their artifacts, so they consider far less and move on to the next problem as fast as they can.
One could counter-argue that true professionals would make cost/benefit analyses instead of treating every project as NASA-level software, but even then they would always show bias towards their profession and the longer term. I think the incentives of younger engineers, who are very much worried about their own short term, far better align with the incentives of most people who pay for software as a means to an end.
I don’t know Bob Martin, I haven’t read “Clean Code”, but every time I read one of these blog posts, I cannot help but think (1) that this style of fake conversation is really bad at both conveying a clear point and addressing the issues, (2) that he’s either disconnected or his salary depends on his not knowing.
So, instead of just collectively grumping about Uncle Bob, let’s pick out a piece of his article that’s actually worth discussing.
What do you think it would mean to be “professional” in software engineering?
What do you think we have to do to achieve (1)?
Do you agree that limiting our tools to reduce churn is a good approach? Why or why not?
[0] https://web.archive.org/web/20111228005908/http://www.readwriteweb.com/hack/2011/02/pinboard-creator-maciej-ceglow.php
[1] http://mcfunley.com/choose-boring-technology
I’m having a hard time identifying which of the conversants I’m supposed to listen to. Italics? Don’t Socratic dialogues usually work the opposite way though, with the fool playing the other part?
Bobbo punked himself.
Yeah, AFAICT, Bob’s opinion is in italics.
God this opinion is asinine.
I’m not sure if it is his, see tedu’s comment.
Oh, I’m not saying it’s Uncle Bob’s comment—just venting that it gets expressed a lot and I don’t think it holds much if any water.
It almost certainly doesn’t. I write some very functional-looking C#, which is not usually considered a functional language. Of course now we can all get into the pedantry of what exactly functional programming is. To which I’d answer that, technically, a purely functional program has never been run.
I also write a bunch of things in FP which look like anemic objects. It’s very intentional and gives rise to very useful kinds of organizational principles. They’re just two different ways of organizing things and are actually quite complementary.
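A minimal sketch of that complementarity, in Rust rather than the commenter’s C# purely for illustration (the function names are mine): the same computation written as an imperative loop and as a pipeline of pure iterator combinators.

```rust
// Imperative style: mutate an accumulator in a loop.
fn sum_of_even_squares_loop(xs: &[i32]) -> i32 {
    let mut total = 0;
    for &x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

// Functional style: the same logic as a pipeline of pure combinators,
// with no mutable state in sight.
fn sum_of_even_squares_fp(xs: &[i32]) -> i32 {
    xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum()
}

fn main() {
    let xs = [1, 2, 3, 4];
    // Both styles agree: 2*2 + 4*4 = 20.
    assert_eq!(sum_of_even_squares_loop(&xs), 20);
    assert_eq!(sum_of_even_squares_fp(&xs), 20);
}
```

Neither version is more “correct” than the other; they’re two organizations of the same logic, which is the point.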
That said, I’d get into a much larger argument about whether or not a pure functional program has ever been run ;)
Oliver Steele’s “The IDE Divide” (from 2004, but still relevant) is a more nuanced look at the tradeoffs involved.
Thanks, I really enjoyed this article. It helped me understand why I don’t enjoy IDEs even though I acknowledge their power.