“Boring stacks” also often have another advantage: common problems (say, installing such-and-such a dependency in such-and-such an environment) have often been found and solved by people before you. So not only is the primary documentation normally quite mature, but the “secondary” documentation (Google search results for the common problems that come up) is also quite good.
The one risk of a boring stack is that it’s boring but also an evolutionary dead end. Frameworks like that are productive, in a sense, but still a pain, because they are often abandoned and migration to a new framework is difficult. Java has a ton of these, Cocoon being my personal bête noire.
Software being done “evolving” doesn’t usually mean it’s abandoned. I wish more people would recognize that and stop reinventing alternatives to old software just because it hasn’t been constantly bloated with shiny new features.
I am super on board with this sentiment, but there’s one area where it doesn’t work: security fixes.
Sure, or small portability fixes to work on a new version of an OS or library. But these aren’t evolutionary, and they don’t require throwing away an entire codebase because it’s old. Just patch it or, worst case, fork it and maintain a patched version (things like the non-djb version of qmail and Rails LTS come to mind).
Absolutely.
I think people keep looking for new alternatives because programming feels so unsatisfying, like it should somehow be better. Software release is something that I really want to see integrated into development, and yet every new language or API punts on release, because intuition tells us that it’s in the wrong layer. Some of that bloaty-shiny “research” is suggesting otherwise – for example, Sandstorm would be impossible if not for Node.js’s bloat.
A framework that is popular evolves, even if slowly, and even if it’s only to track new versions of its base language. That’s a good idea even if you’re very conservative, because you likely want to take advantage of new advances in deployment, debugging, and metrics, even on legacy products. In addition, you can hit a point where you can’t conveniently implement a feature because your stack is EOL and the new feature requires a newer language or infrastructure.
Also, it’s worth noting how easy it is to fall into hindsight bias: it is easy to say “well, Apache Struts is a boring stack” because it’s still being released… but Apache Cocoon was around at the same time, probably looked like the “boring” option in 2008, and yet never got a real release after that. You can’t meaningfully differentiate “boring” from “abandoned” ahead of time.
For front-end things, yes, you can say that. I really don’t envy front-end UX designers, with how they keep having to jump JS bandwagons.
But the layers underneath in the stack are not a “dead end” just because they haven’t needed a bug fix in a while.
Something I’ve noticed change over the past 10 years is the interaction between “old/reliable vs. new/shiny” and real engineers vs. commodity programmers.
In 2006, the employer-compliant commodity programmers wrote in Java. You knew who they were. They wrote VisitorVibratorFactory classes in IDEs and stopped learning new things around age 24. The real engineers used Python and Ruby and Haskell and Lisp (Clojure hadn’t been invented yet).
In 2016, the corporate-compliant commodity programmers zip around from one tool to another, barely understanding any of them. This isn’t a knock on the tools, but on the attitude. They use Agile and believe in some kindergarten nonsense called “user stories”. They have Docker and Kubernetes and Kafka and React.js and ActiveWrecker (typo intentional) on their CVs but can only describe the one feature of each tool that they used. Meanwhile, the real engineers tend to focus on a smaller set of “boring” tools and learn them well.
Tool preferences used to be a shibboleth that one could use to separate real programmers from the chaff. If someone knew Lisp or Haskell, you knew that you could hire this person. These days, the corporate programmers (who now toil in open-plan Agile shops designed for age discrimination, instead of boring, “slow” companies that did a simple job and did it well) have figured out the shibboleths and those litmus tests no longer work. So you have to test for deep knowledge rather than mere taste, and that’s harder because it means you, as the interviewer, have to have a depth of knowledge or else you’re stabbing in the dark.
I don’t think you can separate “real engineers” from “commodity programmers” in many cases. There are a lot of well-established places that have kept the same people for 15 years but are now giving their employees the opportunity to try out new languages, environments, frameworks, libraries, etc. In many cases, those engineers aren’t going to learn every feature of every language in depth. That doesn’t make them “commodity programmers.”
I don’t think there was ever a time when hiring someone who knew Lisp or Haskell was as sure-fire as you’re making it out to be. I also think that calling user stories “kindergarten nonsense” says more about you, and why you feel this way, than it does about the gigantic umbrella of programmers you’re looking down upon.
I think it’s pretty obvious why ‘new/shiny’ has become so important across most industries focused on the web: it’s the web itself, and it’s been changing incredibly fast. What you can do inside a browser now is vastly different from what you could do in a browser even 10 years ago. On the backend side, there are more options to try more things. Whether or not you get interested in all of the various ideas is irrelevant, but it doesn’t make the people who do get into them some sort of scourge.
I’ve worked at 2 startups (currently at one) and 4 corporate gigs. The corporate ones, BY FAR, had the people least interested in doing anything new. Or anything at all, for that matter.
They use Agile and believe in some kindergarten nonsense called “user stories”.
What exactly is wrong with the organization aspect of Agile? I find user stories, the task board, and sprints to be incredibly useful for keeping a project in scope and on track.
That said, where I work we use some features of Agile differently than most shops do, and we are definitely not “Agile” if you compare our workflow with the manifesto, so maybe my experience with Agile is fundamentally different from most developers’.
If you were attacking the strain on developers when more work than is reasonable is put into a sprint, or when sprint deadlines are so rigid that working overtime is considered before pushing the deadline or moving the work into the next sprint, I would probably agree. But that’s more a problem of bad management than of a bad system.
Regardless of whether you are using Agile, or “Waterfall” (does anyone actually consciously use Waterfall?), or some other method, or no method at all, bad managers will still be bad managers. You’ll still be expected to work overtime for them, either at the sprint deadline with Agile, or at the project deadline with Waterfall, or whenever upper management puts the pressure on with any other methodology.
One consequence of always having to chase the shiny new thing is that good programming practices are often not center stage. Robert Martin speaks about this on .NET Rocks, and he also has a blog post where he says this continuous relearning requires massive amounts of time and effort and does not pay off very well in terms of extra productivity.
As a contractor, though, I think you need to “ride the waves”. The highest-paying gigs are always on the shiny new thing…
The Churn post was posted here previously; here is the discussion: https://lobste.rs/s/1pylbt/churn