If you don’t introduce newer technology, you’ll end up with a 30-year-old COBOL application nobody can make significant changes to, and all potential new hires can and will write their own cheques.
I’m redoing parts of a system that is 20 years old and still making lots of money for its owner. It was originally written for very old versions of Windows. It’s also decoupled enough that it’s possible to change some of the components out and keep things ticking.
Twenty years old. Contrast with the sheer number of apps that get rewritten within the 2-3 year timespan. I feel very humbled by this.
Sounds similar to what I’m doing, though (EDIT) you are doing it (/EDIT) at a grander scale. Our app is ~10y old now, and was originally written by one guy, in Java, in notepad. It shows.
Over the last year, we’ve extracted pieces, replaced old strata with new technologies, and generally made these incremental improvements. It’s not a massive codebase (just under 275k lines right now; it was at around 325k when we started tearing things out into other services), but it makes a lot of money for the company.
I actually see the lava layer pattern more in our deployment infrastructure than in the code, though. While the code had relatively little iteration, numerous other applications have been deployed to our servers, and now that the new services are coming out piecemeal as the dev team writes them, each ends up getting deployed a little differently (especially since we’ve had some turnover recently). It’s really interesting to see the different versions of this phenomenon: in the code it’s annoying but generally stable strata, with clearly defined lines where the different technology sediments collect. In the infrastructure, it’s glassy, sharp, igneous layers, jutting into each other and, occasionally, me as I try to fix them.
The thing is, the only reason we’re able to make these changes is that the original developers implemented the concept of a message queue, well before message queues were even a thing. Updating components consists of unhooking the old ones and inserting new ones in their place. No need to match implementation language or anything, even!
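That unhook-and-replace style can be sketched with a tiny in-process message bus. Everything below (the `Bus` class, the "invoice" topic, both handlers) is hypothetical, not from the system described above; it just illustrates why publishers stay untouched while a component is swapped out:

```python
# Minimal sketch of message-queue-style decoupling (all names are
# hypothetical). Components only ever see messages on a topic, so
# replacing one means unhooking the old handler and inserting a new
# one; publishers never change.

class Bus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        # Inserting a new component for a topic automatically
        # unhooks whatever handled it before.
        self.handlers[topic] = handler

    def publish(self, topic, payload):
        return self.handlers[topic](payload)

bus = Bus()

# Legacy component handles "invoice" messages.
bus.subscribe("invoice", lambda order: "legacy-pdf:" + order)

# Years later: swap in a replacement. The publisher below is untouched,
# and in a real system the new handler could live in a different
# language behind an actual queue, since only the message format matters.
bus.subscribe("invoice", lambda order: "html-v2:" + order)

print(bus.publish("invoice", "order-42"))  # prints "html-v2:order-42"
```

The same property is what makes the language-agnostic swaps possible: as long as the new component honours the message format, nothing else in the system needs to know it changed.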
Best way to avoid ABI problems: engineer such that they don’t even occur in the first place.
I don’t think this is an anti-pattern. I’ve worked on this kind of codebase and had it work very well - much better than codebases that stuck to their original architecture for years.
The trick is always to remove + replace (as appropriate) the lowest layers at about the same rate you’re adding new ones. Safely sunsetting old layers is as important as introducing new + better techniques, IMO.
That’s impressive. In my experience, old Windows programs are universally awful.