It’s an interesting point. As moses insightfully pointed out, it has a core meme in common with “worse is better”. But I read as much as I could handle of the worse-is-better essays, and I don’t recall them suggesting that getting a working implementation in place first has the value of letting you explore the problem space.
Thinking of it this way, it’s clear that component boundaries are difficult to dictate for a problem that hasn’t been solved yet, especially if, as often happens, there are competing personalities who create an incentive to draw the interfaces so that people don’t have to confront their differences of technical opinion. When you have a working system, you can reasonably refactor the boundaries of your microservices; when you’re still building it on a tight deadline and none of it works yet, that’s a hard sell.
I feel like worse-is-better was ahead of its time; if it had come a few decades later, it could have talked about agile and waterfall and extreme programming and scrum and various methodologies and their respective drawbacks. As far as I can recall, it didn’t consider methodology at all; it had this notion that there’s this one best way to make a given piece of software, dictated solely by its engineering characteristics. That was always why I found it naive.
Feel free to disagree! It really has been a long time since I read the material.
So, there’s another place where monolith-first kinda makes sense: embedded installations.
(speaking from my own bitter experience moving into enterprise healthcare from a land where, you know, we can just do things and ship)
In a hospital, you can’t just spin up new VMs and containers and whatnot every time you need to; their IT/IS departments are basically hell-bent on maintaining the technological status quo (many are old enough that they entered their institution as “operators”). Similarly, in order to assuage concerns over The HIPAA™ and The Cybersecurity™, they don’t really want to use off-site resources.
So, you end up having to badger them and get some VMs, or if you’re really lucky, bare-metal installs on whatever hardware they’ve got a support contract for. The bigger the institution, the more bullshit of this nature you run into. One place we’ve dealt with, for example, quoted us something like $1K/GB for NAS storage (because logistics and salaries and ~reasons~). And networking, VLANs, IPMI access…? Urgh.
So, in those settings, setting up a microservices architecture can be hard: SOA requires good ops, and good ops requires the ability to do systems administration work quickly, and in some environments, that just doesn’t happen.
I’m puzzled why he didn’t talk much about in-process-style microservices. If you know you have some discrete modules but don’t need to separate them by process/machine then you get the benefits of decoupling without deployment issues.
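To make the decoupling concrete, here’s a minimal Python sketch of what that might look like (the service names and data are hypothetical, not from the article): the module boundary is an in-process interface, so callers never know whether the implementation lives in the same process or behind a network.

```python
from typing import Protocol


class InventoryService(Protocol):
    """Boundary interface: callers depend on this, not on a concrete module."""
    def stock_level(self, sku: str) -> int: ...


class InProcessInventory:
    """Lives in the same process: no network hop, no separate deployment."""
    def __init__(self) -> None:
        self._stock = {"widget": 3}

    def stock_level(self, sku: str) -> int:
        return self._stock.get(sku, 0)


def can_fulfil(sku: str, qty: int, inventory: InventoryService) -> bool:
    # The caller sees only the interface, so swapping InProcessInventory
    # for an RPC/HTTP client later wouldn't change this code.
    return inventory.stock_level(sku) >= qty


inventory = InProcessInventory()
print(can_fulfil("widget", 2, inventory))  # True
print(can_fulfil("widget", 5, inventory))  # False
```

If you ever do need a process boundary, only the implementation behind `InventoryService` changes; every caller stays the same.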
What’s the difference between “in-process microservices” and “regular old modular code?”
It’s a lot easier to get a conference talk proposal accepted about the former than the latter.
None. Sorry, I’m on mobile and didn’t make that clear.
This is a corollary to “worse is better”, so it’s useful for the same reasons that “worse is better” is useful, and painful for the same reasons “worse is better” is painful.
This assumes monoliths are strictly worse than microservices, which they aren’t.
Monoliths have a simple operational model and a very narrow interface (some communicate over just one port), and they avoid the management overhead of sharing code between multiple components. The ceiling on their debugging complexity is also lower.
Just because they are not the model du jour doesn’t mean they are bad. They fit a lot of projects perfectly well; I’d even say the majority of them.
I mean “worse” in the sense of the New Jersey style, not actually “worse”. It’s often quite good, and as you and the article point out, there are often big benefits to preferring a simple implementation.