“You don’t need to introduce a network boundary as an excuse to write better code”
In theory, yes, you don’t need the network boundary to write better code, but that’s only part of the problem. Keep in mind that very few organizations manage to retain employees for long, so a microservice architecture helps new developers (especially non-senior developers) get up to speed quickly, simply because each codebase is smaller and it’s easier to figure out that individual part of the domain. In my experience, that makes the domain a lot easier to understand.
You can have well-isolated codebases for distinct components without having to introduce a network boundary.
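A minimal sketch of that point, in Python: a component isolated behind an interface and called in-process, with no network hop. The names here (`BillingService`, `charge`) are invented for illustration, not taken from any real codebase.

```python
# Hypothetical sketch: a "billing" component isolated behind an
# interface and called in-process. All names are made up.
from abc import ABC, abstractmethod

class BillingService(ABC):
    """The only surface the rest of the codebase may depend on."""
    @abstractmethod
    def charge(self, customer_id: str, cents: int) -> bool: ...

class InProcessBilling(BillingService):
    def __init__(self):
        self._ledger: dict[str, int] = {}

    def charge(self, customer_id: str, cents: int) -> bool:
        # A plain function call: no serialization, no network boundary.
        self._ledger[customer_id] = self._ledger.get(customer_id, 0) + cents
        return True

# Callers depend only on the interface, so this could later be swapped
# for an HTTP-backed implementation without touching them.
billing: BillingService = InProcessBilling()
billing.charge("cust-42", 1999)
```

The isolation comes from the interface, not from where the code runs; the network boundary is an optional deployment detail, not a prerequisite.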
I personally believe that few will benefit from “true” microservices and the whole idea is more of a guideline than something you need to follow religiously.
Fixing cultural problems with technology is awkward for everyone. Fix root causes, and use as little tech as gets the job done.
If you kept track of which services were on which hosts, could you use same-host calls when possible and prevent the network boundary? And if you kept track of service dependencies, could you schedule services to run on optimal hosts (like CPU affinity) and solve that problem with your service scheduler?
Yes, you could (although there would still be a kernel context switch + serialization/deserialization).
Now you have 2 extra things (a service discovery tool with same-host prioritisation and a dependency-aware scheduler) that you have to configure correctly, test, monitor and get woken up by.
All these problems are soluble given sufficient engineering time; the question is whether burning a fortune to get there is a good use of money.
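The same-host-prioritisation idea above can be sketched in a few lines: a registry maps each service to its instances, and the client prefers a co-located one. The registry shape, service name, and hostnames here are all invented for illustration; a real service-discovery tool would be doing this dynamically.

```python
# Hypothetical sketch of same-host prioritisation. The registry data
# and names are invented; real tools keep this state dynamically.
REGISTRY = {
    "billing": [
        {"host": "node-a", "port": 8001},
        {"host": "node-b", "port": 8001},
    ],
}

def pick_instance(service: str, local_host: str) -> dict:
    instances = REGISTRY[service]
    # Prefer an instance on our own host, avoiding the network hop...
    for inst in instances:
        if inst["host"] == local_host:
            return inst
    # ...otherwise fall back to the first remote instance.
    return instances[0]

print(pick_instance("billing", "node-b"))  # co-located instance wins
```

Even in this toy form, the extra moving part is visible: the registry itself now has to be kept correct, which is exactly the operational burden described above.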
Can you point me to any projects that do this? I’d be curious to read up on them.
Take a look at projects like ZooKeeper or Consul. I’m not sure if they do same-host prioritisation, but they are among the most popular tools for service discovery and organization that I’m aware of.
Yeah, but same difference? You don’t need to track same host calls and avoid the network boundary to write better code either.
That’s a lot of work to make libraries more CPU intensive.
I don’t buy a significant part of this argument, which seems to imply that “microservices” are the right thing if only the engineering organization knew how to build them.
Also, I’m always disappointed when this is presented as a false dichotomy of “microservice vs. monolith”.
As with nearly everything, the actual answer is “rules of thumb don’t work and you need to know what you’re doing”. But because knowing what you’re doing is difficult and extremely rare, most organizations go with rules of thumb instead (or the people who know what they’re doing get overridden by those who don’t) and we end up with ridiculous scenarios like the micro vs. monolithic service “debate”.
I agree with your sentiment (gotta use our brains), but sometimes rules of thumb are pretty effective.
“Use a GC” is great advice, and is always true unless you are domain-aware enough to know when it’s not.
If you have to ask the question “should I use a GC here?”, the answer is probably yes.
Agreed. I am in favor of beginning with a monolith, and as the problem domain and implementation’s needs solidify, shaving off compartmentalized chunks of functionality into services (be they “micro” or just the standard variety).
That implies that the monolith lives on, possibly just until more isolation occurs and it shrinks to nothing, but possibly lives on forever. And that can be okay.
The big take-away being that starting with microservices is not likely to provide the same advantages that transitioning to microservices provides for a more mature product and team.
I’m in favor of writing modular code so that when the need arises it’s simply a matter of extracting the module from the monolith and writing a new main. But realistically, I’m not rewriting anything, just turning off parts of the monolith that aren’t going to run in this new, split out service.
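A sketch of what “writing a new main” might look like, under the assumption that each part of the monolith is already a self-contained module. The module names and the flag-based toggle are invented for illustration.

```python
# Hypothetical sketch: each part of the monolith is a module with its
# own entrypoint, so splitting out a service is just a new main that
# enables one module. Module names are invented.
import sys

def run_billing() -> str:
    return "billing loop"

def run_sessions() -> str:
    return "session loop"

MODULES = {"billing": run_billing, "sessions": run_sessions}

def main(argv: list[str]) -> list[str]:
    # Monolith mode: no arguments runs every module. Split-out mode:
    # naming a module runs only that one and "turns off" the rest.
    enabled = argv[1:] or list(MODULES)
    return [MODULES[name]() for name in enabled]

if __name__ == "__main__":
    print(main(sys.argv))
```

Nothing is rewritten when the split happens; the same code runs, with the unused modules simply not started.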
Indeed. Sometimes what is labeled as a microservice isn’t; and sometimes its responsibility would be better implemented as a library.
There are a lot of things that can go wrong: distributed resource sharing, choice of wrong protocol, improper state handling, etc. It’s curiously a lot easier to design a microservice incorrectly than it is to botch up a monolith.
I think in functional style many of those risks dissolve away, but also I agree.
Microservices make sense in some use cases.
I’m curious, when do you like to use microservices? Or do you prefer 100% monolithic?
I’m not a microservices proponent; I agree with the article. But I don’t think that not going for a microservice architecture implies going 100% monolithic. You can have different services without going micro ;)
For instance, a common architecture for websites nowadays is having an API backend as one service/project/thing, and the webapp frontend as another service/project/thing. I think this kind of separation, without going to ridiculous lengths of separating every functionality into its own microservice, makes a lot of sense. And it’s not a 100% monolithic solution, as you have the freedom of making architectural changes to either the frontend or backend services (like change the language/framework it is written in) without affecting the other.
Yeah, there’s a big difference between a few services and a few hundred services (Uber style).
As patrickdlogan points out above, you don’t really have to choose, since it’s a false dichotomy. It’s only about the size and responsibilities of the service, and that metric is entirely relative to the context of the developing organization.
I second this. At my workplace there used to be two monolithic applications and they are getting converted into two microservices (billing and session management) and another big application (a backoffice app). It doesn’t have to be an all-or-nothing type of approach.
Hmm, I rather like crashable stateless microservices.
Also known as Erlang. Which is indeed a nice concept, but the problem arises when people have crashable stateless services without the important pieces of Erlang: supervision trees, sane message passing, process links.
Microservices as many folks implement them really are poorly specified, slow, buggy implementations of half of Erlang.
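A toy sketch of the supervision idea mentioned above, in Python rather than Erlang: a supervisor restarts a crashable stateless worker a bounded number of times. Real OTP supervision trees (restart strategies, process links, message passing) are far richer than this; the worker and its failure pattern here are invented.

```python
# Toy illustration of Erlang-style supervision: restart a crashable,
# stateless worker up to a bounded number of times. Names are invented.
def flaky_worker(attempt: int) -> str:
    # Stateless worker that crashes on its first two runs.
    if attempt < 2:
        raise RuntimeError("crash")
    return "done"

def supervise(worker, max_restarts: int = 3) -> str:
    for attempt in range(max_restarts + 1):
        try:
            return worker(attempt)
        except RuntimeError:
            # Restart: the worker holds no state, so retrying is safe.
            continue
    raise RuntimeError("max restarts exceeded")

print(supervise(flaky_worker))  # survives two crashes, then succeeds
```

The point of the comparison stands: crash-and-restart only works cleanly when the worker is genuinely stateless and something trustworthy is doing the restarting.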
And even if you do have all those pieces, there are hard problems at the application level to think about.
Given that slightly sensational title, I thought the article was going to be some kind of diatribe based on one team getting badly burned because they didn’t realize that tools are sharp.
To the author’s credit, he at least points out some of the pitfalls of microservice-oriented architectures and doesn’t condemn them wholesale.