I firmly believe a suite of interacting microservices represents one of the worst technical designs in a world where there are many particularly bad options. Microservices are harder to observe, harder to operate, impose rigid boundaries that could make change management harder, and almost always perform worse than the monolith they replace. You should almost never choose them for any perceived technical reasons!
Where microservices might just suck less than monoliths is organizationally, something that this article doesn’t even touch on – which is honestly infuriating, because it’s easy to build a straw argument for/against on technical grounds, but IMO those barely matter.
Your monolith probably was great when it had n people hacking on it, because they could all communicate with each other and sync changes relatively painlessly. Now you’ve grown (congrats!) and you have 3n, or god help you 10n, engineers. Merging code is a nightmare. Your QA team is growing exponentially to keep up, but regression testing is still taking forever. Velocity is grinding to a halt and you can’t ship new features, so corners get cut. Bugs are going into production, and you introduce an SRE team whose job is to hold the pager and try to insulate your engineers from their own pain. Product folks are starting to ask you for newfangled things that make you absolutely cringe when you think about implementing them.
You could try to solve this by modularizing your monolith, and that might be the right decision. You land with tight coupling between domains and a shared testsuite/datastore/deployment process, which could be OK for your use case. Might not, though, in which case you have to deal with tech that’s slower and harder to observe but might actually let your dev teams start shipping again.
Yeah, services in general (not just “microservices”) are an organizational thing. It’s basically leaning into Conway’s Law and saying that the chart of components in the software was going to end up reflecting the org chart anyway, so why not just be explicit about doing that?
You could try to solve this by modularizing your monolith, and that might be the right decision. You land with tight coupling between domains and a shared testsuite/datastore/deployment process, which could be OK for your use case. Might not, though, in which case you have to deal with tech that’s slower and harder to observe but might actually let your dev teams start shipping again.
There’s a third option here: several monoliths. Or just regular, non-micro services. Not as great as one monolith, but scales better than one, technically better than microservices, and transitions more easily into them if you really need them.
This is my first port of call after “plain ol’ PostgreSQL/Django” stops cutting it. Some people say services should be “small enough and no smaller”, but I think “big enough and no bigger” is closer to the right way to think about it.
I wonder if that is true. I used to be a firm believer in what you are saying, and have said the same thing in my own words.
However, I am not so sure anymore. The reason is that a lot of the organizational efforts people make are only made when they run microservices, even though I don’t see anything preventing the same efforts (in terms of their goals) from being made with a monolith.
I’ve seen companies shift from monoliths to microservices, and they usually end up initially having the same problems and then working around them. There are usually huge process changes, because switching to microservices alone seemed to make things even worse for a while. Another part of those switches tends to be people being granted time to build good interfaces, and here is where I wonder if people are partially misled.
While microservices to some degree enforce sane interfaces - unless they don’t and your organization still creates a mess - on a technical level nothing hinders you from creating those same sane interfaces and boundaries within a monolith. From a technical perspective, whether you call a function, hit a REST API, use RPC, etc. doesn’t make a difference, except that the standard function call is usually the most reliable.
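As a minimal sketch of that point (every name here is hypothetical, invented for illustration): the same narrow contract a “billing” microservice would expose over HTTP can live inside a monolith as one public function, with everything else private by convention.

```python
"""A hypothetical in-process 'billing' boundary inside a monolith.

Callers get the same narrow contract a microservice would expose,
just as a plain function call instead of an HTTP request."""
from dataclasses import dataclass


@dataclass(frozen=True)
class ChargeResult:
    charge_id: str
    succeeded: bool


def charge_customer(customer_id: str, amount_cents: int) -> ChargeResult:
    """Public entry point: validate input, then delegate to private helpers."""
    if amount_cents <= 0:
        raise ValueError("amount_cents must be positive")
    return _submit_to_gateway(customer_id, amount_cents)


def _submit_to_gateway(customer_id: str, amount_cents: int) -> ChargeResult:
    # Private by convention (leading underscore); other domains shouldn't call this.
    # Stubbed out here - a real implementation would talk to a payment gateway.
    return ChargeResult(charge_id=f"ch_{customer_id}", succeeded=True)


if __name__ == "__main__":
    print(charge_customer("cust_42", 1999))
```

Nothing stops a colleague from calling `_submit_to_gateway` directly, of course - which is exactly the enforcement gap discussed further down the thread.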
Of course this is anecdotal, but it has happened more than once that “preparing a monolith for migrating to microservices” resulted in all the benefits being reaped by itself. It’s just not likely that anyone stops there. The usual mindset is “our goal was to use microservices, we won’t stop one step before that when all the hard stuff is already done”.
But there’s a lot more complexity added than just going over HTTP. Adding more moving parts, as you mention, brings a lot of issues with it.
In other words: I am on the fence. I agree with you, it matches what I have seen, but given that I’m seeing the first companies at least partly converting microservices back into monoliths for various reasons, while simply keeping concerns separate - something good software engineering should do anyway - I wonder if it wouldn’t make sense to find a way to organize monoliths like microservices to lower the complexity. Maybe this could be implemented as a pattern, maybe code analysis could help, maybe new programming paradigms, maybe a new or modern way of modularization.

Or, in other words: when even people who make a living off Kubernetes and microservices say that Monoliths are the Future, I’d at least step back and consider.
But then again, I think it might simply depend on company culture, or even on the particular team implementing a project. People work differently; different tools, frameworks, languages, and ways of doing time and project management work best for different people. So maybe that’s what it boils down to. So maybe just don’t listen to people telling you that you NEED to use one or the other. There are enough highly successful projects and companies out there going in completely opposite directions.
While microservices to some degree enforce sane interfaces - unless they don’t and your organization still creates a mess - on a technical level nothing hinders you from creating those same sane interfaces and boundaries within a monolith. From a technical perspective, whether you call a function, hit a REST API, use RPC, etc. doesn’t make a difference, except that the standard function call is usually the most reliable.
Don’t forget that the other problems - shared persistence layer, shared deployment, etc. - are still there.
As a junior I got to see up close some of the problems a big monolith can present: it was about mid-year and we had to do a pretty big launch that crossed huge parts of that monolith by Q4 to win a (fairly huge for us!) contract. It was going to take many deployments, tons of DB migrations, and an effort spanning multiple teams. We all knew our initial “good plan” wasn’t going to work; we just didn’t know how badly it’d be off. The architects and leads all argued over the path, but we basically realized that we were trying to condense 3 quarters of work into 2.
We pulled it off, but it sucked:
All the other work had to be paused: didn’t matter if it was in a completely unrelated area, we didn’t have the QA bandwidth to be sure the changes were good, and we could not risk a low priority change causing a rollback & kicking out other work
We deployed as much as we could behind feature flags, but QA was consistently short of time to test the new work, so we shipped tons of bugs
We had to pay customers credits because we gave up on our availability SLAs to eke out a few more release windows
We had to relax a ton of DB consistency – I can’t remember how many ALTER TABLE DROP CONSTRAINTs our DBAs ran, but it was a lot. This + the above led to data quality issues …
… which led to us hitting our target, but with broken software; we basically hit pause on the next two months of work for the DBAs and devs to go back and pick up the broken pieces
Much of our problem came about because we had one giant ball of mud on top of one ball-of-mud database; if we’d been working in an environment that had been decomposed along business domains that had been well thought out rather than just evolved, we might’ve been fine.
Or we might’ve still been screwed because even with clean separation between teams, and the ability to independently work on changes, we still were deploying a single monolith - which meant all our DB changes / releases had to go together. Dunno.
But then again, I think it might simply depend on company culture, or even on the particular team implementing a project. People work differently; different tools, frameworks, languages, and ways of doing time and project management work best for different people. So maybe that’s what it boils down to. So maybe just don’t listen to people telling you that you NEED to use one or the other. There are enough highly successful projects and companies out there going in completely opposite directions.
^^ – the best two words any programmer can say are “it depends”, and that goes double for big architectural questions.
I like microservices because when one fails you can debug that individual component while leaving the others running. Sometimes you can do this with monolith designs, but not typically, in my experience.
That’s usually a very small advantage compared to the loss of coherent stack traces, transactions, and easy local debugging. Once you have microservices you have a distributed system, and debugging interactions between microservices is orders of magnitude harder than debugging function calls in a monolith.
Yes, this is what I jokingly call the law of conservation of complexity. Your app has got to do what it has got to do, and that by itself brings a certain amount of interaction and intertwining. It does not magically go away if you cut up the monolith into pieces that do the same thing. You just move it to another layer. For some problems this makes things easier, for others it does not.

See also the “law of requisite variety” in cybernetics.
I’m a huge fan of the concept of conserved complexity. I find that it particularly shines when evaluating large changes. I’ll often get a proposal listing all the ways some project will reduce complexity. However, if they can’t tell me where the complexity is going, it’s clear they haven’t thought things through enough. It always has to go somewhere.
I’m genuinely curious if people who claim microservices solve an organisational problem have actually worked somewhere where they have been used for a few years.
It was so painful to try and get three or four separate teams to work on their services in order to get a feature out. All changes needed to be made in backwards-compatible steps to avoid downtime (no atomic deployment), and anything that needed an interface change was extremely painful. Let’s not even get into anything that needed a data migration.
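For anyone who hasn’t lived this: those “backwards-compatible steps” usually mean an expand/contract dance. A minimal sketch, with an invented field rename, of what the middle step looks like while old and new callers coexist:

```python
# Hypothetical expand/contract rename of "username" -> "login".
# Step 1 added "login" alongside "username"; this is step 2, where the
# service tolerates both while callers migrate; step 3 (much later) drops
# "username" once traffic shows nobody sends it anymore.
from typing import Any


def parse_create_user(payload: dict[str, Any]) -> str:
    """Accept the new field, falling back to the legacy one during migration."""
    login = payload.get("login") or payload.get("username")
    if login is None:
        raise ValueError("payload must contain 'login' (or legacy 'username')")
    return login


# Old and new callers both keep working mid-migration:
assert parse_create_user({"username": "alice"}) == "alice"
assert parse_create_user({"login": "bob"}) == "bob"
```

At least three deploys for one rename, across every affected service - that’s the pain being described.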
A lot of places get around this pain by always creating new services instead of modifying old ones, and there is a lot of duplication. It’s not a ball of mud, it’s much worse.
The idea that you have to communicate or work together less because you’re using microservices is… I’ll be kind here… flawed.
IME everything slows down to a crawl after a few years.
I’ve been through this and where I see things slowing to a crawl, it’s where the teams and their connections with the other teams are weak - and the organisational priorities are conflicting.
This happens with multiple teams working on different parts of a monolith.
With microservices, we get to avoid everyone _else_ being affected as much as they would have been. This is Conway again. We can fix the teams. We can grow the teams (in maturity and capability). We can fix the organisational boundaries that interfere with communications. We can align priorities.
All of the above needed to happen anyway with a monolith, but we used to have hundreds - sometimes thousands - of people stuck or firefighting because some teams were unable to collaborate on a feature.
Feature teams are a great answer to this general problem, but they are hard to make happen where there are huge pieces of tech that require esoteric skillsets and non-transferable skills (‘I only want to write C#’).
I’m seeing developers enjoying picking up new languages, tools, and concepts, and I’m seeing testers become developers and architects, and us actually getting some speed to market with exceptional quality.
This isn’t because of microservices. It’s because the organisation needed to be refactored.
Microservices aren’t what we need. They are slightly wrong in many ways, technically, but we now build with functions, topics, queues (FIFO where we need to avoid race conditions: not all distributed systems problems are hard to solve), step functions, block storage (with a querying layer!) - and other brilliant tools that we wouldn’t have been able to refactor towards if we hadn’t moved to microservices - or something else - first.
I’ve spent the past 12 years working on a service implemented as a bunch of microservices. At first, the Corporation started a project that required interfacing with an SS7 network and, not having the talent in-house, outsourced the development of a service that just accepted requests via the SS7 network and forwarded them to another component that handled the business logic. The computers running the SS7 network required not only specialized hardware, but proprietary software as well. Very expensive, but since the work this program did was quite small, hardware was minimized compared to the hardware needed to run the business logic (and the outsourced team was eventually hired as full-time employees).
A few years down the road, and now we need to support SIP. Since we already had a service interfacing with SS7, it was just easier to implement a service to interface with SIP and have it talk to the same backend that the SS7 service talked to.
Mind you, it’s the same team (the team I’m on) that is responsible for all three components. Benefits: changes to the business logic don’t require changes to the incoming interfaces (for the most part—we haven’t had to mess with the SS7 interface for several years now for example). Also, we don’t need to create two different versions of our business logic (one for SS7, which requires proprietary libraries, and one for SIP). It has worked out quite well for us. I think it also helps in that we have only one customer (one of the Oligarchic Cell Phone Companies) we have to support.
Where microservices might just suck less than monoliths is organizationally
That depends on your perspective. I remain convinced that the primary function of microservices is to isolate the members of a laboring force such that they cannot form a strong union. That is, take Conway’s Law and reverse it: create a policy to specifically -introduce- separation between workers, and they won’t have a reason to talk, which makes it less likely that they’ll unionize. In that framing, the primary function of microservices is to prevent programmers from unionizing.

I chuckled.
Truly, the people around me (as far as I can tell, including public figures covered by the media) tend not to think about communicating effectively with others, and instead tend to vilify them and otherwise avoid having the conversations necessary for further progress.
Perhaps it’s just a simple fact that most people are not trained in communication, and IT people specifically have not had that much hands-on experience to compensate. Not that the rest of the population is that much better at it (on average).
In short, I wouldn’t attribute the phenomenon to malice. I think that IT people are not unionizing simply because that means talking to people, which is (on average) exhausting and hard.

It would be interesting to know, then, whether microservices are less common in this country, where more or less everyone is in a union already.
If increasing the number of developers meant you couldn’t ship new features and corners got cut, then you need to reduce the number of developers. The shapes of organizations can change; they must. It’s important for us to fight to change them for material reasons, like reducing complexity and friction.
Making the monolith modular is a good example of realizing that solving the real problem is very hard, so we solve a different problem instead. Problem is, we didn’t have that second problem. And in fact it may make our real problem (reality) harder to solve in the future.
Microservices serve to introduce artificial boundaries that you cannot cheat around the way you might in a monolith - and it’s the sum of those little “one-time” boundary breaks that makes old monoliths unmaintainable. Moving the boundaries to the network makes you do it right every time. But the boundary doesn’t have to be the network; it just has to be unbreakable. While this could be achieved with strict code review, I would be interested in seeing something that could be automatically enforced at the language level.
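One way that language-level enforcement could look - a toy CI check, not a recommendation of any particular tool; the “billing” package and the rule itself are invented for illustration:

```python
"""Toy boundary check: fail CI if code outside billing/ imports its internals."""
import ast
import pathlib
import sys

FORBIDDEN_PREFIX = "billing."  # 'import billing' is fine; 'billing.anything' is not


def violations(root: str) -> list[str]:
    found = []
    for path in pathlib.Path(root).rglob("*.py"):
        if "billing" in path.parts:  # the package may use its own internals
            continue
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            found += [f"{path}: imports internal module {name!r}"
                      for name in names if name.startswith(FORBIDDEN_PREFIX)]
    return found


if __name__ == "__main__":
    problems = violations(".")
    print("\n".join(problems) or "boundaries intact")
    sys.exit(1 if problems else 0)
```

Run something like this in CI and the little “one-time” boundary break becomes a failed build instead of tech debt.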
I beg to disagree. At work, we use microservices extensively, and you end up with the weirdest boundaries. For example, we created an entire service to store one boolean per user, where the only operations are “set” and “get”, and there is very little scope for change.
Or you have a case where a central kind of entity lives in one service, while the lifecycle of that same entity is managed by another.
Or there was the case where we had one core service driving 1) creating a message in the message store, 2) creating a new file in the file service, 3) binding that message to the new file, 4) finalizing the file, then 5) telling another service to upload that batch file. All the core service cared about was “this message needs to go out sometime”, so that should have been the interface. But, you know, that’s easy to say in hindsight, and when you’re not under time pressure.
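To make the hindsight concrete, a sketch of the interface it arguably should have been, with the five-step dance hidden behind one call (all helpers below are invented stubs echoing the comment’s service names):

```python
import uuid


def send_message_eventually(message_body: bytes) -> str:
    """The whole contract the core service needed: 'this goes out sometime'."""
    message_id = _store_message(message_body)   # 1. create in the message store
    file_id = _create_batch_file()              # 2. new file in the file service
    _bind_message_to_file(message_id, file_id)  # 3. bind message to file
    _finalize_file(file_id)                     # 4. finalize the file
    _request_upload(file_id)                    # 5. tell the uploader service
    return message_id


# Stubs standing in for the real service calls:
def _store_message(body: bytes) -> str: return f"msg-{uuid.uuid4()}"
def _create_batch_file() -> str: return f"file-{uuid.uuid4()}"
def _bind_message_to_file(message_id: str, file_id: str) -> None: pass
def _finalize_file(file_id: str) -> None: pass
def _request_upload(file_id: str) -> None: pass


print(send_message_eventually(b"hello"))
```

The orchestration still exists either way; the question is which side of the boundary owns it.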
Unless you have folks who have learned in an environment where it’s cheap and easy to restructure your boundaries, and you give folks enough time to design things well, there’s a greater risk that you ossify bad interfaces. The (reasonable) tendency to have disjoint datastores per service, without being able to easily migrate data between services, only makes this worse in my experience.
I’d say “no true Scotsman developer does this!” but they do, oh god do they ever.
I want to pick on this specific example which is very much lived experience for me:
Or you have a case where a central kind of entity lives in one service, while the lifecycle of that same entity is managed by another.
DDD talks about bounded contexts, which group different parts of your business together. Sales, support, and reporting all care about “customers” – but they care about very different parts of that customer. If your microservice crosses bounded contexts, you’re in for hurt.
Inside that bounded context are aggregates. Inside the sales context you might talk about a customer’s orders – maybe you have a customer, one or more ordered items, and a shipment date. But that’s an aggregate that should accept business messages: “create an order”, “void an order”, “approve an order”, (…). The functionality and data should live together. If you have a microservice for each piece, the burning will never stop.
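A minimal sketch of such an aggregate, with invented fields, for readers who haven’t met the DDD jargon - the point is that “approve” and “void” are behavior living with the data, not endpoints scattered across services:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Order:
    """Sales-context aggregate: the data and its business rules live together."""
    customer_id: str
    items: list[str] = field(default_factory=list)
    ship_on: date | None = None
    status: str = "draft"

    def approve(self) -> None:
        """Business message: 'approve an order'."""
        if not self.items:
            raise ValueError("cannot approve an empty order")
        self.status = "approved"

    def void(self) -> None:
        """Business message: 'void an order'."""
        if self.status == "shipped":
            raise ValueError("cannot void a shipped order")
        self.status = "void"


order = Order(customer_id="cust-7", items=["widget"])
order.approve()
print(order.status)  # approved
```

Split that into a “customer service”, an “order-items service”, and a “shipping-date service” and every business rule becomes a distributed transaction.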
The problems I’ve seen with microservice design are almost always because - beating this drum again - developers select them for perceived technical reasons: I can use MongoDB here and it’s fast, I can use serverless and let it scale, whatever. And when you don’t do the proper business analysis around what you’re building, you end up building shiny little things that work great when you run ab against them in dev, but fall flat on their faces in production when - like your “get/set a boolean” service - introducing them just means adding a network boundary for zero gain.
What I meant is that it makes you perform the communication between the components without cheating, every time. Not that those components make sense. Both monoliths and microservices do nothing to stop you dividing things in stupid ways, but microservices at least force you to divide them, instead of letting you access whatever you like and externalize things that were meant to stay internal. Any interface design is still better than none, and in a monolith you can get by for a good while with zero time spent on designing the interfaces between the components.
Oh, and I don’t like microservices. I very much prefer a small set of well-engineered deci-services over a ton of microservices.

No. Nothing could be further from the truth. The wrong interface, once ossified, can kill a project or company.
The problem is that people still end up “cheating” their way through boundaries, mostly by simply throwing the initial design overboard and letting it take on other roles by exception, thereby complicating things.
It all sounds nice in theory and I used to be a huge proponent of microservices, but the reality in both small and big companies is that in cases where it works fine - good engineering, design, separation of concerns, etc. - the same thing could have been done in a monolith.
A lot of the benefits only hold to the degree they would if you took the same care with a monolith, except that you usually also end up having to handle the additional complexity of dealing with microservices.
At least the disregard for the initial design is explicit, at both ends, just by it being described as some sort of API. Often in a monolith you can’t even tell what was intended to be internal and what was not, and similarly, you can’t really tell whether another component is being used in an intended way or not.
And yes, monoliths, or coarsely separated services are better than microservices IMO.
The main benefit of microservices, to me, is independent deployability, which in turn can reduce the amount of intra- or inter-team coordination required. (There are certainly other ways to do that, such as lots of tiny WAR files in Apache Tomcat, but that’s by the by right now.)
The article kinda touches on this in terms of team autonomy, but when you have a team of folks working on different facets of the product that align well with your service boundaries, just not having to worry about contending deployments is huge.
Conversely to the backend microservices we had, for a long time our internal web frontend was effectively a modular-ish monolith. It worked okay, but again, it had its challenges. Deployment was often done in batches, and if you weren’t careful, you could end up with a week’s worth. And if one of those changes needed to be rolled back, quite often the week’s worth of changes would be rolled back with it. It’s less bad if you can fix forward, or create a backout pull request quickly, but that’s not always doable.
Never mind the waste, and unrealised value represented by those un-released changes.
I wonder: if the software development community zeitgeist had just accepted the Perl, PHP, and Ruby world it was born into, could a lot of pain have been avoided? But then a lot of cool, fun stuff would have been missed out on. The dilemma of non-committee-driven progress, I suppose. I do write this as a person who somehow, miraculously, rewrote a large, mission-critical Perl codebase (really without the experience to qualify for it) in a new language. Not that it matters much. But I don’t have a dog in the fight for Perl, PHP, or Ruby is what I mean.
… until you do.