I came across this article as an interesting read. However, having worked at Uber, where we were super early adopters of microservices and grew to over 1,000 of them (which brought a lot of unexpected pain points), I feel the need to add the downsides of microservices - and to explain why you don’t really hear engineers from Uber boasting about how great thousands of microservices are.
First, it’s testing: specifically, the difficulty of integration testing, which results in outages. When you have microservices that depend on each other and are deployed independently, one of the most common causes of outages goes like this: ServiceA is deployed, then ServiceB - unaware of the latest change in ServiceA - is deployed, and boom, a problem that an integration test could have caught. Ok, so how do we write that test? Well, we now either need to share a codebase, or stop any deploy from going out without checking out the latest code of the services it depends on and running their tests. Ok, so that’s not really autonomous deployment… and try solving this problem for dozens of dependent services.
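One middle ground that teams reach for here is a consumer-driven contract check: the consumer records the response shape it depends on, and the provider runs that check in its own CI before deploying. A minimal sketch of the idea (all service names and fields here are made up for illustration):

```python
# Illustrative consumer-driven contract check. ServiceB (the consumer)
# publishes the fields it relies on from ServiceA (the provider), with
# their types. ServiceA runs this check in its own build, so a breaking
# change fails ServiceA's deploy instead of causing a runtime outage.

# Contract published by ServiceB: fields it depends on, and their types.
SERVICE_B_CONTRACT = {"user_id": int, "email": str}

def service_a_get_user(user_id):
    # Stand-in for ServiceA's real handler.
    return {"user_id": user_id, "email": "ada@example.com", "name": "Ada"}

def satisfies_contract(response, contract):
    # The provider may add extra fields freely; it must not drop or
    # retype the fields the consumer depends on.
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

assert satisfies_contract(service_a_get_user(42), SERVICE_B_CONTRACT)
```

This doesn’t replace integration tests, but it catches the “ServiceA changed, ServiceB didn’t know” class of outage without coupling the two deploy pipelines.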
Second, it’s library versions and conventions. When you start with 2 or 3 microservices that used to be the same monolith, you probably have the same versions of libraries and use the same conventions. Fast forward to 15 microservices and a vulnerability discovered in an old version of a dependency. Chances are, the versions of third-party libraries will be all over the place, as each microservice updates at a different time - leaving some of them exposed to the vulnerability. The conventions on what style to follow or what linting rules to use will also drift apart.
Third, it’s (build) tooling. With a monolith, the same linting, static analysis, and test coverage requirements are in place everywhere. With microservices, unless there’s a team helping with tooling, it will likely be pretty ad hoc: some services have a high quality bar, others not so much.
Finally, ownership and responding to incidents. When it’s easy to create microservices, it’s tempting to do so. But people often underestimate the maintenance needs of these - or just ignore them if they’re too much. Over time, this can lead to zombie services: ones that are not actively maintained or monitored, or ones that are deployed but have little to no use. Developers of small services might move on and leave these behind until someone else stumbles across them.
All the above being said, we still use microservices extensively… except we’re conscious of (not) creating overly small and simple ones, and we realize that investing in tooling to solve the testing and library versions/conventions pain points is a must.
We’re starting to break up our monolith and are definitely worried about the pain points you mention.
Do you have an opinion about Pact to try to deal with some of the integration testing issues?
We’re also planning to not go “micro” with our services. Our current plan is for roughly 1 service for every 2 engineers, but half of those services won’t even change very often (think stuff like feature flags). Hopefully the relatively small number of services will make our lives more manageable as well.
I really don’t understand this argument. What stops you from writing an extremely simple library that pretends function calls are requests to a microservice? What stops you from organising your codebase so that each team is responsible for a library, or just a collection of files?
I have been thinking a bit about this recently, and in the end, I think the difference is human laziness. When deploying as a monolith, it is noticeably easier to “cheat” and cross the library boundary in a way that increases coupling slightly. (I suspect this is because you don’t have to worry about synchronizing deployment and making the new version dependency explicit.) With a service-oriented architecture, the independence of components is slightly more marked, making accidental coupling harder to sneak in.
However, finding developers that are disciplined enough to treat component boundaries with the same reverence as service boundaries turns out to be hard. I currently work with the most skilled team I have ever met, and we all still get tempted to couple components from time to time.
Thanks for this comment; I suspected this is one half of the answer, and I think the other half is that introducing a service boundary forces you to turn your actions into data. When you want something done, instead of scattering the business logic and execution over a number of classes, you’re forced to turn that intention into a self-contained piece of data that describes it, and then you pass it through a medium that logs it. All of this is great for software architecture and for debugging/testing.
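The “intention as data” idea can be sketched in a few lines (all names made up): the request becomes a plain record you can log, queue, replay, or assert on in tests, independent of the code that eventually executes it.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative "action as data": the intent is a self-contained record,
# and the medium that carries it to the handler also logs it.

@dataclass(frozen=True)
class RefundOrder:
    order_id: str
    amount_cents: int
    reason: str

audit_log = []

def dispatch(command):
    # Log the command as data before handing it to the executor.
    audit_log.append(json.dumps(asdict(command)))
    return execute(command)

def execute(command):
    # Stand-in for the actual business logic.
    return {"refunded": command.amount_cents}

result = dispatch(RefundOrder("o-123", 500, "damaged item"))
assert result == {"refunded": 500}
assert len(audit_log) == 1
```

Nothing here requires a network boundary - which is exactly the point being argued.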
But again, you don’t need microservices to force any of this. I’ll go as far as to claim that microservices are a political tool for bringing some functional-programming sanity to an OOP environment. So my response is: trust your intuition about what good software architecture looks like and go fully functional, without crippling yourself with microservices (unless you’re Google-scale and genuinely need the benefits that only matter once your project has hundreds of developers).