To clarify, I don’t necessarily agree with the author wholesale; I just wanted to hear people’s thoughts on the topic.
I disagree fundamentally. What about things like client-side load balancing, or stats, or an admin service? Why reinvent the wheel constantly? Right now I’m imagining the horrifying amount of effort it must take when a new 0-day comes out and you need to fix it on each of your totally artisanal microservices in totally different ways.
There is also a larger tradeoff this promotes inadvertently, which is probably a good tradeoff in general: don’t allow local state to affect global state. Basically, if your service turns on a switch, that shouldn’t turn on the switch on everyone it talks to. This can be a very powerful behavior, but also a very dangerous one, and it’s easy to shoot yourself in the foot with it. One example of where it can be useful is something like tracing, when it’s restricted to a single request. By enabling a distributed trace at the top of your request, you can continue tracing through your entire architecture and see the life of a request, which is very neat, but would also be very difficult to do if you’re insisting that each of your services must be handcrafted.
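That request-scoped propagation idea can be sketched in a few lines. This is a minimal illustration, not any particular tracing library’s API: the header name and both function names are made up for the example.

```python
import uuid

# Hypothetical header name for carrying the trace id between services.
TRACE_HEADER = "X-Trace-Id"

def handle_request(headers: dict) -> dict:
    # Adopt the caller's trace id if present; otherwise this request is
    # the top of the trace, so mint a fresh id here.
    trace_id = headers.get(TRACE_HEADER) or uuid.uuid4().hex
    # ... do local work, logging against trace_id ...
    # Return the headers to forward to downstream services.
    return {TRACE_HEADER: trace_id}

def call_downstream(incoming_headers: dict) -> dict:
    # The "switch" is scoped to the request: we only propagate the id,
    # we never flip global state on the services we talk to.
    return handle_request(incoming_headers)
```

The key property is that the trace id lives in the request, not in any service’s configuration, so turning tracing on at the top never mutates anything downstream.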
I believe in these cases instead of reinventing the wheel you would extract it and create another microservice or library. This way, functionality can still be shared between the microservices, but the code would remain independent and isolated. However, the article is still pretty naïve and poorly written.
I agree with you. When I think of how to solve these kinds of problems, sharing code, often via libraries, is almost always the right solution. Sharing libraries seems to be one of the things the article is asking you not to do though:
“Leverage existing technical functionality, e.g. through shared libraries.” is listed as a way of “sharing code”, and the article complains that sharing code “will attach your services together via the shared code”.
My view: if you want to share code between two independent projects in a library, then that library should become a project in its own right, with a release process, semantic versioning, its own test suite and all the rest of it.
I think this is very poorly written. Coming out of it, it’s unclear to me when I should share code and when I shouldn’t. This isn’t the first time I’ve heard this claim. IMO, the sentiment has value but the formulation is completely wrong. This article hints at it but doesn’t really drive the point home.
Not sharing code between microservices is a consequence of the design, not a goal in and of itself. Your systems should have very little to share; sharing code between microservices is a smell. So one shouldn’t use this as an excuse for copying and pasting code.
That being said, I find articles like these odd. My team and I develop code in such a way where this question would simply never come up. We have very generic components that implement clear and concise semantics, taking callbacks for any policy decisions. An application then has one or two units (modules in our case) that tie all of these together in a coherent system. So we have very heavy code reuse because reusing a component is so easy. You either need those semantics or you don’t. And very few things have pretty-close-but-not-quite semantics in our system.
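The “generic mechanism, callbacks for policy” style described above can be sketched like this. The retrying executor below is an invented example of the pattern, not code from the commenter’s system: the mechanism (the retry loop) is generic and reusable, while the policy (when to give up) is injected by each caller.

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def run_with_retry(
    action: Callable[[], T],
    should_retry: Callable[[int, Exception], bool],
) -> T:
    """Generic mechanism: keep calling `action` until it succeeds.

    Policy lives entirely in `should_retry(attempt, exc)`, so services
    with different backoff/give-up rules reuse the same component
    instead of maintaining pretty-close-but-not-quite forks.
    """
    attempt = 0
    while True:
        try:
            return action()
        except Exception as exc:
            attempt += 1
            if not should_retry(attempt, exc):
                raise
```

A caller supplies only its policy, e.g. `run_with_retry(fetch, lambda n, exc: n < 5)` to retry up to four failures.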
I think we are lucky in that our tech lead was a semanticist. I’ve become a believer that every team needs a semanticist. The ones I’ve talked to/worked with have the ability to see problems in a very crisp way, boiling it down to its essentials.
I think this is right but for the opposite reason: if two components are closely related enough that they can share code, then they don’t belong as separate microservices. Put them in a monolith.
“if two components are closely related enough that they can share code, then they don’t belong as separate microservices”
Yeah, I don’t see this at all. There are plenty of components in real systems that are incredibly similar that should be separated out and made modular to prevent wheel reinvention. This is why software libraries exist. I feel like this perspective comes from people who are used to platforms that do damned near everything for them, such that their “microservice” is a few shoestrings of code to bolt one library to another - environments like node.js and django where you can import what amounts to your entire service and then call .runService(). What I’ve noticed in these systems is that they tend to become tarballs of hidden dependencies due to one library optionally supporting some functionality from the next, making them too tightly coupled to then separate out from one another.
I feel like the whole article/argument should be that tight coupling is an understandably easy trap to fall into with large systems, and should be avoided at all costs when dealing with microservices. That doesn’t mean “don’t share code”; it means making sure that the code you’re sharing is only exactly the code you need to share to make both services work optimally.
I tend to agree with you. I think the author is just on the far side of the arc with reacting to massive convoluted systems.
Having worked on a terribly interconnected monolith, I yearn for a microservices approach, if only to simplify our organisation’s understanding of the service as a whole.
Taken literally, this means two services that communicate using json must use two different json libraries. If you have more than a few services, you will quickly run out of json libraries to choose from!
That’s fine, just reimplement your own json parser ;) Of course, you can’t use the same parser lib twice either …
That also means we can’t use the same compilers!
Code reuse is not a sin. Libraries are fantastic. Life would be pretty shitty if I had to build my microservices without shared encryption, compression, and checksumming libraries or if I duplicated schema definitions all over.
However if the author is talking about shared business logic I tend to agree. In an ideal world business logic doesn’t really end up needing to be repeated in multiple services. I’d expect shared libraries to be used for things like models/schemas and message passing in many microservices.
I think the recent microservices trend is pretty interesting. You ought to be able to write code and just swap out any bit of it in production; this is easy when the bits are different processes. And that’s the argument for changing how we do architecture.
But isn’t this something that Erlang solved a long time ago? I wonder if microservices are like a solution for languages that don’t have hot swapping of code. I don’t know, it just seems like if you were to design things from the ground up, microservices wouldn’t be the end state.
You also seem to get horizontal web scaling for almost free. The services are presumably stateless, so you can spin up as many as you need, and you need only scale up the parts that need it, not a monolithic app.
For sure. Perhaps it’s best to isolate the larger, less frequently used components.
In concrete terms, my blog has four luajit workers, about 7MB each. There’s also a single python service for code highlighting (pygments). It’s 20MB. I could have done something diabolical and embedded Python in each worker process, but that would have exploded the total size. 100 whole MB!
Clearly I’m working at a different scale, and my motivation wasn’t really memory saving, but if I needed to scale up ridiculously, I’d definitely be adding pygments processes at a slower rate than other processes.
I’m not really a microservices adherent. Just stumbled into some of its benefits before it was even a thing.