Microservices seem to be abused toward the same end as object-oriented programming in the 1990s and 2000s: to insure against individual programmer failure. If a service becomes a piece of shit, it’s disposable. The same thing was said about objects and classes in 1997: the promise was that you could hire commodity programmers and toss away the objects that were poorly designed.
I think there are benefits and drawbacks to microservice architectures. It is better to be able to throw away part of a codebase than to have to tear the whole thing down. Modularity is good. That said, microservice architectures can be remarkably immodular, especially when you have immature people putting them together.
Moreover, I bristle at anything that seems like it could be used by The Business to convince itself that poor engineering and the mass hiring of open-plan commodity developers is OK. It is not OK, it does not work, and it never will, despite 40 years of efforts toward that direction.
This article would benefit greatly from an example of when making the unpopular decision was much better than the popular one, or vice versa.
Could anyone comment on that?
I’ll bite, though this sort of thing is basically a matter of walking the line between “unpopular but correct” and “stupid in hindsight”. That divide is probably why you won’t get many replies until somebody breaks the ice.
A few examples from the last ten years:
Using custom C++ in a game-development class when everybody else was using C#, XNA, or some game framework. The downside was that debugging was a colossal pain in the neck and our iteration time was kinda slow; we couldn’t show the professor anything for a month or two, but when things clicked, they clicked. The upside was that we had a deeply intimate understanding of the project, and some clever features (physics, particle systems, sound effects, massive numbers of mobs, etc.) were really only feasible at that level of performance because we had such tight control over things. This was also before Unity got super popular (and it still seems kinda shit for certain tasks).
Using vanilla Express for small projects, instead of $flavor of the week. This includes using in-memory objects to store information, which basically feels like Mongo but with better performance for the same resiliency guarantees :). No Sails, no HAPI, nothing too exciting. The bad part about this is that it doesn’t let me buzzword my resume, it doesn’t expose me to the crazy new trends in the JS ecosystem, and it doesn’t help me sit at the cool kids’ table at JS conferences. The good part is that it doesn’t expose me to the crazy new trends in the JS ecosystem, it makes it really straightforward to start a new project, and it means I have very little spooky action at a distance when sorting out my apps.
Using ES5 instead of ES20XX or whatever they call it these days. No stabby lambdas, no classes, no string interpolation (sadly), and so forth. The bad part is that if I suggest this to modern JS devs, they treat me like an anachronism or a masochist. The good part is that I can always test my shit really quickly in any semi-modern JS runtime environment, that I don’t need to bring in a whole clowncar of dependencies just to compile my code, and that my source looks utterly boring to anyone else inflicted with it (which is a feature, not a bug).
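To make the trade concrete, here are the ES5 spellings of a few of the features I’m swearing off (the names are my own, purely for illustration):

```javascript
// ES5 equivalents of a few ES2015 features, runnable in any
// semi-modern JS engine with no transpiler or build step.

// ES2015: var double = (x) => x * 2;
var double = function (x) { return x * 2; };

// ES2015: class Counter { constructor(start) {...} increment() {...} }
function Counter(start) {
  this.value = start || 0;
}
Counter.prototype.increment = function () {
  this.value += 1;
  return this.value;
};

// ES2015: var message = `count is ${counter.increment()}`;
var counter = new Counter(1);
var message = 'count is ' + counter.increment(); // concatenation, not interpolation
```

Boring, verbose in spots, and it runs everywhere without a clowncar of build dependencies.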
At a previous gig, using dedicated hardware instead of cloud hosting for data collection and analysis. The bad part was that we couldn’t pad our resumes with having deployed Amazon Lambdashift on Riaksandra on a MesosMQ fabric, that we couldn’t trivially spin up/down new instances for testing (argh), and that we had to deal with stupid politics involved in provisioning systems. The good part was that we didn’t have much spare funding to do hosted stuff anyways, that the customers would only allow on-site hosting and doing that let us skip a lot of really terrible political battles, and that we could do reliable benchmarking of some things that wouldn’t be easily possible otherwise. Honestly, this is the one decision (not made by me) that I would reconsider.
At a previous gig, insisting on dropping support for all versions of IE < 10. The bad part of this was that, of course, we still had a couple snowflake customers we had to accommodate. The good part is that we simplified our development pipeline, that we were able to aggressively target some HTML5 features (Websockets, WebGL, web workers) that would’ve been a pain in the ass to shim, and that we were able to greatly ease our development burden.
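As a sketch of what dropping IE < 10 buys you: feature support becomes a hard gate instead of a shim decision. The function name and the exact set of checks here are my own illustration; it takes the global object as a parameter so it can run outside a browser, but in a page you’d call it with `window`.

```javascript
// Hypothetical feature gate for the HTML5 features mentioned above
// (Websockets, WebGL, web workers). If this returns false, show an
// upgrade notice instead of loading a pile of shims.
function supportsModernFeatures(global) {
  var canvas;
  if (typeof global.WebSocket !== 'function') return false; // Websockets
  if (typeof global.Worker !== 'function') return false;    // web workers
  if (!global.document || !global.document.createElement) return false;
  canvas = global.document.createElement('canvas');
  return !!(canvas.getContext && canvas.getContext('webgl')); // WebGL
}
```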
Doing the unpopular thing isn’t always conservatism, though it can look like that: it’s doing the thing that fits your team and your project. And even then, you can still screw up.
Now seems like a good time to mention the hype cycle. Microservices are definitely at the peak right now. I find it funny that even though the best advice in the business essentially says “hey, maybe don’t do this at first; wait until you can split a larger app apart,” people still want to start out with microservices.