1. 47

  2. 18

    Microservices do have technical benefits. For example, if you are using a library that is the best tool for a given job but has not been updated to work with later versions of a language or operating system stack, you can isolate it as a back-end component: protected from the outside world, still serving its purpose, and independently scalable, while your core business logic moves to a more up-to-date version of the language/OS stack.

    This has helped me with multiple problems in the past, and I have seen other places where the pattern could have been applied to good effect but where, instead, an entire monolith ended up mired in an ancient, end-of-life stack because of a single dependency.

    1. 1

      I did something similar when I needed a symbolic computation library in a Rails app. I ended up writing a service in Python + sympy and passing commands and results back and forth between Rails and Python. This approach differs in two important ways from the current microservices hype:

      1. It starts with a problem (e.g. no Ruby counterpart to sympy or abandoned dependency) to which extracting a service might be a solution.
      2. It doesn’t change the architecture of the remaining app. If it’s a monolith then we’ll end up with a monolith + service.
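
      A minimal sketch of what the Python side of such a bridge might look like, assuming a JSON-over-HTTP transport (the comment doesn’t say how commands and results were actually passed, so the endpoint and payload shape here are illustrative, not the real setup):

          # Tiny sympy worker: the Rails app POSTs an expression and gets the
          # simplified form back. Transport and payload shape are assumptions.
          import json
          from http.server import BaseHTTPRequestHandler, HTTPServer

          import sympy

          class SympyHandler(BaseHTTPRequestHandler):
              def do_POST(self):
                  length = int(self.headers.get("Content-Length", 0))
                  payload = json.loads(self.rfile.read(length))
                  # sympify parses the string into a symbolic expression;
                  # acceptable only because input comes from our own Rails app.
                  expr = sympy.sympify(payload["expression"])
                  result = str(sympy.simplify(expr))
                  body = json.dumps({"result": result}).encode()
                  self.send_response(200)
                  self.send_header("Content-Type", "application/json")
                  self.end_headers()
                  self.wfile.write(body)

          if __name__ == "__main__":
              # Bound to localhost: a back-end component shielded from the outside.
              HTTPServer(("127.0.0.1", 9292), SympyHandler).serve_forever()
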
    2. 12

      If this is the scalability method you plan to use, why not just deploy more copies of your monolith behind a load balancer?

      I’ll never understand why this isn’t the first option for scaling for most organizations. Am I just old fashioned?

      1. [Comment removed by author]

        1. 8

          Why are your features so numerous or expensive that the cost of having an idle feature in a horizontally-scaled monolith outweighs the cost of managing microservices? Idle features should be nearly free. There’s a slight memory cost to having the code loaded, but that should be negligible, and there should be no other meaningful costs.

          1. 3

            Out of curiosity, what does your software do? Like, what are all these features?

            1. 7

              Here’s an example from my workplace.

              We have a mostly-monolithic application (running several instances behind an LB) that we’ve been slowly decomposing into microservices. One of those microservices handles PDF generation and sees very bursty usage: demand varies dramatically by time of day and time of year compared to the rest of the monolith.

              Because it’s a separate service, the PDF generator can scale up and down based on actual usage of that feature and it won’t bog down any of the other application features.
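
              A skeletal version of that kind of worker, to make the scaling point concrete: because it is stateless and pulls jobs from a queue, you can run as many or as few copies as the backlog demands. This sketch assumes a Redis list as the job queue (via redis-py); the queue name, job shape and the render step are placeholders, not the actual implementation:

                  # Stateless PDF worker: run more copies during bursts, fewer off-peak.
                  # Assumes a Redis-backed job queue; names and job shape are illustrative.
                  import json

                  import redis

                  def render_pdf(job: dict) -> bytes:
                      # Placeholder for the actual PDF rendering step.
                      raise NotImplementedError

                  def main() -> None:
                      queue = redis.Redis(host="localhost", port=6379)
                      while True:
                          # Block until a job arrives; worker copies compete for jobs,
                          # so adding or removing copies tracks the bursty demand.
                          _key, raw = queue.blpop("pdf_jobs")
                          job = json.loads(raw)
                          pdf = render_pdf(job)
                          queue.set("pdf_result:" + job["id"], pdf)

                  if __name__ == "__main__":
                      main()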

              1. 4

                But this would be true of a monolith as well; if every process in your backend could satisfy any request, you would just start more of those processes to handle increased load for any API endpoint. “Put more monoliths behind your load balancer” is still a solution to this, unless I’m missing something.

                1. 7

                  Not necessarily; scaling a monolithic application is likely to consume more resources (memory, sockets) and take longer to initialize. With microservices you can scale exactly what you need, which saves you operation costs. The benefits are offset by the cost of managing the microservices, which is where a monolithic application shines, because there’s just one type of application/service to deal with.

                  1. 4

                    With microservices you can scale exactly what you need, which saves you operation costs.

                    With microservices, you’re likely sending 2 or 3 requests to backend systems and throwing the slower one away to reduce latency. You’re parsing and serializing messages. You’re sending more network traffic between nodes. And you’re dealing with expensive things like distributed lock management. I’d expect operations costs to go up with a switch to microservices.

                    There are times when a whole application stops fitting onto a reasonably sized box, or where you need to distribute things for reliability reasons, but if you can fit everything onto one node in one process, your resource usage would probably drop.

                    Separate (micro-)services have their use, but take care.

                2. 4

                  Whoa, wait a minute: in what universe is “generating a PDF” a microservice? That’s just a service. I’ve built loads of applications that have services dedicated to a specific feature, often in cases where the feature itself is stateless, like PDF generation.

                  Apparently, I was doing microservices back in the early 00s.

                  1. 3

                    Apparently, I was doing microservices back in the early 00s.

                    Sure. I don’t see anyone claiming otherwise. All the microservice gurus I’ve heard acknowledge that microservices existed long before the name.

                    The newish[1] thing isn’t microservices as such, but a “microservice architecture” where everything is a microservice.

                    That’s just a service.

                    A small service. Perhaps even micro? :-)

                    [1] Yes, even that’s not entirely new. “SOA done right,” etc.

                    1. 2

                      I consider a PDF generator pretty “macro”, personally. It’s not a small feature in any reasonable sense of the word “small”.

                  2. 2

                    Interestingly enough, these arguments in favor of microservices advertise the same benefits as microkernels like QNX: easy to upgrade despite version issues (hide it in a VM w/ IPC); limit failures to one node; selectively scale individual components based on usage (e.g. single CPU, SMP, AMP). Too bad most companies or projects don’t just go the rest of the way and use or extend microkernel OS’s to get those benefits everywhere. :)

                    Note: The containers, clouds, etc can approximate them pretty well just with more complexity and possibly less reliability.

                    Edit: Also, isolating a PDF reader was an early proposal of mine for separation kernels like the Nizza Security Architecture. It would be a combo of secrets kept on the microkernel, a trusted GUI (Nitpicker) that leveraged app-specific virtual screens, a PDF reader in an isolated process on a virtual screen, and most GUI functions in a Linux VM. Hacks would be limited to passing BS through the virtual screen or IPC, both of which are handled by tiny, simple components.

              2. 3

                It usually is the first option. However, if their monolith is anything like ours was by the end at $dayjob-1, startup took minutes, so adding new capacity was constrained by the startup speed of the app. Tests likewise took just as long.

                1. 5

                  So you wait minutes. Get a coffee.

                  There are times when you want to split services up: I have worked with systems that took 24 hours to come up to speed and collect enough data before they were ready to take traffic. Those were best isolated from the rest of the system.

                  Other candidates include giant lookup tables: you can run just two instances (for redundancy) instead of using hundreds of gigs of RAM in every process that queries them.

                  But a couple of minutes to start? Meh, I would gladly take that over the utter hell of making a large distributed system respond reliably with consistently low latency and low resource use.

                  1. 4

                    By my reading, Xorlev was talking about taking minutes to add new capacity to production.

                    If you have a single feature with bursty demand, that could cause a full service outage in a monolithic app.

                    Putting that feature on its own hardware would help in that case (although you can do that without switching to microservices, by running the monolith on some servers which are reserved for handling that bursty feature).

                    1. 5

                      In that case, I’d be curious to hear what kind of service is so bursty that minutes to spin things up is unacceptable, with no hot spares or slack capacity at all in the existing instances that can soak it up. And with few enough dependencies that bringing up spare capacity for those doesn’t also take a significant amount of time.

                      1. 2

                        When we were on the monolith, we were an early startup selling an API with highly variable traffic, and at the time the cost per request wasn’t uniform, which made it fairly difficult to forecast load. Now, we eventually solved that by running multiple clusters of the monolith, and later decomposed the monolith.

                        Most of the things we broke out I was really happy about. They didn’t have deep request chains; most were at most two levels deep (service -> auth, service -> db). Microservices were fairly messy for other products we had, and it took a while before we built up the tooling to really be happy with them.

                        I was also happy with how fast local tests ran. Services usually ran their suites in tens of seconds even with functional tests enabled. This led to a faster development cadence, but you always had to be careful to find dependent services and run their tests too.

                    2. 1

                      Alternatively, have the thing start coming up when the developers are first walking into the building or are on the ride there. It can be programmed to do that at a specific time or on receiving a signed message from their phone.

                  2. 4

                    People in our industry don’t understand what is needed to scale, and have no intuition for how many resources a user doing normal work will consume.

                    1. 1

                      It is the first option; most places only start exploring other options after that one breaks horribly.

                    2. 10

                      I see that most of the discussion is around technical arguments for or against microservices. I find that to be the wrong framing, as very few companies actually benefit from the kind of optimizations you can get with microservices.

                      Personally, I’d think that this quote is quite interesting:

                      In reality, they need to address the people-related problems via more effective communication. When a startup has multiple dev teams, it is a requirement that they stay coordinated and informed about everyone’s work.

                      In my experience working with various teams, remote or local, communication is the biggest issue by far, and having multiple dev teams work on the same codebase while still staying aware of what the other teams are doing is, honestly, very hard to achieve in practice. That’s my biggest gripe with monoliths, not the technical or financial aspects.

                      1. 1

                        Could you explain how microservices address this problem? Why not just split the monolith into packages with stable APIs other teams can depend on?

                        1. 1

                          Of course you could do that, but I don’t think that many companies will dedicate human resources to a package or library, while they would do that for a microservice. I believe that’s because a microservice still feels more complex than a package. So instead of a team of 10+ developers working on everything, you’ll have smaller teams working on smaller codebases, and smaller teams have better communication.

                          Also, with microservices you have hard boundaries, starting from the way the source code is stored (probably a separate repository) to the way things are deployed. Packages will have at best decent documentation, but with microservices you’re forced to describe how others should interact with them. You also get some interesting side-effects: it’s a lot easier to quickly assign bugs to the correct person because you now have different domain areas serviced by different teams. Login doesn’t work? Probably it’s the authentication service. Image uploading fails after the image service was deployed? Hmmm, wonder what could be the cause…

                          I honestly believe that microservices are a lot more useful for scaling teams, by splitting them into smaller projects, than for scaling code, especially when you have teams distributed across the world.

                          1. 1

                            Of course you could do that, but I don’t think that many companies will dedicate human resources to a package or library, while they would do that for a microservice. I believe that’s because a microservice still feels more complex than a package.

                            That might be the case and should be addressed at the leadership level.

                            So instead of a team of 10+ developers working on everything, you’ll have smaller teams working on smaller codebases, and smaller teams have better communication.

                            That’s not what I had in mind. I thought about a single repo with a structure like:

                            root/
                              app/
                              lib/
                                payments/
                                crm/
                              test/
                            

                            For example, payments could expose a PaymentService class publicly, which would be a facade for the whole library. This would be the only class you could import from other libraries or from the top-level app. You don’t need to introduce a distributed system to achieve this kind of separation.
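
                            To make the facade idea concrete, here is a minimal sketch of that boundary in Python packaging terms (the thread’s context is Rails, so take this purely as an illustration; the module layout and the charge method are invented for the example):

                                # lib/payments/__init__.py
                                # Only the facade is exported; internal modules stay package-private.
                                from .service import PaymentService

                                __all__ = ["PaymentService"]


                                # lib/payments/service.py
                                class PaymentService:
                                    """The one entry point other teams are allowed to depend on."""

                                    def charge(self, customer_id: str, amount_cents: int) -> None:
                                        # Delegates to internal modules (gateway, ledger, ...)
                                        # that callers never import directly.
                                        ...

                            Everything else under payments/ stays an implementation detail that callers never reach into.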

                            Also, with microservices you have hard boundaries, starting from the way the source code is stored (probably a separate repository) to the way things are deployed. Packages will have at best decent documentation, but with microservices you’re forced to describe how others should interact with them. You also get some interesting side-effects: it’s a lot easier to quickly assign bugs to the correct person because you now have different domain areas serviced by different teams. Login doesn’t work? Probably it’s the authentication service. Image uploading fails after the image service was deployed? Hmmm, wonder what could be the cause…

                            I think all these benefits could be achieved using the approach I outlined above without the extra complexity of separate repos, operating multiple services, more difficult end-to-end testing, network-related failure modes and a host of other problems.

                            That being said, I think certain languages make it more difficult to enforce this kind of separation (although it’s still possible). For example, in Ruby on Rails projects dependencies aren’t required explicitly and classes are autoloaded, so you can reference any class from any place in the code.

                      2. 5

                        The article was eye-opening for me. I was one of those who thought that a microservice architecture is necessary for scaling, without considering its use cases and pitfalls.

                        Eye-opening. Huge thanks.

                        1. 4

                          Best practices are context dependent.

                          I’d rephrase that as “best practices are workload and organization dependent”. Even within my team we have some applications that are best structured as a monolith and some that are best structured as microservices, because of workload. Our “monolith” is a pretty smart HTTP proxy focused on few moving parts, with a deployment model where we plan on using the resources of the entire machine and where minimizing IPC overhead is important.

                          Our “microservices” are all nsq consumers, because spinning up new consumers to read/process/write data is a good fit.
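
                          For readers who haven’t used NSQ: a consumer along these lines (using the pynsq client; the topic, channel and processing step are placeholders, not the actual services) is roughly all it takes, which is why spinning up more of them is cheap:

                              # Minimal NSQ consumer: scale the read/process/write work by
                              # running more copies of this process.
                              import nsq

                              def process(body: bytes) -> None:
                                  # Placeholder for the actual read/process/write logic.
                                  pass

                              def handler(message):
                                  process(message.body)
                                  return True  # True marks the message as finished

                              reader = nsq.Reader(
                                  message_handler=handler,
                                  lookupd_http_addresses=["http://127.0.0.1:4161"],
                                  topic="events",
                                  channel="worker",
                                  max_in_flight=9,
                              )
                              nsq.run()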

                          horizontal scalability

                          A monolithic CRUD app built on Cassandra is perfectly horizontally scalable.

                          1. 4

                            All the talk of scaling on this thread and no mention of scaling developers. What happens when you go from 1 to 10 to 100 to 1000 to N developers? Admittedly not a problem everyone needs to solve, but an axis of scaling people aren’t talking about.

                            1. 3

                              Am I the only one maintaining a stack where the services are nice and stateless and their points of contact with each other are so minimal that trying to put things under the same roof would only make things harder? What does it even mean to have a monolith when you have a website written in PHP and other components that need to be in very-very-not-PHP? Are we ignoring the (very common, in my experience) case where you’re running multiple products that started as different codebases from different teams (and likely different companies), but where it’s essential that they share certain services? You can either make a monolith of it, which is tantamount to a complete rewrite of at least one codebase, or you can split out the components that need to be shared and call them services.

                              1. 1

                                It’s not explicit, but I think the article is about the immediate jump to microservices. Your use case is a good example of weighing the cost of going monolith versus the cost of going separate services: you already have separate services, whether because of the current market ecosystem or because of language choice, and that’s fine. However, there are many teams at my company that choose to spin up six different services that are all written in the same language and written from scratch. It ends up being a massive headache because none of the services have documented APIs and all the code lives in different repos with different permissions. To me, there was no real good reason to choose microservices in this case, and I am paying the cost of it every time I have to work with their services.

                              2. 2

                                This is a really solid, well-thought-out article with a cogent argument that doesn’t resort to splashy clickbait trolling to get the job done. Kudos to the author.

                                I’ve worked with both microservices and monoliths and seen both used well and poorly. I think how you abstract your app is a decision just like any other, with trade-offs in either direction. IMO the author has a definite opinion, but clearly has the experience to back it up.

                                More like this please :)

                                1. 1

                                  I’d love to run a monolith if I could, but unfortunately I need to grant different permissions to different components (some are encryptors and others are decryptors).

                                  I still develop the system as a single deplorable artifact. Has made my life somewhat simpler.
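
                                  One way to read that (a sketch under assumptions, not the poster’s actual design; the --role flag and the two stubs are invented) is a single artifact whose deployed role decides which component runs and which keys and permissions it is granted:

                                      # One artifact, two roles; permissions (e.g. which keys are
                                      # mounted) are granted per role at deploy time.
                                      import argparse

                                      def run_encryptor() -> None:
                                          ...  # would hold only the encryption key

                                      def run_decryptor() -> None:
                                          ...  # would hold the decryption key

                                      def main() -> None:
                                          parser = argparse.ArgumentParser()
                                          parser.add_argument(
                                              "--role",
                                              choices=["encryptor", "decryptor"],
                                              required=True,
                                          )
                                          args = parser.parse_args()
                                          {
                                              "encryptor": run_encryptor,
                                              "decryptor": run_decryptor,
                                          }[args.role]()

                                      if __name__ == "__main__":
                                          main()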

                                  1. 4

                                    a single deplorable artifact

                                    I’d suggest a spelling change but TBH I like this one better.

                                    1. 1

                                      Hah. I’ll let it stand.