1. 42

  2. 7

    The thing that struck me most about this is that (at least as described), they didn’t try to build a microservice architecture. They assessed the problem with their development cycle time, and eventually ended up with microservices as part of the solution. I feel like that sort of analysis is often skipped in favor of “Microservice All the Things!”

    1. 3

      I think “microservices” might also be overstating the extent to which they split things. One or two deployables per team is, I think, the right way to do it - which means when there are <10 developers, only one or two deployables. Where I’ve seen microservices go wrong is when there were more deployables than developers.

    2. 6

      That was a great read. Thanks!

      In general, I don’t know what to think about microservices. I support the goal of modularisation but splitting an application into separately operated components seems like a burden. Would it be possible to achieve the goal with, for instance, facades that would form an internal API?

      1. 7

        There are a few benefits of microservices (or services in general) that wouldn’t be satisfied by that solution.

        Independent deployability: You need to patch a minor bug in the PDF processing system? We have to deploy 10 million lines of unrelated code.

        Independent scalability: We’re expecting heavier PDF processing than normal? Have to scale up every other subsystem, too.

        Stronger subsystem decoupling: Hmm. This Shipping status data isn’t really a concern for the Customer module, but it would be handy in a couple of cases. I’ll just add it there for now. (Soon the codebase is full of convenient at-the-time hacks.) [1]

        Failure isolation: The PDF processing subsystem is overloaded, so users can’t even login to our application.

        [1] I recognize that this is a cultural problem, but making bad practices more expensive seems like a decent way to mitigate it.

        1. 1

          Thanks, that’s really interesting. Let me braindump my thoughts. I’d like to learn in what circumstances microservices are an appropriate architectural style. For me this is a great learning opportunity and what I write below is thinking out loud.

          Independent deployability

          ID is not a benefit per se. It’s a cost. Consider two versions of the same system. The only difference between them is that one supports ID and the other doesn’t (every other aspect is identical, including deploy times). Is there any benefit of supporting ID if deploy(:all) and deploy(:part_a, :part_b, ...) are indistinguishable? No, since supporting ID comes with a cost and if deploy(:all) works equally well there’s no reason to pay this cost.

          The benefit in this case is faster or easier deployment. If your deployment process isn’t satisfactory for some reason microservices may be the answer. However, in order to fix the problem, the team should consider alternative solutions. It may be some form of caching build artefacts, parallelising the build, etc.

          Stronger subsystem decoupling

          Absolutely agree. I’m wondering whether there’s a cheaper way of doing that. Microservices introduce operational, integration and development overhead. Microservices can be replicated in-process by facades serving as public APIs. Can we split the system into components, implement a facade for each of them and enforce (ideally automatically) a requirement that the only legitimate way of communicating with another component is through the facade?
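          The in-process facade idea above can be sketched roughly like this (a minimal illustration in Python; the names `ShippingFacade`, `update_status`, etc. are invented for the example, not from the thread):

          ```python
          class _ShippingInternals:
              """Implementation detail; other components must not import this."""
              def __init__(self):
                  self._statuses = {}

              def record(self, order_id, status):
                  self._statuses[order_id] = status

              def lookup(self, order_id):
                  return self._statuses.get(order_id, "unknown")


          class ShippingFacade:
              """The only sanctioned entry point into the Shipping component."""
              def __init__(self):
                  self._impl = _ShippingInternals()

              def update_status(self, order_id, status):
                  self._impl.record(order_id, status)

              def status_of(self, order_id):
                  return self._impl.lookup(order_id)


          # Another component (e.g. Customer) depends only on the facade,
          # never on _ShippingInternals:
          shipping = ShippingFacade()
          shipping.update_status("order-42", "shipped")
          print(shipping.status_of("order-42"))  # shipped
          ```

          The "enforce automatically" part could, in Python at least, be handled by an import-boundary linter (tools such as import-linter exist for exactly this) that fails the build if any module outside the component imports the underscore-prefixed internals.
          
          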

          Independent scalability

          Heavy processing is usually done in the background by a worker, so it is possible to scale it independently. (I realise that the web app + worker are two services, but not microservices.) Additionally, an instance of a monolithic app or background worker that receives only a particular type of request (e.g. PDF processing) de facto becomes a specialised service for handling those tasks. The fact that it contains code for handling other types of tasks is irrelevant as long as its presence doesn’t degrade the service (e.g. through higher memory consumption caused by unused dependencies).

          I do recognise that this can be a valid reason for microservices. I’m not sure in what circumstances it applies.

          Failure isolation

          The aforementioned PDF processing would probably be handled by a background worker. I’m having trouble imagining anything handled during the request-response cycle that would require separate scaling (see my comment about de facto specialised services above). Could we make the example more specific?

          Thanks for the comment @joshuacc! Thinking about all that was a great exercise.

          1. 2

            Those are some good points. I’ll only respond to the ones that I think need a bit more fleshing out.

            The benefit in this case is faster or easier deployment. If your deployment process isn’t satisfactory for some reason microservices may be the answer. However, in order to fix the problem, the team should consider alternative solutions. It may be some form of caching build artefacts, parallelising the build, etc.

            There is one other aspect that I think you are missing, which is the actual independence. If everything must be deployed together in a “big bang,” problems in subsystem A may (and often do) prevent deployment of improvements to subsystem B. The larger and more complex the system, the more likely this is.

            Microservices aren’t the only solution to this problem. You might, for example, practice continuous deployment with the result that each deployment only includes the changes for a single feature. Nevertheless, this is a benefit of microservices as well.

            Failure isolation: The aforementioned PDF processing would probably be handled by a background worker.

            One real world example of failure isolation: Netflix’s personalized recommendations engine is a separate service. In the event that it is down or overloaded (as detected by Hystrix) the front-end server that clients connect to can either fallback to cached recommendation data or substitute “popular on Netflix” recommendations instead before passing the data along to the client.
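            The fallback pattern described above can be sketched in a few lines (a hedged toy example, not Netflix's actual Hystrix code; `flaky_service` and `popular_titles` are stand-ins):

            ```python
            def fetch_recommendations(user_id, service, fallback):
                """Try the personalised service; degrade to a generic list on failure."""
                try:
                    return service(user_id)
                except Exception:
                    # Service down or overloaded: serve the generic fallback
                    # instead of failing the whole page.
                    return fallback()

            def flaky_service(user_id):
                # Simulates an overloaded recommendations service.
                raise TimeoutError("recommendations service overloaded")

            def popular_titles():
                return ["Popular Title A", "Popular Title B"]

            print(fetch_recommendations("user-1", flaky_service, popular_titles))
            # ['Popular Title A', 'Popular Title B']
            ```

            The point is that the caller's contract ("give me something to show the user") survives the dependency's failure, which is the isolation being described.
            
            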

            If you’re interested in the real world benefits, I highly suggest looking up some of Adrian Cockcroft’s talks on the Netflix architecture, as well as Jez Humble and Sam Newman’s microservice/continuous delivery talks.

        2. 2

          Multiple services allow you to use multiple languages. I know the cool thing is to just use your favorite language and rewrite all the wheels in it, but some languages have killer libraries that you want to use.

          And so, while I’ll write everything I can in lua, I also use a couple Python processes to access existing libraries.

          In theory, I could drive Python from within Lua, but in practice it’s terrible. Marshaling things up over a socket is actually much easier than trying to get even a string into the other interpreter and call a function on it.
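          The "marshal over a socket" approach is roughly this (a toy sketch using JSON lines over a socket pair; in the commenter's setup one end would be Lua, but both ends are Python here to keep the example self-contained):

          ```python
          import json
          import socket

          client, server = socket.socketpair()

          # "Client" side: ask the other process to call a function by name.
          request = {"fn": "upper", "args": ["hello"]}
          client.sendall((json.dumps(request) + "\n").encode())

          # "Server" side: read one request line, dispatch, and reply.
          msg = json.loads(server.makefile().readline())
          handlers = {"upper": lambda s: s.upper()}
          reply = {"result": handlers[msg["fn"]](*msg["args"])}
          server.sendall((json.dumps(reply) + "\n").encode())

          # Client reads the reply back.
          result = json.loads(client.makefile().readline())["result"]
          print(result)  # HELLO
          client.close()
          server.close()
          ```

          Everything crossing the boundary is plain data (strings, numbers, lists), which is exactly why it sidesteps the pain of sharing values between two interpreters in one process.
          
          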

          1. 1

            Absolutely agree. I do this in one of my projects - Ruby on Rails + a Python service exposing SymPy + matplotlib. However, I have the impression that many teams adopting microservices are not in a situation like that.

        3. 4

          Impressive piece and provides a lot of insight on how to apply lean processes to software development.

          In some ways this is a debate the corporate IT world has had for ages, and it has been subject to the fad of the moment: do we keep a single huge application platform for the whole company (SAP or a mainframe), or do we create a (smaller) system per function/department?

          Unfortunately, there’s no good answer, as the decision is not a technical one but is related to the processes used to develop and manage the platform.