1. 45
  1. 17

    Another disadvantage of services is that you can’t do a database transaction that includes operations from more than one service.

    Unrelated, I worked at a place a few years ago that built microservices because they wanted the various services to work concurrently, and they were using Ruby. Unfortunately, trying to boot all the services in the right order, ensuring a service didn’t come up before the ones it depended on, was tough. It also forced me to switch from a spinning disk to a solid-state one, simply because of the number of files being loaded at once.

    That experience was part of what motivated me to learn Elixir. Years later, I recently worked on an Elixir umbrella app. It was like microservices, but running the whole thing was easy. In production, they deployed one app to one server, one to another, and let both of them depend on a third app, which was deployed to both. All the apps that needed a database shared the same one.

    1. 24

      Unfortunately, trying to boot all the services in the right order, ensuring a service didn’t come up before the ones it depended on, was tough.

      Don’t do that. You need to deal with network issues anyway, so bring them all up and let your error handling take care of it.
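      In Go (mentioned downthread), that error handling can be as simple as each service retrying its dependencies with backoff instead of orchestrating a global boot order. A minimal sketch; the attempt count and timeouts are made-up values, and the stand-in listener exists only to make the example self-contained:

      ```go
      // Sketch: instead of sequencing service startup, each service retries
      // its dependencies with backoff until they become reachable.
      package main

      import (
      	"fmt"
      	"net"
      	"time"
      )

      // waitFor dials addr until it succeeds or attempts run out.
      func waitFor(addr string, attempts int) error {
      	backoff := 100 * time.Millisecond
      	for i := 0; i < attempts; i++ {
      		conn, err := net.DialTimeout("tcp", addr, time.Second)
      		if err == nil {
      			conn.Close()
      			return nil
      		}
      		time.Sleep(backoff)
      		backoff *= 2 // exponential backoff between retries
      	}
      	return fmt.Errorf("%s not reachable after %d attempts", addr, attempts)
      }

      func main() {
      	// Start a stand-in "dependency" so the example is self-contained.
      	ln, _ := net.Listen("tcp", "127.0.0.1:0")
      	defer ln.Close()

      	if err := waitFor(ln.Addr().String(), 5); err != nil {
      		fmt.Println("giving up:", err)
      		return
      	}
      	fmt.Println("dependency up, starting service")
      }
      ```

      With that in place, start order stops mattering: a service that comes up first just spins until its dependency answers.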

      1. 8

        trying to boot all the services in the right order, ensuring a service didn’t come up before the ones it depended on, was tough.

        I don’t know your use case, but that seems like a complex thing to implement, and it probably shouldn’t be needed.

        1. 2

          systemd has service dependencies and sd_notify built-in. Few people use it but it’s totally possible to start a web of services. Of course this doesn’t help with off-machine resources.
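          For illustration, a pair of hypothetical units (the service names and paths are invented) showing how After=, Requires= and Type=notify combine to start services in dependency order:

          ```ini
          # db.service -- signals readiness itself
          [Unit]
          Description=Database

          [Service]
          Type=notify          ; the process calls sd_notify(READY=1) once it accepts connections
          ExecStart=/usr/local/bin/db

          # api.service -- a separate unit file, shown here for contrast
          [Unit]
          Description=API
          After=db.service     ; ordering: start only once db has reported ready
          Requires=db.service  ; dependency: if db fails to start, api is not started

          [Service]
          ExecStart=/usr/local/bin/api
          ```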


        2. 6

          Another disadvantage of services is that you can’t do a database transaction that includes operations from more than one service.

          You absolutely can; I do all the time. It may require fundamentally rethinking what we mean by “database” though.

          Most people think a database is somewhere between “a black box that stores data” and “a black box of hard algorithms for data query”, or something like that. They conveniently ignore that the “database” is itself a service outside of their application, and that they “can’t transact” across that service boundary either. That’s part of why database migrations and schema changes are so hard: version 2 and version 3 of your application are potentially separate services – even in a monolith – and because you “can’t transact” between versions of your application (what does that even mean!?), you’re forced to put logic into your application to deal with both the old schema and the new schema, and often even to do on-the-fly upgrades.

          Or you just have downtime.

          Or you have a whole separate system, and your “transaction switch” is on some kind of network load balancer. Blue/green or whatever you want to call it.

          However, there’s a very different way: if your application is the database (and this is easier than you probably think), your CRUD operations simply need to log their intentions, and then something reads the log and materialises the results into views you can use.

          One way to do this is to go heavy on the stored procedures. I like this approach, but SQL is a really terrible application language, and many programmers are very bad at SQL. Many databases don’t have an audit table for the stored procedures – there isn’t very good version control for them, so that’s another reason people don’t like it. Maybe the tooling could be improved though.

          Most people usually go the other way.

          In Erlang/Elixir, I may (for persistence) use a disk_log to write out arbitrary terms, and have a subscriber pick them up – that is also a gen_server that you can query. At startup, I can read the logs. If the logs are big, my gen_server knows enough to checkpoint – just write out State to another file, and include the offset in the disk log.

          In Go and Smalltalk you can do something similar.
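          A minimal Go sketch of the same log-and-materialise pattern, assuming a JSON-lines log file and a toy key/value view (the record shape and file name are invented; checkpointing is omitted):

          ```go
          // Writes append intentions to a log; state is materialised by
          // replaying the log at startup.
          package main

          import (
          	"bufio"
          	"encoding/json"
          	"fmt"
          	"os"
          )

          type op struct {
          	Action string `json:"action"` // "set" or "delete"
          	Key    string `json:"key"`
          	Value  string `json:"value,omitempty"`
          }

          // logOp appends one intention to the log as a JSON line.
          func logOp(f *os.File, o op) error {
          	b, err := json.Marshal(o)
          	if err != nil {
          		return err
          	}
          	_, err = f.Write(append(b, '\n'))
          	return err
          }

          // replay materialises the log into an in-memory view.
          func replay(path string) (map[string]string, error) {
          	view := map[string]string{}
          	f, err := os.Open(path)
          	if err != nil {
          		return nil, err
          	}
          	defer f.Close()
          	sc := bufio.NewScanner(f)
          	for sc.Scan() {
          		var o op
          		if err := json.Unmarshal(sc.Bytes(), &o); err != nil {
          			return nil, err
          		}
          		switch o.Action {
          		case "set":
          			view[o.Key] = o.Value
          		case "delete":
          			delete(view, o.Key)
          		}
          	}
          	return view, sc.Err()
          }

          func main() {
          	path := "ops.log"
          	f, _ := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
          	logOp(f, op{Action: "set", Key: "a", Value: "1"})
          	logOp(f, op{Action: "set", Key: "b", Value: "2"})
          	logOp(f, op{Action: "delete", Key: "a"})
          	f.Close()

          	view, _ := replay(path)
          	fmt.Println(view["b"], len(view)) // 2 1
          	os.Remove(path)
          }
          ```

          The disk_log checkpoint trick described above maps directly onto this: periodically write the view out to a second file along with the log offset, and on restart replay only the tail.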

          In C, or lots of other languages that don’t have processes, you can still get this functionality with a little thought, because the operating system has processes and fifos/socketpairs for you to use! It feels very similar to writing microservices, which sucks for different reasons (notably the lack of good IPC – e.g. PHP has serialize), but it’s not the same as microservices: just mutually cooperating processes. Qmail is a great example of this architecture, though it is probably a bit more complex than it would need to be today, since the whole world is Linux and iOS now.

          In q I don’t even need to do that. I can just use the -l and -r options which give me logging/subscription built-in to the runtime. It’s also much more enjoyable because the language doesn’t suck as bad as SQL, and we have great IPC.

          Putting your data responsibilities in your application isn’t popular though. People actually think Redis or PostgreSQL are fast, and many programmers doubt they could make something as fast (let alone faster). Done well, though, this approach tends to be around 1000x faster than using “a database” (and even done poorly or naïvely, maybe 10x faster), gets you 100% uptime even in the face of schema changes, and gives you all of the benefits of a distributed/multiservice application with none of the downsides.

          This is, as I see it, a serious barrier: Programmers lack the confidence to build things outside of their specialisation (whether they call it “back end” or “front end”) and even terms like “full stack” seem to be (in their normal usage of the term) limited to “code” – very few “full stack” developers seriously consider rolling a new database for every application. And I think they should.

          1. 1

            What is q ?

          2. 3

            Another disadvantage of services is that you can’t do a database transaction that includes operations from more than one service.

            You can using distributed transactions, but that’s a whole other nightmare to contend with.

            1. 2

              Unfortunately, trying to boot all the services in the right order, ensuring a service didn’t come up before the ones it depended on, was tough.

              Topological sort?
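              For what it’s worth, computing a boot order from the dependency graph is a few dozen lines in Go (Kahn’s algorithm; the service names are invented):

              ```go
              // Kahn's algorithm: compute a start order from "depends on" edges,
              // detecting cycles along the way.
              package main

              import "fmt"

              // bootOrder takes a map from each service to the services it depends on.
              func bootOrder(deps map[string][]string) ([]string, error) {
              	indegree := map[string]int{}
              	dependents := map[string][]string{}
              	for svc, ds := range deps {
              		if _, ok := indegree[svc]; !ok {
              			indegree[svc] = 0
              		}
              		for _, d := range ds {
              			if _, ok := indegree[d]; !ok {
              				indegree[d] = 0
              			}
              			indegree[svc]++
              			dependents[d] = append(dependents[d], svc)
              		}
              	}
              	var queue, order []string
              	for svc, n := range indegree {
              		if n == 0 {
              			queue = append(queue, svc) // no dependencies: can start now
              		}
              	}
              	for len(queue) > 0 {
              		svc := queue[0]
              		queue = queue[1:]
              		order = append(order, svc)
              		for _, dep := range dependents[svc] {
              			indegree[dep]--
              			if indegree[dep] == 0 {
              				queue = append(queue, dep)
              			}
              		}
              	}
              	if len(order) != len(indegree) {
              		return nil, fmt.Errorf("dependency cycle detected")
              	}
              	return order, nil
              }

              func main() {
              	order, err := bootOrder(map[string][]string{
              		"api":    {"db", "cache"},
              		"worker": {"db"},
              		"db":     {},
              		"cache":  {},
              	})
              	fmt.Println(order, err)
              }
              ```

              That handles the ordering; it doesn’t help with services that come up and then fall over, which is where the retry-and-error-handle approach above still wins.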

            2. 6

              We’ve just moved from a semi-micro-services architecture back to a monolith. Primarily we did it because we now need (for reasons I won’t get into) all database changes for a single business operation to be within a single transaction, and this was the easiest way.

              But, there have been a ton of side benefits so far:

              • all the “services” are versioned together (previously each was in its own repo)
              • less stuff to set up locally and to deploy with CI/CD
              • much easier to debug, refactor or add new cross-service functionality
              • much less boilerplate code around calling services
              • enforcement of no circular dependencies between services (a couple had snuck in)

              It is going to take more care to keep the logical services from getting polluted, but otherwise it has been mostly a positive move.

              1. 4

                I don’t have a really strong preference for either of these, and I think it depends on the team. Setting them up can be confusing for people new to it, but it gets easier as you do it (like anything else).

                Microservices aren’t going to magically solve all of your issues. You still need well thought out plans and the horses to do it.

                1. 4

                  My experience is that immutability as the default affords many of the same benefits as microservices without the additional complexity of service orchestration. Key motivation for having microservices is to reduce coupling by enforcing isolation. The most reliable way to enforce isolation in imperative languages is to split the application into separate processes communicating over service boundaries. However, if your data is immutable you already get isolation for free because you’re passing data around by value, so there’s no need to introduce architectural complexity to enforce low coupling between the components.
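                  A tiny Go illustration of that point: a struct passed by value is copied at the call boundary, so the callee can’t reach back and mutate the caller’s state (the Order type is invented for the example):

                  ```go
                  // Pass-by-value gives isolation without a service boundary:
                  // the callee works on a copy, not the caller's data.
                  package main

                  import "fmt"

                  type Order struct {
                  	ID    int
                  	Total float64
                  }

                  // applyDiscount receives a copy of the struct; mutations stay local.
                  func applyDiscount(o Order) Order {
                  	o.Total *= 0.9
                  	return o
                  }

                  func main() {
                  	original := Order{ID: 1, Total: 100}
                  	discounted := applyDiscount(original)
                  	fmt.Println(original.Total, discounted.Total) // 100 90
                  }
                  ```

                  A microservice enforces the same property with a network hop and serialisation; immutable or value-typed data gets it in-process.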

                  1. 4

                    All microservices are is a rebranding of Object Orientation, which is based around objects sending messages, not unlike, say, the Internet.

                    If you don’t have a program that benefits from entirely independent pieces that communicate or can’t be well decomposed from a monolith into such, it doesn’t make good sense to use the model.

                    Part of this is probably underestimating just how fast a modern machine is; perhaps people are mistakenly more inclined to use “microservices” than to rewrite part of the program more efficiently. I’d be very skeptical of moving a program off a single machine, considering you could optimize it for quite a while before that actually becomes reasonable – unless, of course, you have a problem that very clearly benefits from the Object Orientation model.

                    1. 2

                      All microservices are is a rebranding of Object Orientation

                      Sadly, you can’t scale your different classes independently, or write them in different languages…

                      To be clear, I’m more in favor of having a monolith that is so boring to operate that it’s easy to extract the few parts that make sense to extract. For example, GitLab is a giant Rails app, and they decided to extract everything Git related (now a golang service called gitaly, iirc); that makes a lot of sense to me.

                    2. 2

                      “It’s hard for juniors” is a strange angle in my opinion. There are a lot of things that are difficult at that level. That doesn’t mean we should throw them out.

                      Some projects reap the benefits of a microservice architecture. Some don’t. We should all be able to agree that (hopefully) no one is forcing us to use microservices. Use what makes sense. Being sensational about the “let’s all use microservices” sensationalism gets us nowhere.

                      1. 2

                        Let’s face it. The reason people aren’t bragging about it any more is because we’ve been doing it for 8 years and people are realizing the benefits. Not because nobody wants it any more.

                        Thankfully, too.

                        1. 1

                          For me this discussion is nonsensical. Every microservice has a tiny monolith inside.

                          Why not have the best of both worlds?

                          1. 1

                            That highly depends on your definition of monolith, though.

                            Sure, in one sense even a tiny webapp with one endpoint of /foo is a monolith, because it does nothing else. But if the implementation is 10 LOC then basically nothing in the usual usage of “monolith” holds. You could call this reductio ad absurdum if you use /hello_world => print "Hello world!" as the /foo – but microservices are actually supposed to be built to be very small, so they should(!) exhibit none of the downsides of a classic monolith.

                          2. 1

                            Work in gov’t and you’ll find plenty.

                            1. 1

                              The best argument for microservices at most companies is that they allow teams to work independently and reframe collaboration as meeting interfaces: a technical solution to a social problem.

                              I acknowledge there are issues with coupling failure domains into a single process; fair enough. That’s usually not discussed or focused on.

                              I favor monorepo/monolith designs; when a failure domain or specific cohesion/purpose demands it, part out the code into a separate service, but don’t do the work until then. Microservices generate enormous overhead in other technological support requirements.