1. 14

  2. 7

    A phenomenon I’ve noticed and cannot account for is that people who are not experienced with or who even reject the notion of object-oriented design are nonetheless excited by the idea of microservices, with their independent, encapsulated state, discrete responsibilities, communication by message sending and so on.

    I’ve therefore found that “microservices versus monolith” is a great false dichotomy to get modularity concerns considered, even though there are attendant complexities.

    1. 6

      Everywhere I’ve worked where there has been any appetite for microservices, it’s been with the specific goal of making most or all of them stateless.

      1. 4

        Do you mean Java/C++-style OO or perhaps something like Erlang? I would hardly call C++/Java method calls ‘message sending’, and I think that is part of the problem. Shifting to services fixes that somewhat and moves things closer to Erlang’s process/actor model, where each service/actor manages its own state and has a known, documented protocol that can be networked and scaled.

        C++/Java OO is just making function calls look a bit different, with a whole lot of boilerplate and poor abstraction.
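
        To make the contrast concrete, here’s a rough, hypothetical sketch in plain Java (all names like PlainCounter, CounterActor, and Msg are made up for illustration, not from any real codebase): the first class is ordinary OO, where “sending a message” is really a synchronous call into shared state; the second is closer to the Erlang style, owning its state on one thread and reacting only to messages pulled from a mailbox.

        ```java
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class ActorSketch {

            // Ordinary Java OO: the "message" is a synchronous call into shared state.
            static class PlainCounter {
                private int count = 0;
                int increment() { return ++count; }
            }

            // Erlang-ish actor: state is owned by one thread; callers only enqueue messages.
            static class CounterActor implements Runnable {
                enum Msg { INCREMENT, STOP }

                private final BlockingQueue<Msg> mailbox = new LinkedBlockingQueue<>();
                private int count = 0;                  // touched only by the actor's own thread

                void send(Msg m) { mailbox.add(m); }    // asynchronous, fire-and-forget

                @Override
                public void run() {
                    try {
                        while (true) {
                            Msg m = mailbox.take();     // block until a message arrives
                            if (m == Msg.STOP) return;
                            System.out.println("count = " + ++count);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }

            public static void main(String[] args) throws InterruptedException {
                PlainCounter plain = new PlainCounter();
                plain.increment();                      // direct call, shared-memory semantics

                CounterActor actor = new CounterActor();
                Thread worker = new Thread(actor);
                worker.start();
                actor.send(CounterActor.Msg.INCREMENT); // the protocol is the set of messages it accepts
                actor.send(CounterActor.Msg.STOP);
                worker.join();
            }
        }
        ```

        In the service version, the mailbox becomes a network endpoint and the message enum becomes the documented wire protocol, which is roughly the jump being described here.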

        1. 1

          Thanks! The problem is that most popular OOP languages don’t have a way of nesting classes, so you can’t use them to enforce multiple levels of modularity. I’ve written about this before: https://www.hillelwayne.com/post/box-diagrams/

        2. 3

          Such back-and-forth movement in IT circles reminds me of an old Dilbert strip: https://thestandard.org.nz/wp-content/uploads/2011/06/dilbert-reorganisation.jpg

          1. 2

            It’s so true. It’s often hard to find people who appreciate that the “X is bad, let’s all move to Y” mentality requires a fair look at the good parts of X and the bad parts of Y to be an informed move. The grass is always greener.

            It’s better still to look at why these shifts occur. For instance, the decentralized-web people acknowledge that the web started that way but, as far as I’ve seen, haven’t looked into why centralization occurs.

            1. 1

              I think we know why centralisation occurs, and not just on the web.

              Marketing.

              1. 5

                Like hell. I used to centralize stuff because it was easier. Within one box, you have the ability to ignore the failure modes of a distributed system. Many jobs can also be done single-threaded to ignore the failings of concurrent operation. In one company and within a certain distance, your clusters can use low-latency links to make a limited version of a distributed system easier to handle. When distributed and/or truly P2P, you just turned a bunch of jobs that are straightforward using ’80s-’90s methods into things that are difficult in ways that might keep surprising us well past 2018.

                1. 2

                  Centralization prioritizes rare catastrophic failure over small, consistent failure. So it’s easier until, all of a sudden, it’s not.

                  1. 1

                    Yes, that’s the tradeoff of centralization. Taleb beats this into the ground in his writing.

                    I look at centralization/decentralization as a design parameter. The choice is contextual. The thing most people don’t seem to realize yet is that systems tend to centralize, so if you opt for decentralization you have to be aware of that and work against it somehow.

                    Bringing it back to software, the natural tendency for us in software is to keep adding to existing abstractions (classes, services, etc.) unless we apply some diligence and either break things down periodically or set up our practices so that we are making new things rather than hacking onto existing ones.

                    1. 1

                      That can and does happen with some, but it’s a lifecycle detail. Centralized services are easier to verify in the single, non-distributed case, with well-understood models of failure when using high-availability clusters and recovery strategies. Note that centralized architectures with fault tolerance are also usually modular, with failure isolation and swappable components. That’s why there are plenty of OpenVMS and NonStop systems out there with years of uptime, some having over a decade. There are battle-tested mechanisms for achieving that which aren’t hard for developers, versus systems not designed that way.

                      Microservices’ track record on both verifying their interactions and overall reliability hasn’t been as good, as far as I can see. They also tend to use components that weren’t individually designed for high reliability, with ad hoc, less-proven protocols for ensuring availability across those components. The correctness conditions are also more spread out. Although it’s not necessary, the dependencies for correctness also seem to change at a much faster pace, which increases the odds of systematic failure. Add in the decentralized/distributed nature and you might also be looking at a lot of heisenbugs. If anything, it might take a lot more skill to get a decade-plus of uptime using the kinds of stuff I see popular in microservices. Maybe.

                      So far, it’s easier to get high availability and security out of a centralized architecture with strongly consistent replication to nearby datacenters. That’s the status quo. I’m still waiting to see any data similarly showing developers easily getting high availability out of microservices, especially distributed ones on commodity components, which is what proponents claim they can do it with. I’ve seen some write-ups where people talk about doing microservices right, and lots of companies doing mission-critical stuff on them. There are probably already examples to be found of some that go years without downtime or severe performance issues. We’ll get more data coming in over time as the techniques mature. I’m interested in whatever people have.

                  2. 3

                    I think there’s a deeper reason. Decentralization just costs more. The concept is partially captured by the notion of ‘economies of scale.’ More to your point, marketing reduces discovery costs. It’s interesting to note that even without overt marketing, products, ideas (or any other good) tend to centralize due to ‘word of mouth.’

                    1. 1

                      Yes, I think the root reason is capitalism, and I don’t mean that in a boogeyman kind of way. Decentralized systems spread your risk if you have secrets in your business process. However, in non-capitalist systems like P2P, the costs get reduced by decentralization because the burden isn’t carried by one individual but rather spread across many small contributions.

                      1. 2

                        “Decentralized systems spread your risk if you have secrets in your business process.”

                        They really don’t. This is a myth that might be worth a detailed write-up in the future. They can reduce risk in general, but they add risk by default. That’s because they turn one thing into several things plus their interactions, which automatically increases the attack surface if we’re talking code or protocols. They often do a lot more with super-clever algorithms aiming for lots of desirable properties that get broken in new ways we’ve never seen before. Moreover, many of these run on the same commodity CPUs or OSes with the associated constant flow of 0-days. Depending on the definition of risk, they might not reduce risk at all, since three different organizations all using Windows or Linux can be compromised with one exploit in standard components. Then the attackers just reach directly for those secrets.

                        So, the truth is that decentralized systems can reduce risk, or increase the difficulty for attackers, if the attackers have to break most of the components to defeat the system and there are no shortcuts. That’s not often true, given pentest results. If anything, it just takes extra time and money, which the black hats going after business secrets have plenty of. So, in those cases you get an increased chance of failure from decentralization with no meaningful increase in security. The benefits of decentralization should be assessed on a case-by-case basis, with the assessor looking out for these common problems.

                        1. 1

                          It’s more basic than capitalism: it’s popularity. Note that the web is decentralized, but players like FB and Twitter became large hubs, so we see it as centralized. We could claim that this is an artifact of capitalism, but look at something like the original Twitter without their nudges: some users become more popular than others, and that leads them to get more followers. It turns Pareto.

                          The way that economics factors into it is reduced discovery costs. For something like music, the urge is “I want to hear something good.” The stuff that other people think is good floods the system and makes it easier for you to find it. So I’m not saying this is right or wrong, but rather that it is just something that happens. Favor your local independent coffee shop, but as long as people can make choices, something like Starbucks will rise as “the choice everyone knows.” There’s a centralizing tendency in systems.

                          1. 1

                            That’s not true and we’ve seen decentralized systems be wildly popular. Mastodon has over a million users right now.

                            1. 2

                              I’m willing to bet that the distribution of users over Mastodon instances resembles a power law. It certainly isn’t flat. That’s my point: popularity, whether money is involved or not, leads to hub-ishness, and hubs are central to the nodes around them.
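
                              For what it’s worth, you don’t even need money or marketing to get that shape. Here’s a toy preferential-attachment simulation (made-up numbers, not real Mastodon data; the class name InstanceSkew, the 1% founding rate, and the fixed seed are just assumptions for illustration), where each new user either founds an instance or joins one in proportion to its current size:

                              ```java
                              import java.util.ArrayList;
                              import java.util.Collections;
                              import java.util.List;
                              import java.util.Random;

                              public class InstanceSkew {
                                  public static void main(String[] args) {
                                      Random rng = new Random(42);
                                      List<Integer> instanceOfUser = new ArrayList<>(); // user i -> index of their instance
                                      List<Integer> sizes = new ArrayList<>();          // instance j -> number of users

                                      // Seed: one instance with one user.
                                      sizes.add(1);
                                      instanceOfUser.add(0);

                                      for (int i = 0; i < 1_000_000; i++) {
                                          int instance;
                                          if (rng.nextDouble() < 0.01) {
                                              // ~1% of new users found a brand-new instance.
                                              instance = sizes.size();
                                              sizes.add(0);
                                          } else {
                                              // Otherwise copy a uniformly random existing user's choice,
                                              // which picks an instance with probability proportional to its size.
                                              instance = instanceOfUser.get(rng.nextInt(instanceOfUser.size()));
                                          }
                                          sizes.set(instance, sizes.get(instance) + 1);
                                          instanceOfUser.add(instance);
                                      }

                                      sizes.sort(Collections.reverseOrder());
                                      long users = instanceOfUser.size();
                                      long topFive = sizes.stream().limit(5).mapToLong(Integer::longValue).sum();
                                      System.out.println("instances: " + sizes.size());
                                      System.out.println("largest five: " + sizes.subList(0, 5));
                                      System.out.printf("share of users on the 5 largest: %.1f%%%n", 100.0 * topFive / users);
                                  }
                              }
                              ```

                              Run it and a handful of instances end up holding a disproportionate share of the users even though every new user followed the same simple rule; that’s the hub-ishness appearing without any marketing budget.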

                              1. 1

                                Money accelerates that hubishness; without an exchange of money, large instances must close their doors to new users at some point.

                                1. 1

                                  I agree, but money is not the only way that value is accumulated and transferred. The basic thing that I’m talking about underlies both physical and social processes. The hierarchical structure of the veins on a leaf is a cost-reduction maneuver too. It’s the same as the appearance of hub airports, even though no money is involved. I discuss these ideas here.

                2. 1

                  I find the diagram comparing a single monolith to multiple services misleading. It should instead simply be drawn with each labeled microservice block inside the monolith’s outline. As shown, the incorrect implication is that clear modular boundaries are not possible within a monolith. I find it to be propaganda meant to convince the viewer that monoliths are just one big mess, in contrast to the lovely, well-defined boundaries of microservices.