1. 1

    Well, this taught me one thing I didn’t know (that ‘O’ really can literally be read as ‘on the order of’, per https://en.wikipedia.org/wiki/Big_O_notation#History_.28Bachmann.E2.80.93Landau.2C_Hardy.2C_and_Vinogradov_notations.29). But it’s unfortunate that the post abandons some precision and accuracy, even if the intention is to introduce big-O to newcomers.

    For example, big-O is an upper-bound, not a tight bound. Also, the reason we throw away the lower-order terms is because the higher-order terms dominate as n gets bigger. For small n, the lower-order terms can dominate.
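    To make that last point concrete, here’s a toy sketch (my own, not from any article) comparing the terms of f(n) = n² + 1000·n. The constant 1000 is just an illustrative choice to make the crossover visible:

```python
# Toy illustration: in f(n) = n**2 + 1000*n, big-O keeps only n**2,
# but for small n the "discarded" linear term is actually larger.
def terms(n):
    """Return the (quadratic, linear) terms of f(n) = n**2 + 1000*n."""
    return n**2, 1000 * n

# For small n, the lower-order term dominates:
quad, lin = terms(10)
assert lin > quad        # 10_000 > 100

# ...but the higher-order term wins as n grows:
quad, lin = terms(10_000)
assert quad > lin        # 100_000_000 > 10_000_000
```

    So O(n²) describes the eventual growth, not which implementation is faster at every input size.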

    Seems like these issues come around every time there’s a new ‘big-O for beginners’ article - which is often - so maybe we have to treat this like grammatical usage and figure that the battle is lost. I checked, and I wrote a (much longer) response like this back in 2008 (https://stackoverflow.com/questions/107165/big-o-for-eight-year-olds/107883#107883) so I guess I’m just a consistent curmudgeon :)

    1. 1

      For my money, the author does not actually understand the CAP theorem: anything that prevents access to the data is a partition of the system. Cases of partitions include at least operator error, bugs or power issues in the replicas, problems that occur only once, and even many user-side issues (e.g. hardware failures).

      Also, the fact that developers didn’t handle outage exceptions in their code simply means that they did not need either availability or consistency in the first place.

      Nevertheless, it might be interesting as a highly available store.

      1. 25

        The author is responsible for the CAP theorem.

        http://www.cs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf

        1. 7

          Which is an opportunity for wry reflection.

          1. 0

            Lol! :-)

            Still, I’d argue the theorem is about the data, not about the nodes storing/producing/consuming/distributing it.

            It is the data system that can be consistent, available, or partition tolerant, and from this perspective an operator error is no different from a network failure: in both cases the client node cannot access the data it needs (thus the system is partitioned), losing availability.

            1. 7

              I feel like the author was quite clear that they were describing the difference between the theorem’s formal requirements and what’s important in practice. They acknowledge that partitions can happen in this strict sense:

              “The purist answer is ‘no’ because partitions can happen and in fact have happened at Google, and during (some) partitions, Spanner chooses C and forfeits A. It is technically a CP system. We explore the impact of partitions below.”

              1. 9

                I think the rich irony comes from the fact that Brewer and many of his distsys practitioner disciples have used the theorem to, to put it indelicately, poop on practitioners who have minimized the impact of P. And now comes Brewer himself, setting aside robe and mitre!

                1. 2

                  I can’t disagree with that.

                2. 0

                  Yes, I read that, but what sounded to me like a misunderstanding of the theorem was that only network-related issues were considered “partitions”.

                  Instead, a partition is simply the condition of a chunk of shared data that cannot be obtained by a part of the system (even a single node) that has requested it. Whether the whole system is down because of a bug or a whole set of routers has been misconfigured doesn’t matter: there is a partition in the system from the theorem’s point of view.

                  Well, yes, I’m probably a purist when it comes to technical jargon…

                  But, actually, I should have read the author’s name before posting. Lesson learned: Google can hire them all! :-)

                  1. 0

                    That’s not the right definition. A partition is the loss of one or more network messages. It’s not a data-centric condition at all.

                    1. 1

                      To support this (because I see a -1 incorrect vote), here’s section 2.3 from Gilbert and Lynch’s paper:

                      “In order to model partition tolerance, the network will be allowed to lose arbitrarily many messages sent from one node to another. When a network is partitioned, all messages sent from nodes in one component of the partition to nodes in another component are lost.”
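                      That model is mechanical enough to sketch in a few lines. This is my own toy rendering of it (the function and node names are invented for illustration): a partition splits the nodes into components, and any message crossing a component boundary is simply lost.

```python
# Toy model of the Gilbert & Lynch partition semantics: messages between
# components of a partition are lost; messages within a component arrive.
def deliver(partition, sender, receiver, message):
    """Return the message if sender and receiver share a component, else None."""
    for component in partition:
        if sender in component and receiver in component:
            return message
    return None  # message crossed the partition boundary: lost


partition = [{"a", "b"}, {"c"}]
assert deliver(partition, "a", "b", "write-ack") == "write-ack"  # same component
assert deliver(partition, "a", "c", "write-ack") is None         # lost
```

                      Note that the condition is purely about message delivery between nodes; nothing in the model refers to the data itself.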