1. 34

  2. 18

    Here we have an unbiased report by a top-notch consultant discussing in depth the technical trade-offs of a sophisticated database product, developed with millions of dollars of investment, right after the high-visibility RethinkDB failure.

    And the Hacker News comment section mostly consists of: “Eww, I don’t like the name, I’m not gonna read about it.” No wonder RethinkDB had trouble selling itself.

    1. 5

      On the other hand, they’re not wrong. Names and perception matter, in addition to technical excellence.

      1. 5

        It is an opinion, so you can’t say they are right or wrong. The fact that many (or few but vocal) people are complaining might warrant a change just to push them on to a more valid complaint (the next one will be performance compared to MongoDB, ignoring all the other trade-offs). It certainly already warrants some investigation into how many people are in either camp.

        1. 2

          Oh, as a side note: Donald Trump sort of proved that having people talk about you helps, no matter what the content of the discussion is. So maybe it’s fine.

        2. 4

          Exactly. It was ridiculous. People even upped the game by submitting articles about how horrifying or beautiful the roaches were. This on a thread about a cutting-edge database, with so much neat technology, trying to compete with Google’s Spanner/F1.

          You’d think more people would be talking about the tech.

        3. 8

          Alright, back to the tech. I think it would be interesting to run their protocols through tools such as TLA+, Verdi, or IronFleet. TLA+ at the least, as it’s the easiest to learn and use. An incorrect protocol won’t give you resilience no matter how good the implementation is. Not saying theirs are incorrect, but it’s good to check (see the sketch after the links below).

          https://en.wikipedia.org/wiki/TLA%2B

          http://verdi.uwplse.org/

          https://lobste.rs/s/sqxnkt/ironfleet_proving_practical

          Note: Forgot Verdi’s name at first. Accidentally found and submitted IronFleet while trying to find it. Got more reading to do tonight. :)
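
          To make that concrete, below is a minimal sketch of what a model checker like TLC (the tool behind TLA+) does: exhaustively enumerate every reachable state of a protocol and test an invariant in each one. The deliberately broken lock protocol is my own toy example, not anything from CockroachDB; a real effort would specify their actual replication and consensus protocols.

          ```python
          # Toy explicit-state model checker, in the spirit of what TLC does
          # for TLA+ specs. The "protocol" is a deliberately broken mutex:
          # checking the lock and taking it are two separate steps, so two
          # processes can race. Exhaustive search finds the bad interleaving.
          START, WANT, CRITICAL = "start", "want", "critical"

          def replaced(pcs, i, value):
              out = list(pcs)
              out[i] = value
              return tuple(out)

          def step(pcs, lock, i):
              """Yield successor (pcs, lock) states for one step of process i."""
              pc = pcs[i]
              if pc == START and not lock:        # observe the lock as free
                  yield replaced(pcs, i, WANT), lock
              elif pc == WANT:                    # acquire unconditionally: the race
                  yield replaced(pcs, i, CRITICAL), True
              elif pc == CRITICAL:                # release and go around again
                  yield replaced(pcs, i, START), False

          def check():
              seen, frontier = set(), [((START, START), False)]
              while frontier:
                  state = frontier.pop()
                  if state in seen:
                      continue
                  seen.add(state)
                  pcs, lock = state
                  if pcs == (CRITICAL, CRITICAL): # mutual-exclusion invariant
                      return state                # violated; report the state
                  for i in (0, 1):
                      frontier.extend(step(pcs, lock, i))
              return None

          print("violation:", check())  # finds both processes in the critical section
          ```

          The point is that exhaustive state exploration finds interleavings that implementation-level testing can easily miss, which is exactly the class of bug these tools exist for.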

          1. 3

            I’ve been using CockroachDB as the metadata store for a distributed file system. The possibility of seeing out-of-order inserts when reading from different nodes was especially interesting, but is still acceptable for my use case (rough sketch of the pattern below).
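
            For what it’s worth, here’s what that read pattern looks like, assuming a cluster speaking the PostgreSQL wire protocol (which CockroachDB does) on the default port 26257, via psycopg2. The node hostnames and the metadata table are hypothetical placeholders:

            ```python
            # Sketch: issue the same read against two different gateway nodes.
            # Hostnames, database name, and table are made-up placeholders.
            import psycopg2

            NODES = ["node1.example.com", "node2.example.com"]

            def latest_entries(host):
                conn = psycopg2.connect(host=host, port=26257,
                                        dbname="fsmeta", user="root")
                try:
                    with conn.cursor() as cur:
                        cur.execute("SELECT id, path FROM metadata "
                                    "ORDER BY id DESC LIMIT 5")
                        return cur.fetchall()
                finally:
                    conn.close()

            # Recent inserts may surface in a different order depending on which
            # node serves the read; for file-system metadata that can be tolerable.
            for host in NODES:
                print(host, latest_entries(host))
            ```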

            1. 1

              That’s interesting, since it’s similar to an idea I had for FoundationDB. It had a size limit on values, so the idea was to put anything big in a clustered filesystem that the DB held a link to, then sync them up somehow (something like the sketch below). Have you ever seen a write-up on anything like that? I figure there could be unforeseen issues.

              Note: Spanner builds on GFS, too. So I know it can be done.
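
              A minimal sketch of the idea, with a hypothetical clustered-FS mount point and a plain dict standing in for the key-value store; a real version would commit the pointer record to FoundationDB and needs a crash-consistency story (write and fsync the file first, then commit the link). Content-addressing by hash is one way to keep the two sides in sync:

              ```python
              # Big blobs live in the shared filesystem; the DB stores only a
              # (path, checksum) pointer. Mount point and dict are stand-ins.
              import hashlib, os

              CLUSTER_FS = "/mnt/clusterfs"   # hypothetical clustered-FS mount
              kv_store = {}                   # stand-in for the real database

              def put_blob(key, data):
                  digest = hashlib.sha256(data).hexdigest()
                  path = os.path.join(CLUSTER_FS, digest[:2], digest)
                  os.makedirs(os.path.dirname(path), exist_ok=True)
                  with open(path, "wb") as f:     # 1. write the blob first
                      f.write(data)
                      f.flush()
                      os.fsync(f.fileno())
                  kv_store[key] = (path, digest)  # 2. then commit the pointer

              def get_blob(key):
                  path, digest = kv_store[key]
                  with open(path, "rb") as f:
                      data = f.read()
                  # detect a link that drifted out of sync with the file
                  assert hashlib.sha256(data).hexdigest() == digest
                  return data
              ```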

              1. 1

                Are you sure Spanner builds on GFS and not Colossus?

                1. 1

                  I could be getting mixed up, since it’s been a while. Let me look. OK: it builds on Colossus, which is “the successor to the Google File System.” Good catch.

                  Part of the reason for my memory slip may be that there was no paper on Colossus when I looked at Spanner. I likely assumed it was an updated, distributed filesystem based on GFS. A Google result from 2012 describes it as a clustered file system with sharded metadata, client-driven replication, and Reed-Solomon coding.

                  So, my intuition was close enough. They’re still running Spanner on a clustered filesystem.

            2. 2

              +1 for strong Aliens reference game