  2. 17

    Oh boy, this article is so good I want to upvote it lots of times. I love this quote:

    Arranging memory like javascript does would be like writing a shopping list. But instead of “Cheese, Milk, Bread”, your list is actually a scavenger hunt: “Under the couch”, “On top of the fridge”, and so on. Under the couch is a little note mentioning you need toothpaste. Needless to say, this makes doing the grocery shopping a lot of work.

    (Of course this goes for lots of languages, not just JS, and you can easily end up with this even in C/++ if you overuse heap allocations.)

    1. 12

      The comment thread on HN is quite good: Joseph answers a bunch of questions, and other CRDT researchers chime in about RON/Chronofold https://arxiv.org/abs/2002.09511 (a new result). Sorry for the HN link, but the discussion is at least as good as the article: https://news.ycombinator.com/item?id=28017204

      1. 7

        I think one lesson is “Don’t trust any performance comparison that doesn’t accept pull requests.”

        1. 9

          “Don’t trust a statistic which you haven’t manipulated yourself” is a (paraphrased) quote attributed to Churchill (decidedly not German, as I thought before).

          Doing benchmarks correctly is so hard that I would default to “suspicious until proven otherwise” when looking at literally any result.

        2. 5

          This article talks a bit about how representing trees as actual trees is inefficient (lots of pointers). It reminds me of a good talk about representing ASTs as arrays, which makes it possible to parallelise transformations on them (think SIMD or even GPU).

          “Programming Obesity: A Code Health Epidemic” https://www.youtube.com/watch?v=UDqx1afGtQc

          In the talk, Aaron Hsu presents a Scheme compiler written in 17 (!) lines of APL that compiles code on the GPU.

          1. 4

            This marvelous article is full of extremely bright gems. I think this one is among my favorites:

            This sounds complicated - how do you figure out where the new item should go? But it’s complicated in the same way math is complicated. It’s hard to understand, but once you understand it, you can implement the whole insert function in about 20 lines of code

            That is such a good explanation of what I’m trying to tell people in my trainings that I’m going to have to sneak this exact quote into my training somehow!

            1. 3

              That said, git repositories only ever grow over time, and nobody seems to mind too much.

              On my clone of Linux from September 1st, 2020 (commit 9c7d619be5a002ea29c172df5e3c1227c22cbb41) the .git folder is 1183 megs larger than the next biggest folder, drivers:

              ckie@cookiemonster ~/git/linux -> du -sBM * .git | sort -nr
              1848M   .git
              665M    drivers
              134M    arch
              

              Shallow-cloning improves it but it’s still pretty big:

              git clone --depth 10 file:///home/ckie/git/linux linux-shallow
              
              ckie@cookiemonster ~/git/linux-shallow -> du -sBM .git
              211M    .git
              

              I’m pretty lucky to have unlimited bandwidth and a (relatively) fast internet connection most of the time, but not everyone can afford that. Even for me, running git pull (note: on the full clone, not the shallow one - I haven’t fixed the remotes on that one) still hasn’t finished. (Future note: my internet speed randomly dropped to dialup speeds while writing this comment, so I can’t really run that test semi-reliably anymore.)
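
              Aside: another option git offers here is a partial (blobless) clone, which keeps the full commit history but fetches file contents lazily. A rough sketch (the target directory name is made up; the server has to allow filters, which git.kernel.org does):

              ```shell
              # Blobless partial clone: all commits and trees, but blobs are
              # only downloaded when a checkout (or diff, blame, ...) needs them.
              git clone --filter=blob:none \
                  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git \
                  linux-partial

              cd linux-partial
              git log --oneline -5   # full history, unlike a shallow clone
              ```

              Unlike --depth, git pull and git log behave normally afterwards, at the cost of extra round-trips whenever old blobs are touched.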

              1. 1

                Is the whole process faster if you download a tarball of the entire repo (.git included) and then pull up to the latest, or is fetch-and-rebase some sort of O(n) operation in the depth of the commit tree?

                1. 1

                  It won’t save bandwidth, but if you have a flaky or unreliable connection, it may be much easier to download a bundle of the Linux git repo (verify it, clone from it locally, and add the proper remote afterwards) instead of cloning it over the internet from scratch.

                  https://www.kernel.org/cloning-linux-from-a-bundle.html

                  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/clone.bundle

                  It’s not prominently listed on kernel.org, so not everyone is aware of it.
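
                  For anyone trying it, the steps that page describes are roughly the following (a sketch; the -c flag on wget is just there so an interrupted download can resume):

                  ```shell
                  # 1. Download the bundle; -c resumes an interrupted download:
                  wget -c https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/clone.bundle

                  # 2. Check integrity and clone from the local file - no network needed:
                  git bundle verify clone.bundle
                  git clone clone.bundle linux
                  cd linux

                  # 3. Point origin at the real repository and catch up:
                  git remote set-url origin \
                      https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
                  git fetch origin
                  ```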

                2. 1

                  Yeah, implementation details matter.