1. 51
  1. 3

    This article is maybe a bit more skeptical of Go’s GC tradeoffs than is warranted. Take compaction, for example – yes, Go trades off a compact heap for shorter pauses. But that’s maybe not a bad thing – we’re in a 64-bit world, and for a typical app with a smallish working set it’s going to be a long, long time before you fragment 64-bit virtual address space enough that you’ll OOM.

    There are certainly applications where compaction matters, but they’re a lot less prevalent than they were in the 32-bit era, and it feels like we never adjusted our concern about the magnitude of the problem accordingly – we just kept assuming compacting collectors made sense in the average case and never revisited the sanity of that assumption.

    1. 15

      long time before you fragment 64-bit virtual address space

      No one is concerned about the address space here. Fragmentation wastes physical memory.

      There are certainly applications where compaction matters, but they’re a lot less prevalent than they were in the 32-bit era

      Compaction is no less useful in a 64-bit environment.

      1. 0

        No one is concerned about the address space here. Fragmentation wastes physical memory.

        I’m extremely skeptical that this is a problem most apps would encounter. There are pathological cases in which you could, say, keep one byte per page alive and keep more than physical memory’s worth of those pages in your working set, but you’d really have to be working at it to get to that point. Your average Go deployment just doesn’t have that big a working heap – and isn’t doing heap allocations that granular. Optimizing for the corner case doesn’t make sense.
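        To make that corner case concrete, here’s a rough sketch (illustrative, not a benchmark) of the pathological pattern in Go: allocate a big batch of 1 KiB objects, keep only a sparse subset alive, and compare live bytes (HeapAlloc) to the pages the runtime still holds (HeapInuse). The sparse survivors pin mostly-empty spans.

        ```go
        package main

        import (
            "fmt"
            "runtime"
        )

        var keep [][]byte // sparse survivors that pin their spans

        func main() {
            // Allocate 64 Ki objects of 1 KiB each (~64 MiB total).
            batch := make([][]byte, 0, 1<<16)
            for i := 0; i < 1<<16; i++ {
                batch = append(batch, make([]byte, 1024))
            }
            // Retain every 64th object; the rest become garbage.
            for i := 0; i < len(batch); i += 64 {
                keep = append(keep, batch[i])
            }
            batch = nil
            runtime.GC()

            var m runtime.MemStats
            runtime.ReadMemStats(&m)
            // HeapInuse counts whole spans that still hold a live object;
            // HeapAlloc counts only the live bytes inside them.
            fmt.Println(m.HeapInuse >= m.HeapAlloc)
        }
        ```

        On a typical run HeapInuse can come out several times larger than HeapAlloc here, which is exactly the waste being debated – whether real workloads ever allocate this adversarially is the point of contention.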

        1. 2

          People downvoting this to zero might consider asking themselves how it is that C and C++ programs, or Python and Ruby servers with their heavy heap usage and 100s of megabytes of short-lived garbage generated per-request, nevertheless somehow manage to survive in production with months or more of uptime without being crippled by the dread spectre of physical memory fragmentation due to their total lack of memory compaction.

          1. 3

            C++ avoids memory fragmentation by using smarter allocators that reuse the same address segments whenever possible.

            Python and Ruby use malloc, one of the aforementioned smarter allocators.
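            A toy sketch of the idea (illustrative only – real allocators like tcmalloc and jemalloc use size-classed free lists with far more machinery): freed slots go back on a free list, so the next allocation of that size reuses the same slot instead of bumping forward forever.

            ```go
            package main

            import "fmt"

            // arena hands out fixed 16-byte slots and recycles them via a free list.
            type arena struct {
                slots [4][16]byte
                free  []int // indices of currently free slots
            }

            func newArena() *arena {
                a := &arena{}
                for i := len(a.slots) - 1; i >= 0; i-- {
                    a.free = append(a.free, i)
                }
                return a
            }

            // alloc pops a slot index off the free list.
            func (a *arena) alloc() int {
                idx := a.free[len(a.free)-1]
                a.free = a.free[:len(a.free)-1]
                return idx
            }

            // release pushes the slot back so it can be handed out again.
            func (a *arena) release(idx int) { a.free = append(a.free, idx) }

            func main() {
                a := newArena()
                first := a.alloc()
                a.release(first)
                second := a.alloc()
                fmt.Println(first == second) // the freed slot is reused
            }
            ```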

            1. 3

              Indeed.

              Now, the $1,000,000 question: does Go, the thing we’re discussing here, use a malloc-style free-list allocation system, or does Go use a bump pointer?

              (the answer)

            2. 1

              C++ and C programs don’t generally make heavy use of a GC, so they don’t need a compacting GC.

              The uptime of Ruby and Python applications tends not to be months, at least not in my experience. Using Eye or Monit to kill the app and restart it when it exceeds a few hundred megs is very common.

              1. 1

                I don’t know if I’d call that ‘common’ (I’ve been doing a lot of Ruby for the past 8 years). Your average Rails application boots at a few hundred megabytes and tends to stay roughly in that range, unless you’ve got a leak (accidentally retained ‘dead’ memory, or unreclaimable memory like strings you’re turning into symbols on a frequent basis) – then, people in a hurry tend to start killing it and rebooting rather than hunting down the real issue. But that’s not really related to any kind of physical memory wastage, and you see that behaviour even in the presence of compacting collectors (we reboot a Lucene instance once a week because it’s easier than hunting down its leaks).

                I’ve never seen that done in production for the issue “lobstersinabucket” was talking about – everything that should be GC’d getting GC’d, but in such a pattern that sparsely populated pages of memory leave a lot of physical memory wasted, and the reason for that is, as peter pointed out in the other reply, that Ruby’s allocator re-uses free space in pages where it can.

                Go’s allocator also does this, which is why the idea that without a compacting GC a significant proportion of physical memory will end up wasted on holes in virtual memory pages is unlikely in most workloads.
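                A rough way to see that reuse in Go itself (a sketch, not a benchmark): churn a million same-sized allocations and watch HeapInuse. Because freed slots in size-classed spans get reused, the in-use heap tracks the live set rather than the total bytes ever allocated.

                ```go
                package main

                import (
                    "fmt"
                    "runtime"
                )

                var sink []byte

                func heapInuse() uint64 {
                    var m runtime.MemStats
                    runtime.ReadMemStats(&m)
                    return m.HeapInuse
                }

                func main() {
                    runtime.GC()
                    before := heapInuse()
                    // ~64 MiB of total allocation, but at most one object live at a time;
                    // each iteration makes the previous slice garbage.
                    for i := 0; i < 1_000_000; i++ {
                        sink = make([]byte, 64)
                    }
                    runtime.GC()
                    after := heapInuse()
                    // The heap grew by far less than the 64 MiB allocated.
                    fmt.Println(after < before+8<<20)
                }
                ```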

                1. 1

                  I don’t know if I would call that common…

                  Well, here we disagree.

                  Would you say it is common to run Ruby and Python web applications under load – not heavy load, just the moderate load of a site like Airbnb – without restarting them for months at a time?

                  1. 1

                    I would say that most (like 99%) of Rails installations are not experiencing the, uh, “moderate” load of a site like Airbnb.

                    edit: anyway, that’s kind of tangential to the point. Even in high-traffic installations where nodes restart often, it’s not generally swiss-cheesed vm pages choking out physical memory that motivates the restarts.

                    1. 0

                      I would say that most (like 99%) of Rails installations are not experiencing the, uh, “moderate” load of a site like Airbnb.

                      Although it may seem high, practically speaking you get only tens of requests per second per app node at a site like that. It’s not like you’re doing 1000 QPS per server.

                      anyway, that’s kind of tangential to the point.

                      Well, it was something that you offered as support for your idea – and “falsity proves anything”. The original article raised concerns about Go advocates and their tendency towards exaggeration with regards to this issue and you would seem to be sadly on trend.

        2. [Comment removed by author]

          1. 2

            Yes and no. If you’ve got cache-locality sensitive code, you should really be taking steps to ensure that that data is being allocated in a cache-friendly way to begin with, not allocating it willy-nilly and then hoping the GC pushes it all nicely together for you. Compacting can help you improve locality only in places where you got your allocation patterns wrong to begin with.

            1. [Comment removed by author]

              1. 3

                Your lecture is besides the point.

                I’m…not lecturing you? Just trying to have a conversation here.

                Thing is, all code is cache-locality sensitive. A compacting GC can improve cache locality for all code automatically. Your argument that an automatic, compacting GC is not desirable because you can manage memory manually makes no sense.

                I’m not talking about managing memory manually – I’m talking about caring about your allocation patterns and how they impact your performance. You have to do that whether or not you have a Garbage Collector at hand.

                Consider Minecraft – they’re not managing their memory manually, but they absolutely care about cache-locality and allocating data in a cache-friendly manner. They’re definitely not just leaving that to the whim of the GC. Once you’ve got your major hotspots allocated in a cache-friendly manner, any further wins coming from the compactor are a) completely unpredictable between runs (you’re literally relying on blind luck) and b) swamped by other concerns anyway. Considering that compaction isn’t free, that’s not a compelling argument for it imo.
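                A sketch of what “caring about your allocation patterns” means in Go: a slice of structs is one contiguous, naturally cache-friendly block, while a slice of pointers scatters every element into its own heap allocation – and no compactor is guaranteed to undo that.

                ```go
                package main

                import "fmt"

                type particle struct{ x, y, z float64 }

                func main() {
                    // Cache-friendly: one contiguous allocation for all particles.
                    dense := make([]particle, 1000)
                    for i := range dense {
                        dense[i].x = 1
                    }

                    // Cache-hostile: 1000 separate allocations; the layout is up to
                    // the allocator, and a non-compacting GC never tidies it up.
                    sparse := make([]*particle, 1000)
                    for i := range sparse {
                        sparse[i] = &particle{x: 1}
                    }

                    var sum float64
                    for i := range dense {
                        sum += dense[i].x + sparse[i].x
                    }
                    fmt.Println(sum)
                }
                ```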

        3. 2

          I think what the author of this article doesn’t understand is that Java and Go are very different languages, and as such the characteristics of a good GC for Java are quite different from those of a good GC for Go. Java creates an order of magnitude more garbage than Go, often creating large numbers of objects with very short lifetimes. This means the GC is very stressed all the time in aggressively cleaning up short-lived garbage. Go, on the other hand, allocates most small objects on the stack, so the GC never has to worry about them at all. This frees up the GC to deal with mostly larger, longer-lived objects, which is a much easier task than the one the Java GC faces.

          The other side of GC being an easier task for Go is that it’s much easier to optimise. When you think about it, it’s not surprising that Go’s GC performs significantly better than Java’s. It’s just a much easier task.
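          The stack-allocation claim is observable with testing.AllocsPerRun: a small value that doesn’t escape costs zero heap allocations, while returning a pointer forces it onto the heap (a sketch – exact escape-analysis decisions can vary by compiler version).

          ```go
          package main

          import (
              "fmt"
              "testing"
          )

          type point struct{ x, y int }

          // sum keeps its point local, so escape analysis leaves it on the stack.
          func sum(a, b int) int {
              p := point{a, b}
              return p.x + p.y
          }

          // escape returns a pointer, forcing the point onto the heap.
          func escape(a, b int) *point { return &point{a, b} }

          var sink *point

          func main() {
              onStack := testing.AllocsPerRun(100, func() { _ = sum(1, 2) })
              onHeap := testing.AllocsPerRun(100, func() { sink = escape(3, 4) })
              fmt.Println(onStack, onHeap) // typically 0 and 1 allocations per call
          }
          ```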

          1. 13

            This means the GC is very stressed all the time in aggressively cleaning up short-lived garbage. Go, on the other hand, allocates most small objects on the stack, so the GC never has to worry about them at all.

            I develop a Go application which, with only moderate load, generates 100MB/s of very short-lived, small objects on the heap. This results in many gigabytes of RSS overhead which a different GC design may not have.

            Which GC is appropriate is more about allocation patterns than languages.
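            One knob worth noting for that pattern (a sketch, assuming roughly default settings): Go sizes its heap as a multiple of the live set via GOGC – the default of 100 lets the heap grow to about twice the live bytes before collecting, so a high allocation rate with a small live set still carries RSS proportional to that multiple. Lowering it trades GC CPU time for a smaller heap.

            ```go
            package main

            import (
                "fmt"
                "runtime/debug"
            )

            func main() {
                // Collect once the heap grows 20% past the live set, instead of
                // the default 100%: more GC cycles, less RSS overhead.
                old := debug.SetGCPercent(20)
                fmt.Println(old > 0) // previous setting (100 unless GOGC was set)
            }
            ```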