1. 19
  1.  

  2. 3

    This is quite the hack, especially since it relies on virtual memory to work. I wonder if this is something that would work on the .NET or JVM GCs? It also makes me think of that story of the video game programmer who allocated 4MB (I think) “just in case” while developing a game.

    1. 7

      This does seem like a hack. On the JVM you can just set the min / max heap, which would roughly accomplish the same thing.

      When I used to work on low-latency web services on the JVM, we would generally configure the min and max heap sizes to be the same, pinned to the amount of memory available on the machine type. Then we would tune the new generation (eden) size to fit the short-lived, per-request objects. Essentially what’s described on this page: https://docs.oracle.com/cd/E19900-01/819-4742/abeik/index.html (This was a few years ago though, so it may have changed.)
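
      As a rough illustration (the flag names are standard HotSpot options, but the sizes here are invented), that kind of setup looks something like:

      ```
      # Pin the min and max heap to the same size, then give the young
      # generation (eden) enough room to hold short-lived request objects.
      java -Xms8g -Xmx8g -XX:NewSize=2g -XX:MaxNewSize=2g -jar service.jar
      ```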

      I’m surprised golang doesn’t support something like this out of the box. Possibly this issue? https://github.com/golang/go/issues/23044 I guess if the hack works then it’s fine :)
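
      For comparison, the knob Go does ship with is GOGC (also reachable as runtime/debug.SetGCPercent), which is relative to the live heap rather than an absolute min/max; a minimal sketch, with the 200% value picked arbitrarily:

      ```go
      package main

      import (
          "fmt"
          "runtime/debug"
      )

      func main() {
          // GOGC=200 (set programmatically here) lets the heap grow to roughly
          // 3x the live set before the next collection: a relative knob, not
          // the absolute min/max heap the JVM exposes.
          old := debug.SetGCPercent(200)
          fmt.Println("previous GOGC:", old)
      }
      ```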

      1. 2

        I’m not saying that it’s bad to use a hack like that, just that it’s something that definitely relies on quite a few implementation-defined behaviors.

        There are some quotes about how a garbage collector allows you to pretend you have infinite memory. This isn’t infinite, but it is quite the large allocation.

        1. 1

          Yes, that issue. Both min and max heap sizes seem useful for different situations; some .NET GC developer suggested “you shouldn’t have to know about GC internals” should be the qualification for GC knobs, and it seems like this passes that test: independent of GC internals, you sometimes know you can use X GB of memory before Bad Things Happen, or you know you don’t care about usage up to Y MB.

        2. 2

          Is the first story here the one you mentioned? https://www.dodgycoder.net/2012/02/coding-tricks-of-game-developers.html

          1. 2

            I read it from the Gamasutra link originally. It stuck in my head though, and it’s the sort of thing I’d definitely do when faced with a similar type of situation.

            I haven’t been there yet, tho.

          2. 2

            It’s certainly a hack, but we all rely on virtual memory to work. Virtual memory is an integral part of the concept of a process, which is the core abstraction an OS provides. The notable thing this trick actually depends on is memory over-provisioning (overcommit), which can be disabled, though in practice we all depend on over-provisioning too because of the fork/exec pattern.

            If you have the choice between adding one memoryBallast := make([]byte, 1024 * 1024 * 1024 * 10) to the main function, or forking the Go runtime, making the necessary changes to tune the GC, deploying your fork to production, and then going through the process of hopefully getting your patch upstreamed (at which point you’re either safe, or your patches are rejected and you have to maintain your own fork forever)… well, I know which one I’d choose.
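
            To make that concrete, here is roughly what the ballast trick looks like (a sketch; the 10GiB size and the KeepAlive placement are illustrative). Because the slice is allocated but never written to, an overcommitting OS never has to back those pages with physical memory, which is exactly the virtual-memory dependence mentioned above:

            ```go
            package main

            import "runtime"

            func main() {
                // Ballast: a large, never-touched allocation that inflates the live
                // heap, so the GC's next-cycle target (roughly live * (1 + GOGC/100))
                // moves far away and collections run much less often.
                ballast := make([]byte, 10<<30) // 10 GiB of virtual address space

                // ... run the actual service here ...

                // Keep a reference so the ballast isn't collected while the
                // program is still running.
                runtime.KeepAlive(ballast)
            }
            ```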

            1. 1

              > I wonder if this is something that would work on the .NET or JVM GCs?

              At least the JVM’s G1GC uses a (horribly named) “nursery”, so most GC runs only look at newly allocated objects and are quite inexpensive. I think it shines in exactly this kind of use case.

            2. 2

              The other thing about this strategy that seems nice to me is that it should be quite stable. Say the program currently allocates ~400MB between GC cycles, and you want it to allocate ~10GB between cycles instead. Two strategies (the arithmetic for both is sketched in code below):

              1. you turn GOGC up to 2500, or
              2. you add 9.6GB of ballast and leave GOGC at 100

              Now let’s say a subsequent change to the program makes it use 50% more live memory. What happens next depends on which option you picked:

              1. with GOGC at 2500, the program now allocates ~15GB between GC cycles.
              2. with 9.6GB of ballast, the program now allocates ~10.2GB between GC cycles.

              Alternatively, say a subsequent change to the program makes it use 50% less live memory:

              1. with GOGC at 2500, the program now allocates ~5GB between GC cycles.
              2. with 9.6GB of ballast, the program now allocates ~9.8GB between GC cycles.
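
              To check that arithmetic (a quick sketch; the 400MB live heap, GOGC=2500, and 9.6GB ballast figures come from the scenarios above):

              ```go
              package main

              import "fmt"

              // allocBetweenGCs estimates how much a program can allocate between GC
              // cycles: with Go's pacer, the next cycle triggers once the heap has
              // grown by roughly live * GOGC/100 beyond the current live heap.
              func allocBetweenGCs(liveGB, ballastGB, gogc float64) float64 {
                  return (liveGB + ballastGB) * gogc / 100
              }

              func main() {
                  for _, live := range []float64{0.4, 0.6, 0.2} { // baseline, +50%, -50% live heap
                      fmt.Printf("live %.1fGB: GOGC=2500 -> %.1fGB, ballast 9.6GB -> %.1fGB\n",
                          live,
                          allocBetweenGCs(live, 0, 2500),
                          allocBetweenGCs(live, 9.6, 100))
                  }
              }
              ```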