Posted this in the orange place, figure that it’s worth saying here:

FWIW, I think this mostly shows that the GOGC defaults will surprise you when your program keeps a small amount of live data but allocates quickly and is CPU bound. By default, it’s trying to keep allocated heap size under 2x live data. If you have a tiny crypto bench program that keeps very little live data around, it won’t take many allocations to trigger a GC. So the same code benchmarked here would trigger fewer GCs in the context of a program that had more data live. For example, it would have gone faster if the author had allocated a 100MB []byte in a global variable.
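
As a rough sketch of that ballast idea (the 100MB figure comes from the example above; the variable name and the surrounding program are just illustrative, not the actual benchmark):

```go
package main

import "fmt"

// ballast is never read or written after initialization; it exists only to
// inflate the live heap that the GC pacer measures against, so the 2x growth
// target is reached far less often by small, short-lived allocations.
var ballast = make([]byte, 100<<20) // ~100MB

func main() {
	// ... run the allocation-heavy, CPU-bound workload here ...
	fmt.Println("ballast size:", len(ballast)) // keep a reference so it’s clearly live
}
```

With GOGC=100, that one allocation means the heap has to roughly double past ~100MB of live data before a collection kicks in, rather than doubling past a few kilobytes.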

If your program is far from exhausting the RAM but is fully utilizing the CPU, you might want it to save CPU by using more RAM, equivalent to increasing GOGC here. The rub is that it’s hard for the runtime to ever be sure what the humans want without your input: maybe this specific Go program is a little utility that you’d really like to use no more RAM than it must, to avoid interference with more important procs. Or maybe it’s supposed to be the only large process on the machine, and should use a large chunk of all available RAM. Or it’s in between, if there are five or six servers (or instances of a service) sharing a box.

You can imagine heuristic controls that override GOGC in corner cases (e.g., assume it’s always OK to use 1% of system memory), or even a separate knob for max RAM use that you could set in place of GOGC. But the Go folks tend to want to keep things simple, so right now you sometimes have to play with GOGC values to get the behavior you want.
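
For what “play with GOGC values” looks like in practice, here’s a minimal sketch (the 400 is just an example value): you can set the GOGC environment variable when launching the process, or call runtime/debug.SetGCPercent from inside it.

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Same effect as launching the process with GOGC=400: let the heap grow to
	// roughly 5x live data before collecting, instead of the default 2x (GOGC=100).
	prev := debug.SetGCPercent(400)
	fmt.Printf("previous GC percent: %d, now 400\n", prev)

	// ... allocation-heavy, CPU-bound work goes here ...
}
```

Setting it from the environment (e.g. `GOGC=400 ./yourprog`, or `GOGC=off` to disable the GC entirely) is usually easier for the “little utility vs. big server” tuning described above, since it doesn’t require a code change.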