1. 20
    1. 5

      I’m actually surprised. Completely did not expect the smaller allocation to make any difference. Since the only difference should be the initialisation time… that sounds like a lot of time for simple clears. Unless there’s more page faulting than I would expect.

      The next improvement then would be to preallocate that slice and clear instead of making a new one.

      So the accounting of “allocs” seems weird to me. Here are 4 extra variants:

      Benchmark1          1224            899297 ns/op           30092 B/op        218 allocs/op
      Benchmark2           640           1803190 ns/op           18012 B/op         24 allocs/op
      Benchmark3         24681             46804 ns/op           17416 B/op          8 allocs/op
      Benchmark4          2342            440008 ns/op           17418 B/op          8 allocs/op

      1 is original, 2 is original with clear instead of make, 3 is slice, 4 is slice with clear instead of make.

      I feel like this will need some deep dive to understand how make is better optimised than a loop clearing the values. And why does benchmark 4 have the same number of allocs as 3?

      And why is the difference so large for clearing? Clearing 256 bytes of data should be trivial, unless Go doesn’t optimise that loop at all…
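      For reference, the slice variants (3 and 4) look roughly like this — a sketch with my own names; the Benchmark wrappers and the map-based variants 1 and 2 are omitted:

```go
package main

// Rough sketch of variants 3 and 4: a fresh make per call versus
// reusing one slice and clearing it with a plain loop.

const size = 256

// Variant 3: allocate a fresh slice every time; the runtime hands it back zeroed.
func freshSlice() []bool {
	return make([]bool, size)
}

// Variant 4: reuse a package-level slice, clearing it element by element.
var reused = make([]bool, size)

func clearedSlice() []bool {
	for i := 0; i < len(reused); i++ {
		reused[i] = false // naive element-at-a-time clear
	}
	return reused
}

func main() {
	println(len(freshSlice()), len(clearedSlice()))
}
```

      Each of these would be wrapped in a standard BenchmarkN(b *testing.B) loop and run with go test -bench -benchmem to get the numbers above.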

      1. 3

        Looks like Go is silly at (not) optimising trivial loops. make for a slice becomes DUFFZERO $276, which I assume is unrolled. Clearing the same slice becomes a really standard, basic, byte-at-a-time loop, which… why are you like this, Go?

        The funny thing is that this actually ends up going against the title of the post. Go can’t optimise a basic slice clear and a (potentially) memory-wasteful version stays 10x faster, because it’s got known optimisations hardcoded.

        The alloc count turns out to not include stack allocations, which makes sense, so the count is equivalent for both the make variant and for a reused global.

        1. 5

          I thought one of the points of keeping Go a simple language was that they can make exactly these kinds of compiler optimizations?

          Also explains why the dumbest thing I could think of in Rust was 3 times faster :-/

          1. 3

            Compile times. The compiler team has been up front that they are willing to trade performance for faster compile times.

            As of a couple of years ago, the rules for what kinds of functions could be inlined were a great example of this. Only the simplest of functions were eligible.
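            A hedged sketch of what that meant in practice (exact rules vary by Go version; go build -gcflags=-m prints the compiler’s inlining decisions):

```go
package main

// Build with: go build -gcflags=-m
// The diagnostics below are illustrative, not exhaustive.

// A tiny leaf function like this has long been eligible for inlining.
func double(x int) int {
	return x * 2
}

// For many releases, merely containing a for loop disqualified a
// function from inlining, no matter how small it was.
func sumTo(n int) int {
	s := 0
	for i := 0; i <= n; i++ {
		s += i
	}
	return s
}

func main() {
	println(double(21), sumTo(9))
}
```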

          2. 2

            They kept everything simple. For example, only adopting a register-based calling convention two years ago.


        2. 2

          That hasn’t been my experience. I’m having difficulty writing code that clears a slice without having it be optimized to a runtime call that does vectorized clearing, etc. See https://go.godbolt.org/z/e9hnbcaKd. What code did you use to clear the slice?
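          For concreteness, this is the shape of loop the compiler is known to recognize — the range-clear idiom; whether it actually lowers to a memclr-style runtime call depends on the Go version:

```go
package main

// The canonical range-clear idiom. For element types whose zero value
// is all zero bits, the compiler can recognize this loop and lower it
// to a single runtime memclr-style call (version-dependent).
func clearBools(s []bool) {
	for i := range s {
		s[i] = false
	}
}

func main() {
	s := make([]bool, 8)
	s[2], s[5] = true, true
	clearBools(s)
	// Go 1.21+ also has a built-in that does the same thing: clear(s)
}
```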

          1. 1

            So both a basic for i:=0... and a for i:=range... with slice[i]=false are slower (10x and 2x respectively) than the DUFFZERO that Go uses for clearing a new local.

    2. 5

      It’s possible to exploit the fact that inputs to the hasDuplicates function are always of a known fixed size, and avoid the extra allocation like this:

      // needs "fmt" and "math/bits"
      const width = 14

      func hasDuplicates(bb []byte) bool {
      	if len(bb) != width {
      		panic(fmt.Errorf("want []byte of len %d, got %d", width, len(bb)))
      	}
      	var x uint64
      	for _, b := range bb {
      		x |= 1 << (b - 'a')
      	}
      	return bits.OnesCount64(x) != width
      }
      Same result, but we use a single uint64 as a bitmap.
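      A self-contained sketch of the same idea, with sanity checks (it assumes the input is lowercase a–z; the example inputs are mine):

```go
package main

import (
	"fmt"
	"math/bits"
)

const width = 14

// hasDuplicates reports whether any letter repeats in a window of
// exactly `width` lowercase a–z bytes, using one uint64 as a bitmap:
// bit (b - 'a') is set for each letter seen, and width distinct
// letters set exactly width bits.
func hasDuplicates(bb []byte) bool {
	if len(bb) != width {
		panic(fmt.Errorf("want []byte of len %d, got %d", width, len(bb)))
	}
	var x uint64
	for _, b := range bb {
		x |= 1 << (b - 'a')
	}
	return bits.OnesCount64(x) != width
}

func main() {
	fmt.Println(hasDuplicates([]byte("abcdefghijklmn"))) // false: all distinct
	fmt.Println(hasDuplicates([]byte("aacdefghijklmn"))) // true: 'a' repeats
}
```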

    3. 1

      I would’ve liked to see [256]bool included, to see whether avoiding the heap allocation entirely helped.

      1. 1

        That’s similar to what I was trying to test in the other comment. It seems that the whole slice ended up on the stack anyway, even with the size only mentioned in the make arguments.
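        One hedged way to confirm this is the compiler’s escape analysis diagnostics (go build -gcflags=-m); with a constant size and no reference escaping the call, the slice’s backing store can live on the stack:

```go
package main

// Build with: go build -gcflags=-m to see escape-analysis decisions.
// With a constant size and no reference outliving the call, the
// compiler is free to place this make on the stack, so it never
// shows up in the allocs/op count.
func countDistinct(bb []byte) int {
	seen := make([]bool, 256) // expected diagnostic: "make([]bool, 256) does not escape"
	n := 0
	for _, b := range bb {
		if !seen[b] {
			seen[b] = true
			n++
		}
	}
	return n
}

func main() {
	println(countDistinct([]byte("abca"))) // 3 distinct bytes
}
```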

        1. 1

          Yeah I did a test as well and there’s no difference in performance.

    4. 1

      This is a repost.

      1. 2


        Not a repost actually, just the same AoC problem and solution.

      2. 2

        I’m sorry to hear that. I don’t even check; I just count on the duplicate detection to let me know. I would delete the post, but now this thread has comments.

        EDIT: ah, good—not a repost.