1. 18
    1. 2

      The code assumes the comparison is cheap, and even if you happen to find the match on the first line, you still do N comparisons before the result is returned. Also, in my experience, whenever I’ve used a binary search the array size wasn’t fixed.
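
      To make the contrast concrete, the early-exit variant I have in mind might look something like this in D (just a sketch, the name and types are mine, not the article’s):

          // Classic early-exit binary search: it returns as soon as a probe
          // hits the needle, so a lucky probe finishes in far fewer than
          // log2(N) comparisons.
          bool earlyExitSearch(const(int)[] haystack, int needle, out size_t index)
          {
              size_t lo = 0;
              size_t hi = haystack.length;
              while (lo < hi)
              {
                  const mid = lo + (hi - lo) / 2;
                  if (haystack[mid] == needle)
                  {
                      index = mid;
                      return true;
                  }
                  if (haystack[mid] < needle)
                      lo = mid + 1;
                  else
                      hi = mid;
              }
              return false;
          }

          unittest
          {
              size_t i;
              assert(earlyExitSearch([1, 3, 5, 7, 9], 7, i) && i == 3);
              assert(!earlyExitSearch([1, 3, 5, 7, 9], 4, i));
          }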

      1. 2

        *log N comparisons

        I don’t think that not optimizing for the “quick” case is an issue: using a “regular” binary search, half the elements require exactly log N comparisons, and 7/8 of the elements require at least log N - 2 comparisons, so by always doing log N comparisons we aren’t losing much.

        That said, the D code gets optimized to use conditional moves, eliminating all branching. I haven’t been able to convince the compiler to do the same with the Zig version; you can try it yourself here.
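
        For illustration, a branchless lower bound along these lines might look something like this in D (a rough sketch in my own words, not the article’s exact code):

            // Fixed-iteration lower bound: the range is halved every step, so
            // every lookup does ~ceil(log2(N)) comparisons whether or not the
            // needle was already seen. The loop body only chooses between two
            // values, which is the kind of pattern that tends to compile to a
            // conditional move instead of a branch.
            size_t branchlessLowerBound(const(int)[] haystack, int needle)
            {
                size_t base = 0;
                size_t len = haystack.length;
                while (len > 1)
                {
                    const half = len / 2;
                    base += (haystack[base + half - 1] < needle) ? half : 0;
                    len -= half;
                }
                // For a non-empty haystack this is the first element >= needle,
                // unless the needle is greater than everything; the caller still
                // has to check the element at the returned index.
                return base;
            }

            unittest
            {
                auto data = [1, 3, 5, 7, 9, 11];
                assert(branchlessLowerBound(data, 7) == 3); // exact hit
                assert(branchlessLowerBound(data, 8) == 4); // first element >= 8
                assert(branchlessLowerBound(data, 0) == 0); // smaller than everything
            }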

        1. 3

          Passing the correct flag to zig (-OReleaseFast instead of -Drelease-fast) makes the code nearly identical.

          -Drelease-fast was the old flag used by the zig build system, while -OReleaseFast is the flag used when invoking the compiler directly. The build system now uses -Doptimize=ReleaseFast. Not sure why godbolt didn’t show you an error…
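
          Concretely, with a hypothetical search.zig, the two invocations would be something like:

              zig build-exe search.zig -OReleaseFast    # invoking the compiler directly
              zig build -Doptimize=ReleaseFast          # going through the build system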

          Edit: looks like zig build-exe has a -D option to define C macros… that’s definitely a footgun :/

            -D[macro]=[value]         Define C [macro] to [value] (1 if [value] omitted)
          
          1. 2

            Ah, that’s a dumb mistake to make, thanks for correcting it!

    2. 1

      That’s neat. I was curious what the Zig code would look like. Pretty similar to my version in D.

      Performance-wise, the D compiler (ldc in this case) generates pretty tight code, three instructions per comparison, using conditional moves.

      But I haven’t done any serious performance benchmarks. That would be an interesting exercise.
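
      A quick-and-dirty harness would probably be enough for a first number; something like this in D, assuming a branchlessLowerBound(const(int)[], int) like the sketch upthread:

          import std.array : array;
          import std.datetime.stopwatch : benchmark;
          import std.random : uniform;
          import std.range : iota;
          import std.stdio : writeln;

          void main()
          {
              // A million sorted ints and a fixed batch of random needles.
              auto haystack = iota(0, 1 << 20).array;
              int[] needles;
              foreach (i; 0 .. 10_000)
                  needles ~= uniform(0, 1 << 20);

              size_t sink; // consume the results so the searches aren't optimized away
              auto timings = benchmark!({
                  foreach (n; needles)
                      sink += branchlessLowerBound(haystack, n); // assumed helper, see upthread
              })(100);

              writeln("100 x 10k lookups: ", timings[0]);
              writeln(sink);
          }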