1. 30
  1. 9

    What an excellent article! Not just for the result but for the detailed explanation of how this kind of optimization is done. First improve the function you’re calculating, then optimize the code so it runs well on a modern CPU. I’m particularly impressed that the vectorizing compiler does so well with just a few accommodations; no weird assembler is required to get 90%+ of the improvement.

    1. 5

      It’s common for vendor compilers (IBM, Intel, etc.) to replace calls to standard library functions with calls to optimized primitives.
      Here are some results for the original routine with Intel’s compiler; I added restrict to all pointer arguments (a minimal sketch of such a routine follows the list).

      • -O3 - the atan2f call is replaced with a vectorized equivalent in the SVML library.
      • -O4 - it calls the standard library atan2f.
      • -O4 -march=native - it uses AVX-512 and calls the SVML atan2f equivalent.
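
      For reference, here is a minimal sketch, not taken from the article or the comment above, of the kind of routine being described: a plain loop over atan2f with restrict-qualified pointer arguments, which an auto-vectorizing compiler such as icc at -O3 may turn into calls to SVML’s vectorized atan2f. The function name atan2_array is made up for illustration.

      ```c
      #include <math.h>
      #include <stddef.h>

      /* Plain scalar loop. With restrict telling the compiler the arrays
         cannot alias, icc at -O3 can vectorize it and replace the atan2f
         call with an SVML equivalent (visible in the generated assembly). */
      void atan2_array(float *restrict out,
                       const float *restrict y,
                       const float *restrict x,
                       size_t n)
      {
          for (size_t i = 0; i < n; ++i)
              out[i] = atan2f(y[i], x[i]);
      }
      ```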
      1. 2

        The author of the article is also on lobste.rs: hi @francesco!

        Echoing what @nelson wrote, I also liked the depth of the article. Through it I also found out about https://uica.uops.info/; that tool is super useful for figuring out how different code behaves.

        1. 2

          Quick, tell the NumPy and Matlab development teams :D

          1. 2

            @francesco: Nice article! I’ve noticed a typo: at one point you quote the branches-per-element number as 0.3 instead of 0.13.

            1. 1

              Indeed, thanks, I’ve fixed it.

            2. 2

              First, this is a good idea. Second, it is such a good idea that people have already done it, not only for atan2f but for all the functions in the math library: use the SLEEF Vectorized Math Library.
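
              As a rough illustration, not part of the comment above, calling SLEEF’s vectorized single-precision atan2 from C could look like the sketch below. The function name Sleef_atan2f8_u10 (8-wide AVX floats, 1.0 ULP accuracy) is assumed from SLEEF’s usual naming scheme; check the headers of your SLEEF build for the exact variants available.

              ```c
              #include <immintrin.h>
              #include <math.h>
              #include <stddef.h>
              #include <sleef.h>   /* link with -lsleef */

              /* Sketch: process 8 floats per iteration with SLEEF's vectorized
                 atan2, then fall back to scalar atan2f for the remainder. */
              void atan2_sleef(float *restrict out,
                               const float *restrict y,
                               const float *restrict x,
                               size_t n)
              {
                  size_t i = 0;
                  for (; i + 8 <= n; i += 8) {
                      __m256 vy = _mm256_loadu_ps(y + i);
                      __m256 vx = _mm256_loadu_ps(x + i);
                      _mm256_storeu_ps(out + i, Sleef_atan2f8_u10(vy, vx));
                  }
                  for (; i < n; ++i)
                      out[i] = atan2f(y[i], x[i]);
              }
              ```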