1. 13

  2. 2

    I’m kinda confused - the graphs at the beginning seem to show the opposite of what the author is claiming. For a given number of executions, the shorthand (in blue) seems to take more time, not less. What am I missing?

    1. 4

      Looks like the axes are flipped. The x-axis is actually time taken, and the y-axis is the number of executions. The labels are wrong, too: notice how it says the “number of executions” is 0.000001.

    2. 2

      Why is this a surprise? Function calls will always have slight overhead, because of the indirection of a jump. That’s why inlining is a thing.
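
      A quick, machine-dependent sketch of that overhead with timeit — absolute numbers will vary, but the literal typically wins:

      ```python
      import timeit

      # Compare the empty-dict literal against the dict() call.
      t_literal = timeit.timeit("{}", number=1_000_000)
      t_call = timeit.timeit("dict()", number=1_000_000)
      print(f"literal: {t_literal:.3f}s")
      print(f"dict():  {t_call:.3f}s")
      ```

      The call path pays for a global name lookup plus the call machinery on every iteration, which the literal skips entirely.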

      1. 10

        The surprising thing is I would expect {} to be syntactic sugar for dict().

        1. 3

          Really? Since {} is recognized by the parser, I’d expect the compiler to emit the opcode directly as part of the bytecode compilation pass.

          Frankly, I’m surprised that dict() doesn’t compile to an opcode, since it’s easy to inline. I guess doing that would take away the ability to rebind what dict() does in the local scope (but I don’t know why anyone would care about that).
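
          For the curious, a quick dis sketch confirms this (CPython; exact opcode names vary by version):

          ```python
          import dis

          # The literal compiles straight to a BUILD_MAP opcode...
          dis.dis(compile("{}", "<demo>", "eval"))

          # ...while dict() compiles to a name lookup plus a call.
          dis.dis(compile("dict()", "<demo>", "eval"))
          ```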

          1. 7

            You can even rebind it globally.

            $ python3
            >>> dict({1: 2})
            {1: 2}
            >>> import builtins
            >>> builtins.dict = list
            >>> dict({1: 2})
            [1]
            

            EDIT: use builtins instead of __builtins__, compare https://stackoverflow.com/questions/11181519/python-whats-the-difference-between-builtin-and-builtins
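
            And the flip side: since {} compiles to its own opcode rather than a name lookup, rebinding never touches it. A quick sketch (restoring the builtin afterwards):

            ```python
            import builtins

            original = builtins.dict
            builtins.dict = list           # rebind globally
            try:
                rebound = dict({1: 2})     # name lookup finds the rebound builtin
                literal = {1: 2}           # BUILD_MAP bypasses the name entirely
            finally:
                builtins.dict = original   # put things back

            print(rebound)  # [1]
            print(literal)  # {1: 2}
            ```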

      2. 1

        I’m pretty sure log10 casts to float before executing.
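
        For what it’s worth (assuming this is about the stdlib math module), log10 does return a float even for an int argument:

        ```python
        import math

        # The result is a float even though the input is an int.
        result = math.log10(100)
        print(type(result), result)  # <class 'float'> 2.0
        ```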