I’m kinda confused - the graphs at the beginning seem to show the opposite of what the author is claiming. For a given number of executions, the shorthand (in blue) seems to take more time, not less. What am I missing?
Looks like the axes are flipped. The x-axis is actually time taken, and the y-axis is the number of executions. The labels are wrong, too: notice how it says the “number of executions” is 0.000001.
Really? Since {} is recognized by the parser, I’d expect to generate the opcode directly as part of the bytecode compilation pass.
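That is in fact what happens; a quick dis session shows it (CPython, exact opcode names vary a bit by version):

```python
import dis

def literal():
    return {}

def call():
    return dict()

# The literal compiles straight to a BUILD_MAP opcode...
dis.dis(literal)
# ...while dict() has to look the name up and make a call at runtime.
dis.dis(call)
```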
Frankly, I’m surprised that dict() doesn’t compile to an opcode, since it’s easy to inline. I guess doing that would take away the ability to rebind what dict() does in the local scope (but I don’t know why anyone would care besides that).
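A toy example of that local rebinding (the lambda here is just a hypothetical stand-in, not anything you'd ship):

```python
def make_pair():
    # Shadow the builtin name in local scope.
    dict = lambda: {"shadowed": True}
    # dict() now resolves to the local binding,
    # but the {} literal still builds a plain dict.
    return dict(), {}

a, b = make_pair()
print(a)  # {'shadowed': True}
print(b)  # {}
```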
Why is this a surprise? Function calls will always have slight overhead, because of the indirection of a jump. That’s why inlining is a thing.
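You can measure that overhead yourself with a rough timeit sketch; the absolute numbers depend entirely on the machine, but dict() consistently comes out slower:

```python
import timeit

# One million executions of each form; only the relative gap is meaningful.
t_literal = timeit.timeit("{}", number=1_000_000)
t_call = timeit.timeit("dict()", number=1_000_000)
print(f"{{}}:     {t_literal:.3f}s")
print(f"dict(): {t_call:.3f}s")
```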
The surprising thing is I would expect {} to be syntactic sugar for dict().

You can even rebind it globally.

EDIT: use builtins instead of __builtins__, compare https://stackoverflow.com/questions/11181519/python-whats-the-difference-between-builtin-and-builtins

I’m pretty sure log10 casts to float before executing.
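Easy to check that the result is a float either way, whatever the input type:

```python
import math

# math.log10 converts its argument and always returns a float,
# even when the input is an int and the answer is exact.
result = math.log10(1000)
print(type(result), result)
```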