An overview of performance issues with Python and how to work around them.
Regarding startup time, I heard about the Mercurial command server a long time ago, so this was a good reminder. The idea of the “coprocess protocol” in Dev Log #8: Shell Protocol Designs is to make it easy for EVERY Unix binary to be a command server, no matter what language it’s written in.
I have a cool demo that uses some file descriptor tricks to accomplish this with minimal modifications to the code. It won’t work on Windows though, which may be an issue for some tools like Mercurial.
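The command-server idea can be sketched in a few lines (a toy illustration only, not the actual coprocess protocol from the dev log — the real design also passes file descriptors for each request's stdin/stdout/stderr, which is the part that doesn't port to Windows):

```python
import io

def serve(infile, outfile):
    """Toy command-server loop: pay interpreter startup once, then
    handle many requests over a pipe.  One whitespace-split command
    per line; real dispatch is elided."""
    for line in infile:
        argv = line.split()
        if not argv:
            continue
        if argv[0] == "exit":
            break
        # A real server would dispatch argv here; we just acknowledge.
        outfile.write("ok %s\n" % argv[0])

# Demo with in-memory streams standing in for the pipe:
out = io.StringIO()
serve(io.StringIO("build foo\nexit\n"), out)
assert out.getvalue() == "ok build\n"
```

The names here (`serve`, the `ok` reply format) are made up for illustration; the point is only that the process outlives any single command.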
I also embed the Python interpreter and ship it with Oil, which reduces startup time. sys.path has a single entry for Python modules, and every C module is statically linked. I thought about making this reusable, but it’s a pretty messy process.
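A rough way to see the effect being described — numbers are machine-dependent, and `-S` (which skips `site` initialization and its `sys.path` setup) only approximates a trimmed path:

```python
import subprocess
import sys
import time

def startup_time(extra_args, n=5):
    """Average wall-clock time to start the interpreter and exit."""
    t0 = time.time()
    for _ in range(n):
        subprocess.check_call([sys.executable] + extra_args + ["-c", "pass"])
    return (time.time() - t0) / n

# -S skips the site module, roughly simulating a minimal sys.path.
full = startup_time([])
bare = startup_time(["-S"])
print("default: %.1f ms   -S: %.1f ms" % (full * 1000, bare * 1000))
```

Statically linking every C module goes further still, since those imports never touch the filesystem at all.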
Rewriting Python’s Build System From Scratch
Dev Log #7: Hollowing Out the Python Interpreter
Regarding function call overhead, attribute access, and object creation, the idea behind “OPy” is to address those things for Oil, although I haven’t gotten very far along with that work :-)
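For concreteness, the three overheads in question are easy to observe with `timeit` (a rough sketch; absolute numbers vary by machine and CPython version):

```python
import timeit

class Point(object):
    def __init__(self, x):
        self.x = x

def f(x):
    return x

p = Point(1)
N = 100000

call = timeit.timeit(lambda: f(1), number=N)       # function call overhead
attr = timeit.timeit(lambda: p.x, number=N)        # attribute access
alloc = timeit.timeit(lambda: Point(1), number=N)  # object creation
print("call %.4fs  attr %.4fs  alloc %.4fs" % (call, attr, alloc))
```

Each of these is a dictionary lookup and/or heap allocation under the hood, which is where an ahead-of-time compiler like OPy could in principle help.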
I guess the bottom line is that we’re both stretching Python beyond its limits :-/ It’s a nice and productive language, so that tends to happen.
Thank you for the context! It is… eerie that we seem to have gone down similar rabbit holes with embedding/distributing Python!
You may be interested in https://github.com/indygreg/python-build-standalone, which is the sister project to PyOxidizer and aims to produce highly portable Python distributions. There’s still a ways to go. But you may find it useful as a mechanism to produce CPython build artifacts (and their dependencies) in such a way that can easily be recombined into a larger binary, such as Oil.
The general coprocess protocol looks really interesting. It might be nice to surface it somewhere more visible and trackable… GitHub wiki pages are notoriously bad for keeping up with changes to docs like this.
Thanks, great article. I feel like a lot of these points agree/resonate with my experiences too. However, I must respectfully hope the opposite when you say “And maybe, just maybe, I can cause the smart people who actually maintain Python distributions to […] provide improvements to mitigate them.” I’m not convinced that many of these things should necessarily be changed in the general case.
I’ll agree they’re certainly worth being aware of, and can be a gotcha in some cases. But at the same time, I see a couple of big issues with potential changes to address them. First is just the general danger of changing semantics to improve performance. There may be ways to avoid that, but issues around function call and member access overhead seem pretty deeply baked into the dynamic nature of Python. If we can come up with optimizations that improve performance without changing that, great, but I wouldn’t want this to become a question of “How do we restrict the language to make CPython faster?”
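As a concrete example of why those overheads are hard to remove without changing semantics: any attribute lookup can be redirected after the fact, so the interpreter can’t simply compile `obj.x` down to a fixed offset the way a static language would.

```python
class Point(object):
    def __init__(self):
        self.x = 1

p = Point()
assert p.x == 1

# Monkey-patch the class after the instance exists.  A data descriptor
# on the class takes precedence over the instance dict, so existing
# objects must see the change -- a naively cached lookup would be wrong.
Point.x = property(lambda self: 2)
assert p.x == 2
```

(`Point` is just an illustrative name.) Optimizing this away in general means either speculating and deoptimizing, as JITs do, or restricting what the language allows.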
Second, and perhaps even more important in my mind, is not making changes that impair what I’d call the “developer ergonomics” of the language for the sake of performance, even where semantics are preserved or unspecified. There are many things about Python that contribute in tiny but compounding ways to make it generally a pleasure to use, and I’d suspect that’s a big part of the language’s popularity with others as well. What really brought this to mind for me was your point about inefficiencies stemming from the mapping of packages and modules to filesystem structure. While I totally get why projects like PyOxidizer (which is super cool, btw) would want or need to change this for certain use cases, I also think it’s a hugely developer-friendly default behavior that I wouldn’t want to see lost in the “standard” implementation.
Some of these really look like trivial compiler optimizations that other languages provide behind -O switches. Any chance one can get a bytecode optimizer for Python?
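For what it’s worth, CPython already does a small amount of this: the compiler has a peephole pass (constant folding and the like), and the -O/-OO switches exist, though they mainly strip assert statements and docstrings rather than optimize. The folding is visible with `dis`:

```python
import dis

def seconds_in_two_hours():
    return 2 * 3600

# The compiler folds 2 * 3600 at compile time, so the bytecode carries
# the constant 7200 directly instead of a runtime multiply.
instrs = list(dis.Bytecode(seconds_in_two_hours))
assert any(i.argval == 7200 for i in instrs)
print([i.opname for i in instrs])
```

Anything deeper than this runs into the dynamic-semantics problems discussed above.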