Based on the graph, it looks like this trades away a significant amount of single-threaded performance in exchange for removing the GIL. That probably makes sense for Google, but I’m not sure how many of Python’s usual users would be interested in taking that hit.
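For context, the GIL cost in question is the usual one: pure-Python CPU-bound threads don’t run in parallel under CPython, so removing the GIL can win on multi-threaded throughput even while single-threaded speed drops. A minimal, illustrative timing sketch (numbers are machine-dependent; under CPython the two-thread run takes roughly as long as the sequential one):

```python
import threading
import time

def count(n):
    # Pure-Python CPU-bound loop. Under CPython's GIL, only one
    # thread executes bytecode at a time, so threading this work
    # buys no parallelism.
    while n > 0:
        n -= 1

def timed(fn):
    start = time.time()
    fn()
    return time.time() - start

N = 2000000

# Run the workload twice back to back on one thread.
sequential = timed(lambda: (count(N), count(N)))

def threaded():
    threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Run the same workload on two threads at once.
parallel = timed(threaded)
print("sequential: %.2fs  two threads: %.2fs" % (sequential, parallel))
```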
Based on the graph, Fibonacci still seems to be the benchmark used to test the impact of moving a huge monolithic codebase with lots of I/O over to another runtime.
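For reference, the kind of naive recursive Fibonacci used in these comparisons looks like this (an illustrative sketch of the genre, not necessarily Grumpy’s exact benchmark). It stresses function-call overhead and interpreter dispatch, which is precisely why it says little about an I/O-heavy monolith:

```python
import time

def fib(n):
    # Deliberately naive: exponential recursion makes this a
    # pure CPU/call-overhead microbenchmark.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    start = time.time()
    result = fib(25)
    print("fib(25) = %d in %.3fs" % (result, time.time() - start))
```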
looks like this trades away a significant amount of single-threaded performance in exchange for removing the GIL.
Can you elaborate on this?
Potentially, the biggest advantage here is distribution. There have been a number of times when I’ve wanted to reach for Python for non-server/tooling-related stuff, but remembered that a bunch of people either don’t have Python installed or don’t realise they do. It’s a real shame it’s Python 2.7, though. I’ve been using 3.6 for all my latest projects and it’s by far the best version of Python I’ve used.
Potentially, the biggest advantage here is distribution. There have been a number of times when I’ve wanted to reach for Python for non-server/tooling-related stuff, but remembered that a bunch of people either don’t have Python installed or don’t realise they do.
If you need to cross-compile and you don’t have a CI server available, reaching for Go might make sense, but distributing Python scripts as self-contained executables is easy and has been possible for a long time. My preferred tool for that is cx_Freeze, but PyInstaller is also a great solution, depending on what you’re doing. The old complaint used to be that the resulting executables are large, but since a Hello, world in Go was around 5 MB last I checked, I’m not sure that applies much these days.
Edit: A literal Hello, world! in Go 1.7.4 on my machine is 1.5 MB, whereas it’s 3.3 MB with PyInstaller (using `pyinstaller -F` with Python 2.7.13 as the VM). Some quick experimentation indicated that the proportions held roughly constant as I added more libraries. So Go’s binaries are definitely smaller, but probably not by a meaningful amount if you prefer Python.
My experiences with cx_Freeze and py2exe have both been suboptimal. In particular, distributing things with PyQt and OpenGL support is quite a pain, especially to multiple platforms at once. I’ve only played with PyInstaller briefly, but in my experience these tools all suffer from a start-up cost: it’s quite hard to get started. For one of my projects I spent a long time chasing through old commits and manually applying patches to get things to work. So my comment is more an expression of a desire to make that whole process trivial rather than complex. That being said, in its current state Grumpy doesn’t look like it answers those problems just yet.
Out of curiosity, have you tried PyPy in stackless mode? It’s supposed to greatly help with concurrency, but I’ve not seen many field reports on it.
Nope - the problems I face with Python rarely involve the kind of scalability that more concurrency would solve, so I’ve never had the need!
Not having an interpreter mode is kind of a big omission, but maybe one they will fix. For data science as well as for general development, an interactive interpreter is a huge help.
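For comparison, CPython makes embedding an interpreter almost trivial via the stdlib `code` module, which is roughly the capability Grumpy would need to replicate. An illustrative sketch:

```python
import code

# An InteractiveInterpreter compiles and executes source strings
# against a namespace dict -- the core of any REPL loop.
namespace = {}
interp = code.InteractiveInterpreter(namespace)
interp.runsource("x = 6 * 7")
interp.runsource("y = x + 1")
print(namespace["x"], namespace["y"])  # 42 43
```

A full terminal REPL is just `code.interact()`, which wraps the same machinery in a read-eval-print loop.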
One has to wonder why not just use the compiler to translate the code to Go once, and then move forward with Go… Other than the REPL, is Python really giving them an advantage? I’d have to assume that at YouTube’s size and request volume, they’re using a strict subset of Python features, with a strict style guide.
Nevertheless, this is really neat.
EDIT: On second thought, never mind. The generated code is extremely far from being easy to work with.