1. 44
  1.  

    1. 8

      Python’s slow for some stuff, fast for other stuff. I noticed that processing a very large JSON file I happen to have to deal with was significantly faster with Python than Rust+serde_json - even in a release build.

      1. 3

        This surprises me because when I had to JSON encode a file with around 5,000 rows using Django + DRF, CPU was a significant bottleneck.

      2. 3

        I would have thought that python would be losing massively to native languages on CPU-intensive tasks, like node is. What am I missing?

        1. 9

Python libraries don’t have to be written in Python – they can be Python wrappers around other languages like C. IIRC the standard-library json module is a wholesale adoption of the third-party simplejson package, which provided both Python and C implementations. And there are other specialized third-party JSON encoder/decoder packages out there like orjson (written in Rust) which are built for speed.
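A quick way to check this from the interpreter – a minimal sketch assuming a standard CPython build, where the json module’s `_json` C accelerator is present (in a hypothetical build without it, these attributes are None and pure-Python fallbacks are used):

```python
import json
import timeit

# The stdlib json module ships a C accelerator (_json); the pure-Python
# scanner and encoder are only fallbacks. These are None when the
# accelerator is missing.
from json import encoder, scanner

print("C decoder available:", scanner.c_make_scanner is not None)
print("C encoder available:", encoder.c_make_encoder is not None)

# Rough timing of the C-backed parser on a medium-sized document:
doc = json.dumps([{"id": i, "name": f"row{i}"} for i in range(1_000)])
elapsed = timeit.timeit(lambda: json.loads(doc), number=100)
print(f"100 parses of a ~{len(doc) // 1024} KiB document: {elapsed:.3f}s")
```

So even though you call `json.loads` from Python, most of the parsing work happens in C.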

      3. 2

I am surprised. I’d expect Python JSON parsing to be reasonably fast, but I would have expected creating all the Python dictionaries and lists to be mallocing in a tight loop, and slow. Can I ask whether you were parsing into a Rust struct or having Rust create some sort of equivalent nested map/list structure?

    2. 7

By this, I mean the fact that MS is paying a team with Guido to speed up the language, and that we were all quite giddy when they announced a target of a 50% gain per release for the next five releases. We saw that coming, but still.

      This always seemed a little optimistic to me.

In my last project, we had sum(itertools.starmap(operator.mul, zip(p, q))) in a lot of places, then we created a function for it, then we realized we had bugs because of length mismatches and frantically added strict=True everywhere.

      What’s your use case for sumprod()?

      1. 5

        sum(itertools.starmap(operator.mul, zip(p, q)))

        What’s your use case for sumprod()?

        Isn’t that just a dot product?

        1. 3

          I’m dumb.

      2. 5

        This always seemed a little optimistic to me.

It always depends somewhat on your baseline. HHVM managed to more than double performance each year for multiple years because PHP was very slow. Starting from C, you only manage that kind of speedup when you’re doing autovectorisation for the first time or similar. A number of Python implementations are a lot faster than CPython, doing things like type specialisation in the AST, better dispatch in the interpreter, and JITing. A 50% speedup per release over five releases compounds to roughly a 7.5× speedup overall, and that’s less than the speedup you get going from a decent bytecode interpreter to a modest JIT in most Smalltalk-like languages. It seems quite feasible for CPython to get that kind of speedup.
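The compounding arithmetic behind that estimate, as a one-liner:

```python
# A 50% (1.5x) speedup per release, compounded over five releases:
per_release = 1.5
overall = per_release ** 5
print(f"{overall:.2f}x")  # 7.59x
```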

    3. 1

Wow, those benchmarks are very disappointing. I’m dealing with some Python performance problems at work and I was really looking forward to the supposed performance gains.