1. 32

  2. 14

    This is often undervalued, but shouldn’t be! Moore’s Law doesn’t apply to humans, and you can’t effectively or cost-efficiently scale up by throwing more bodies at a project. Python is one of the best languages (and ecosystems!) for making the development experience fun, high quality, and very efficient.

    As a Python programmer, this is a perspective that has never entirely made sense to me. Well, I should say it hasn’t made sense to me for the last few years, at least. Many people seem to have a held-over dichotomy in their heads where Python is expressive and enjoyable, and thus production code can be written quickly, whereas other languages are not expressive and not enjoyable, and thus code takes a long time to write. This might have been true in the past, when your performant options were all some variation on fighting with the compiler, diagnosing obscure compilation errors, and waiting through interminable builds, but none of those are actually hallmarks of development in a typed, performant language anymore (except for C++). Modern compilers are fast, and languages like Nim and D and Haskell are expressive and have powerful type inference. Generally speaking, we are now in an era where a type system is not just a necessary evil for a compiler that’s too stupid to know how to interpret any variable without being explicitly told; type systems are widely recognized to be programmer aids, helping in writing correct code as well as in performance. Without wading into the types-vs-tests debate, at the very least there’s a recognition that type systems, too, are for making the development experience high quality and very efficient.

    If I were being cynical, I would say that arguments like this sometimes feel like they’re really mostly about the “fun” part. The “programmer happiness” part is often conflated with programmer efficiency and expressiveness, but it isn’t actually the same thing. It can almost feel like a hostage job—“I better enjoy the language I’m writing in, otherwise I couldn’t possibly be productive in it!”

    1. 8

      I find typed/compiled languages more fun actually, even C++. Because it drives me absolutely fucking bonkers to run a program and get a really stupid type error, fix, re-run, and get another type error. The compiler just tells you all the type/syntax problems up front and you can fix all of them with minimal rage.

      1. 6

        yeah, mypy and typescript have been a boon to productivity. Especially strict null checks.

        The advantage of the weaker type systems is not having to play the “I have to make containers for all my thingies” games. Sometimes just a tuple is nice.

        Some of the existing typed languages don’t always follow the “if it’s conceptually simple, or if it’s easy for a computer to do, it should be simple in practice” rule. Especially when you’re crossing library boundaries and now spending a bunch of time marshalling/unmarshalling (sometimes necessary of course!) functionally equivalent stuff.

        Devil in the details of course
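
        A minimal sketch of the strict-null-checks point, with mypy-style annotations (the function names here are made up for illustration):

        ```python
        from typing import Optional

        def find_user(users: dict[str, int], name: str) -> Optional[int]:
            # Returns the user's id, or None if the name is absent.
            return users.get(name)

        def greet(users: dict[str, int], name: str) -> str:
            uid = find_user(users, name)
            # Under mypy --strict, doing arithmetic on `uid` without this
            # check is flagged up front: its type is Optional[int], not int.
            if uid is None:
                return f"no such user: {name}"
            return f"user {name} has id {uid}"

        print(greet({"ada": 1}, "ada"))  # user ada has id 1
        print(greet({"ada": 1}, "bob"))  # no such user: bob
        ```

        And a plain tuple still works fine where a full container class would be ceremony.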

      2. 6

        I think your confidence in compilers is perhaps misplaced. It’s not just a matter of speed—other factors, like memory usage and even the ability to compile at all, are relevant.

        none of those are actually hallmarks of development in a typed, performant language anymore (except for C++).

        I’d argue that the only widely-used performant typed language is C++, possibly Fortran (though Rust is getting close).

        The reason for this is that the farther you get into the problem domain (and the more comfortable it is for you), the farther you move away from actual silicon running instructions. It’s not a false dichotomy.

        The best-performing code will be written in assembly, but it’ll be terrible to deal with as a human (because we aren’t computers). The most comfortable code will be written in a high-level language (ultimately taken to extreme of “hey, grad student, write me a program to do X”), which is exactly not what runs on silicon.

        1. 4

          I think your confidence in compilers is perhaps misplaced.

          Now include Python on the same plot, and the axes will stretch so far that GHC will look indistinguishable from GCC.

          the farther you get into the problem domain (and the more comfortable it is for you), the farther you move away from actual silicon running instructions. It’s not a false dichotomy.

          It’s only a true dichotomy if the human is better at telling the silicon how to implement the problem than the compiler is, which gets less true every day. It’s already the case that GCC will often beat hand-coded assembly when trying to solve the same problem. And my experience is that on real business-sized problems with ordinary levels of programmer skill and limited time available to produce an optimised solution, Haskell will often comfortably outperform C++.

          The best-performing code will be written in assembly, but it’ll be terrible to deal with as a human (because we aren’t computers).

          These days assembly is a long way away from reflecting what the actual silicon does. To first order the only thing that matters for performance these days is how well you’re using the cache hierarchy, and that’s not visible in assembly code; minor tweaks to your assembly can lead to radically different performance characteristics.
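
          The access-pattern point can be sketched even from Python, though interpreter overhead hides most of the effect that a compiled version would show starkly: two loops computing the same sum, one walking each row’s storage in order, one striding across rows.

          ```python
          N = 500
          matrix = [[1] * N for _ in range(N)]

          def sum_row_major(m):
              # Visits each row's elements contiguously, in storage order.
              return sum(sum(row) for row in m)

          def sum_col_major(m):
              # Touches one element per row before moving on: a strided
              # pattern that, in a compiled language, thrashes the cache
              # on large matrices even though the instructions look similar.
              return sum(m[i][j] for j in range(len(m[0])) for i in range(len(m)))

          # Identical results; only the memory access pattern differs.
          assert sum_row_major(matrix) == sum_col_major(matrix) == N * N
          ```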

      3. 10

        Not to be mean, but this should be obvious to everybody.

        One of the reasons I stopped using Python and moved to Common Lisp is that most of the time CL runs ~15-20x faster than roughly equivalent Python code, and takes less time to develop.

        The great thing is that when I actually need performance I can use Common Lisp’s built-in disassembler and profiling tools to find hot spots and add type declarations and other optimizations to speed things up even more.
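
        For comparison, the closest Python analogue to that workflow (profile first, then optimize only the hot spots) is the standard library’s cProfile; the functions here are invented for the example:

        ```python
        import cProfile
        import io
        import pstats

        def slow_part(n):
            # Deliberately quadratic: the hot spot the profiler should surface.
            total = 0
            for i in range(n):
                for j in range(n):
                    total += i * j
            return total

        def fast_part(n):
            return sum(range(n))

        def workload():
            return slow_part(300) + fast_part(300)

        profiler = cProfile.Profile()
        profiler.runcall(workload)
        stream = io.StringIO()
        pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
        print(stream.getvalue())  # slow_part dominates the cumulative time
        ```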

        1. 4

          What changed in my reasoning?

          First of all, I’m working on other problems. Whereas I used to do a lot of work that was very easy to map to numpy operations (which are fast as they use compiled code), now I write a lot of code which is not straight numerics. And, then, if I have to write it in standard Python, it is slow as molasses. I don’t mean slower in the sense of “wait a couple of seconds”, I mean “wait several hours instead of 2 minutes.”

          So, basically, the author is solving problems Python isn’t good at. So, great - use another tool, Haskell or whatever. I do not see how this says anything useful or interesting about the language itself other than “Python is not optimized for solving numerical problems not addressed with numpy”
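
          The “slow as molasses” gap is easy to reproduce: the moment a loop can’t be pushed down into compiled code (numpy, or here just the C-implemented builtin `sum`), every iteration pays interpreter overhead. A rough sketch:

          ```python
          import timeit

          data = list(range(100_000))

          def py_sum(xs):
              # Pure-Python loop: the interpreter dispatches every add.
              total = 0
              for x in xs:
                  total += x
              return total

          assert py_sum(data) == sum(data)  # same answer either way

          loop_s = timeit.timeit(lambda: py_sum(data), number=20)
          builtin_s = timeit.timeit(lambda: sum(data), number=20)
          print(f"pure-Python loop: {loop_s:.3f}s, C-implemented sum: {builtin_s:.3f}s")
          ```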

          1. 2

            Is Haskell well-optimized for numerical problems?

            1. 8

              It’s OK. The mainline compiler doesn’t have vectorization by default yet. See https://ghc.haskell.org/trac/ghc/wiki/SIMD/Implementation/Status . For some classes of numerical algorithms, you can expect performance on par with (unvectorized) C. For algorithms that are inherently mutation heavy, I generally find that making Haskell exactly as efficient as C removes many of the benefits of using Haskell in the first place. That’s fine for library writers but not great for end users.

              Haskell’s main strength wrt speed is that you can compose high-level things and the abstraction overhead will be unreasonably small. If you’re churning through gigabytes of data per second, you can write Haskell that’s almost as fast as really well optimized C for a small fraction of the effort. However, I wouldn’t really describe the problems this works well on as “numerical”. When I think of “numerical” I usually think of lots of mutations on big matrices, for which I would rather use Numpy or something. Haskell’s good for many of the things numpy isn’t.
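
              A loose Python analogue of that composition point, using generators: stages chain together without materializing intermediate lists. (GHC’s fusion goes further and compiles the whole chain into a single loop, which generators do not.)

              ```python
              def squares(xs):
                  return (x * x for x in xs)

              def evens(xs):
                  return (x for x in xs if x % 2 == 0)

              def running_total(xs):
                  total = 0
                  for x in xs:
                      total += x
                      yield total

              # Three composed stages; elements flow through one at a time,
              # with no intermediate list built between stages.
              pipeline = running_total(evens(squares(range(10))))
              print(list(pipeline))  # [0, 4, 20, 56, 120]
              ```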

              1. 2

                I may be wrong but my understanding was that haskell does have library support for generalized stream fusion which gets very good performance without having to write particularly clever code.

                https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/haskell-beats-C.pdf?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fum%2Fpeople%2Fsimonpj%2Fpapers%2Fndp%2Fhaskell-beats-c.pdf

                1. 1

                  That’s exactly what I meant by

                  you can compose high-level things and the abstraction overhead will be unreasonably small

              2. 1

                Excellent question - I have zero idea. It’s compiled, right? So I’d think there’s more room for optimization there, but I dunno.

            2. 2

              Totally agree with the author. I was also using Python for too many things. Nowadays it’s Go for most things, Python for simple scripts, Node for some other stuff (running a test suite against websockets is much easier in Go than most other languages, also scraping is pretty awesome in Node)

              1. 2

                I was also using Python for too many things. Nowadays it’s Go for most things, Python for simple scripts, Node for some other stuff (running a test suite against websockets is much easier in Go than most other languages, also scraping is pretty awesome in Node)

                Python is more ergonomic for some CRUD stuff when you can just use Django; other than that I prefer Go.

              2. 1

                The majority of our code is in Python, but the performance-critical stuff is in C that we glue together using Python. So far so good.

                I’ve got it in the back of my mind that we’re going to migrate most of the stack to Go in the Mysterious Future, except for the parts that require very fine-grained control over allocation lifetimes and memory layout, which would still be in C.