1. 25

An old Usenet article, but it seems like not much has changed in this regard.

  2. 9

    writing some Clojure recently impressed me with a similar revelation. the language rewards you for breaking things into thin functions - function boundaries let you take advantage of destructuring, and the syntax becomes much easier to manage. then, once your code is broken into individually simple functions, reusability, abstraction, and refactoring opportunities fall out naturally!
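
    a rough sketch of the same idea in Python (the language used elsewhere in this thread; the little geometry example is just made up): each small function destructures its arguments right at the boundary, and the callers stay one-liners.

    import math

    def edge_length(p, q):
        # unpack the point pairs at the function boundary instead of indexing into them inline
        (x1, y1), (x2, y2) = p, q
        return math.hypot(x2 - x1, y2 - y1)

    def perimeter(points):
        # pair each point with its successor (wrapping around) and sum the edge lengths
        return sum(edge_length(p, q) for p, q in zip(points, points[1:] + points[:1]))

    print(perimeter([(0, 0), (3, 0), (3, 4)]))  # 12.0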

    1. 7

      I have played with making longest-common-subsequence scanners over Lisp forms - the theory being that the programmer could be (interactively) prompted to name these sequences and (effectively) compress their programs - but I’ve abandoned that effort in favour of simply making shorter programs: human beings can better see patterns and almost-patterns, and the latter are so much more common than the former.
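
      Purely to illustrate the scanning half of that abandoned idea, here is a minimal Python sketch (Python being the language used elsewhere in the thread): the textbook dynamic-programming LCS over two token lists, with invented example tokens.

      def lcs(a, b):
          # textbook dynamic-programming longest common subsequence over two token lists
          table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          for i, x in enumerate(a):
              for j, y in enumerate(b):
                  table[i + 1][j + 1] = table[i][j] + 1 if x == y else max(table[i][j + 1], table[i + 1][j])
          # walk the table backwards to recover one common subsequence
          out, i, j = [], len(a), len(b)
          while i and j:
              if a[i - 1] == b[j - 1]:
                  out.append(a[i - 1])
                  i, j = i - 1, j - 1
              elif table[i - 1][j] >= table[i][j - 1]:
                  i -= 1
              else:
                  j -= 1
          return out[::-1]

      print(lcs("map filter reduce print".split(), "map reduce sort print".split()))
      # ['map', 'reduce', 'print']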

      1. 5

        I really like this, and in every programming language I’ve used, shorter functions just LOOK neater. However, Python has had one bad influence on me: “Function calls are expensive.” That mantra hangs over my shoulder like the grim reaper; in every language I program in, its shiny scythe peeks out and makes my life miserable. It’s irrational for two reasons:

        1. Premature optimization
        2. Only valid for Python.

        But it is what it is. I have to unlearn it.

        1. [Comment removed by author]

          1. 3

            I don’t have a precise answer at hand, but the hand-wavy rule is that a function - especially in an inner loop - should do some heavy lifting in order to justify its existence. Here’s a very simple illustration:

            In [1]: def add(x, y):
               ...:     return x + y
               ...: 
            
            In [2]: def main():
               ...:     a = 0
               ...:     for x, y in zip(range(10000), range(10000)):
               ...:         a += x + y
               ...:     return a
               ...: 
            
            In [3]: main()
            Out[3]: 99990000
            
            In [4]: def main_func():
               ...:     a = 0
               ...:     for x, y in zip(range(10000), range(10000)):
               ...:         a += add(x, y)
               ...:     return a
               ...: 
            
            In [5]: main_func()
            Out[5]: 99990000
            
            In [6]: %timeit main()
            1000 loops, best of 3: 1.42 ms per loop
            
            In [7]: %timeit main_func()
            100 loops, best of 3: 2.67 ms per loop
            
            1. 3

              Which is why you should always profile and find where your hotspots are. Does the difference here (1.42 ms vs. 2.67 ms in a micro-benchmark) really matter? It’s impossible to say without profiling data from real-world workloads.
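
              A minimal sketch of what “profile first” can look like here, reusing main_func from the snippet above (cProfile is in the standard library):

              import cProfile

              # sort by cumulative time to see whether the add() calls actually dominate anything
              cProfile.run("main_func()", sort="cumulative")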

          2. 5

            “Function calls are expensive” is one of the reasons I’d really like to see PyPy displace CPython in wide usage. I’ve seen a variety of distasteful performance hacks in old code targeting CPython, all working around performance deficiencies that exist only in the CPython interpreter, and all of which become simply unnecessary under PyPy.

            For example, referring to global variables in CPython is noticeably slower than to local variables (by a couple hundred ns or so, enough to show up if it’s in an inner loop). As a result, I’ve seen production code like:

            def foo(x, _GLOBAL_TABLE=GLOBAL_TABLE):
                # the default argument binds the table once, making the lookup below a local lookup
                y = _GLOBAL_TABLE[x]
                # more code...
            

            instead of the straightforward:

            def foo(x):
                y = GLOBAL_TABLE[x]
                # more code...
            

            which accidentally pollutes the function’s interface just because the first version is slightly faster under CPython. Under PyPy, on the other hand, the two are indistinguishable once the JIT warms up. ❤
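
            If you want to see the gap on your own interpreter, here is a quick sketch with timeit (numbers will vary by CPython/PyPy version, and the table contents here are made up):

            import timeit

            GLOBAL_TABLE = {i: i * i for i in range(1000)}

            def foo_global(x):
                return GLOBAL_TABLE[x]  # global lookup on every call

            def foo_bound(x, _GLOBAL_TABLE=GLOBAL_TABLE):
                return _GLOBAL_TABLE[x]  # the default argument makes this a local lookup

            # one million calls each; compare per-call cost under CPython vs. PyPy
            print(timeit.timeit("foo_global(500)", globals=globals()))
            print(timeit.timeit("foo_bound(500)", globals=globals()))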