As a relatively new Software Engineer coming from the world of Pure Math, I’ve found Code Comprehension to be the single most important metric for my personal satisfaction and productivity. These are my thoughts on this subject. I’d love to hear yours.
On the whole I’d agree with the article’s main point that comprehensibility is a very important quality and that, all else being equal, static typing tends to lead to greater comprehensibility than does dynamic typing. That said, a bit of armchair critique…
Firstly, it only seems to address the matter of comprehensibility at function boundaries. While that’s nice to have, I’d argue that intra-function comprehensibility is also quite important (perhaps even more so).
For reasoning about inter-function behavior, Haskell’s type system is certainly helpful (assuming you can decipher the type signatures, which can be quite a task at times). For other aspects of comprehensibility, though, Haskell-as-exemplar seems like an odd choice to me, though that may be more a matter of style than of the language itself. Admittedly I’m not terribly experienced with Haskell, but the authors of Haskell I’ve seen “in the wild” seem quite fond of sprinkling their code with lots of custom-defined operators – and as a reader, the meaning of >==> in the middle of a function isn’t always so obvious.
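To be fair, the type of an unfamiliar operator is often more informative than its symbol. A minimal sketch using the standard Kleisli-composition operator (>=>) from Control.Monad (the helper names halve and quarter are hypothetical, my own):

```haskell
import Control.Monad ((>=>))

-- (>=>) has type (Monad m) => (a -> m b) -> (b -> m c) -> (a -> m c):
-- the symbol alone says little, but the type tells you it chains
-- monadic functions left to right.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- quarter succeeds only if both halvings succeed
quarter :: Int -> Maybe Int
quarter = halve >=> halve

main :: IO ()
main = print (quarter 8, quarter 6)  -- prints (Just 2,Nothing)
```

So even with symbolic operators, the reader can fall back on the signature rather than the name.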
I think the assertion that
the imperative nature of these languages requires you to perform side-effects in an uncontrolled manner
is rather overstated. It is certainly possible to write pure, non-side-effecting functions in $IMPERATIVELANG if you want to; I’d say it’s probably even common.
I can get very close to understanding everything about a method simply by looking at its type signature.
strikes me as unrealistic beyond very simple functions – or even with plenty of simple ones, for that matter. Looking only at the type signature, what does a function of type (Numeric a) => a -> a -> a do? And (to be somewhat pedantic) if it involves division somewhere, the “divisor != 0” constraint does seem like a bit of a “magic dependency…not specified in the type signature”.
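To make the ambiguity concrete, here is a minimal sketch of two functions sharing an identical signature but with entirely different behavior (using the standard Num class in place of the hypothetical Numeric):

```haskell
-- Two functions with the identical signature (Num a) => a -> a -> a,
-- yet different results -- the signature alone cannot distinguish them.
add :: Num a => a -> a -> a
add x y = x + y

mul :: Num a => a -> a -> a
mul x y = x * y

main :: IO ()
main = print (add 2 3, mul 2 3)  -- prints (5,6)
```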
(And as a minor nitpick, the practice of capitalizing certain Semi-Arbitrary bits of Terminology always strikes me as coming off a little pompous.)
I appreciate the feedback. I am glad to be challenged on these points as they help me think about the problem!
I would argue that Haskell’s purity helps the most when looking outside the function boundaries. Specifically, because each function provides exactly the context it requires within the type signature, tracing through various functions becomes much easier. Immutability also helps here. When you throw concurrency into the mix, the logic becomes difficult, but that will be the case in any language. And, again, the immutability and purity of Haskell will help in all of these cases.
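A small sketch of what “the context is in the signature” means (the function names here are my own, purely illustrative):

```haskell
-- A pure function: its signature guarantees it cannot read a file,
-- touch a global, or log -- tracing through it is entirely local.
pureStep :: Int -> Int
pureStep n = n + 1

-- If a function needs effects, the type says so (here, IO):
effectStep :: Int -> IO Int
effectStep n = do
  putStrLn ("step " ++ show n)  -- the effect is visible in the type
  pure (n + 1)

main :: IO ()
main = do
  r <- effectStep (pureStep 1)
  print r
```

When tracing a call chain, any function without IO (or another effect type) in its signature can be understood in isolation.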
At a more holistic level, what I’m arguing is that by restricting the set of possible actions via static typing and purity, you restrict the set of possible outcomes, meaning the human mind does not need to comprehend as many possibilities. And this applies beyond the function boundary.
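Parametricity is the extreme case of this argument: some signatures leave so few possible actions that the behavior is essentially pinned down. A sketch (ignoring undefined/bottom):

```haskell
-- A total function of type a -> a can only be the identity: it knows
-- nothing about `a`, so it can neither inspect nor fabricate a value.
mystery :: a -> a
mystery x = x

-- Likewise, a -> b -> a must return its first argument.
konst :: a -> b -> a
konst x _ = x

main :: IO ()
main = print (mystery (5 :: Int), konst 'a' "ignored")
```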
Agreed. However, you can ignore the symbolic operator and just look at the type. We can argue day and night about symbolic operators versus names, and I will agree that names are more “comprehensible,” but I’d also counter that you rarely “take a function at named value” (if you will).
I disagree here. I write Java at work often (we use Apache Storm), and writing “pure, immutable” code is made very difficult. The lack of Algebraic Data Types means you have to lock basic data into classes with getters and setters. It means updating data via structural sharing is incredibly difficult, so you end up just mutating things. The lack of lazy evaluation means the ordering of my imperative statements matters, which means the order of my effects matters. I could go on, but even doing trivial things that don’t require sophisticated type systems is not easy. Just read that Concurrent Java book!
Certainly, favouring immutable data structures is prevalent, but lacking the simplicity of Algebraic Data Types and combinators makes anything beyond “immutability” difficult.
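For comparison, a minimal sketch of the algebraic-data-type approach being described (the Shape type is a hypothetical example of mine):

```haskell
-- A small algebraic data type: immutable by default, pattern-matchable,
-- no getters or setters needed.
data Shape = Circle Double | Rect Double Double
  deriving (Show, Eq)

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- A non-mutating "update": returns a new value, the original untouched.
scale :: Double -> Shape -> Shape
scale k (Circle r) = Circle (k * r)
scale k (Rect w h) = Rect (k * w) (k * h)

main :: IO ()
main = print (area (Rect 2 3), scale 2 (Circle 1))
```

In Java, the same thing requires a class hierarchy (or a visitor) plus hand-written equals/hashCode, which is where the friction comes from.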
Here is what I can tell you about (Numeric a) => a -> a -> a: it is a pure function that takes two values of some numeric type and returns a value of that same type, performing no side effects and mutating neither argument.
Contrast this with the method def numer_stuff(x,y) in Ruby. We are hopeless. Now, Numeric numberStuff(Numeric a1, Numeric a2) in Java gets us most of the way there, but this method may mutate a1, mutate a2, or perform some side effect (logging, etc.).
It’s not a big difference, but when you are tracing through 10+ functions/methods, those little assumptions add up!
I think this is a product of me writing this over the course of a couple of days. But, I agree.
Ultimately, I think the case against Dynamic Typing is pretty clear w.r.t. comprehension. However, the case against non-pure languages is more subtle, and is one of those “death by a thousand cuts.” There is no single, clear victory with Haskell or languages like it. However, the experience of understanding and modifying other people’s code becomes much less stressful when several little things disappear. Or, at least this has been my experience, which may be a product of my unique background (mathematics) and immaturity as an engineer.
Thanks for the discussion!
I wonder how any Haskell would help to spot the difference between
however, good comments and variable/function naming would help.