I really liked how this article explains lazy evaluation step by step, first with graphical visualizations of the reduction graph and later by introducing the textual representation. I went through Real World Haskell, which also has an explanation of lazy evaluation, but for me it remained unclear until this article.
Then, at the end of the article, it says: “In fact, I would go so far as to say that with lazy evaluation, it is no longer feasible to trace evaluation in detail, except for very simple examples. Thus, analyzing the space usage of a Haskell program can be hard. My advice would be to only act when your program really does have a significant space leak, in which case I recommend profiling tools to find out what’s going on.”
For me this is the ugly area of Haskell programming. On one hand, Haskell is often promoted with “when it compiles, it mostly works”, implying that you hardly need testing, or at least not as extensively as in other languages. On the other hand, it is really difficult to find out the space usage of a Haskell program without extensive testing. To me this is as serious as memory leaks in C, but it is often dismissed, or should I say waved away, by the Haskell community. I really love Haskell, and I think we need better tools/methods to deterministically predict the space usage of a Haskell program, without heavy profiling.
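To make the problem concrete, here is the textbook illustration of why space usage is invisible in the source (my own sketch, not from the article): two folds that are textually almost identical, where the lazy one builds a million-thunk chain on the heap and the strict one runs in constant space.

```haskell
import Data.List (foldl')

-- Lazy foldl accumulates an unevaluated thunk
-- (((0 + 1) + 2) + 3) + ... that only collapses when the
-- final result is demanded, so heap usage grows linearly
-- with the input list: a space leak.
leaky :: Integer
leaky = foldl (+) 0 [1 .. 1000000]

-- foldl' forces the accumulator at every step, so the sum
-- runs in constant space. Nothing in the types distinguishes
-- the two versions.
strict :: Integer
strict = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = do
  print strict
  print leaky
```

Both print 500000500000; only profiling (or knowing the idiom) reveals that one of them leaks.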
Totally agree about deterministically predicting space usage. I’m very excited about Idris and its research toward being able to guarantee space usage via the type system.
However, while Idris is still brewing and being experimented with, I very much hope for tooling to be developed in the area of space usage. If I can understand it myself, perhaps I’ll be able to contribute some tools!