1. 12

  2. 7

    Laziness is neat, but just not worth it. It makes debugging harder and makes reasoning about code harder. It was the one change from Python 2 to 3 that I truly hate. I wish there were an eager-evaluating Haskell. At least in Haskell, thanks to monadic IO, laziness is tolerable and doesn’t leave you with tricky bugs (like trying to consume an iterator twice in Python).

    1. 6

      I had a much longer reply written out but my browser crashed towards the end (get your shit together, Apple) so here’s the abridged version:

      • Lazy debugging is only harder if your debugging approach is “printfs everywhere”. Haskell does actually allow this, but strongly discourages it to great societal benefit.

      • Laziness by default forced Haskellers to never have used the strict-sequencing-as-IO hack that strict functional languages mostly fell victim to, again to great societal benefit. The result is code that’s almost always more referentially transparent, leading to vastly easier testing, easier composition, and fewer bugs in the first place.

      • It’s impossible to appreciate laziness if your primary exposure to it is the piecemeal, inconsistent, and opaque laziness sprinkled in a few places in python3.

      • You almost never need IO to deal with laziness and its effects. The fact that you are conflating the two suggests that you may have a bit of a wrong idea about how laziness works in practice.

      • Haskell has the Strict language extension, which turns on strictness by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.
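      For the curious, a minimal sketch of what that looks like (module and function names made up): with the Strict pragma this module’s bindings become strict by default, and laziness is what you opt back into with ~.

      {-# LANGUAGE Strict #-}
      module StrictSketch where  -- hypothetical module, compiled strict-by-default

      -- Under Strict the argument pattern gets an implicit bang, so `expensive`
      -- is forced when the call is evaluated, even when useIt is False.
      pick :: Bool -> Int -> Int
      pick useIt expensive = if useIt then expensive else 0

      -- A ~ pattern opts a single binding back into laziness: squared is only
      -- computed if the True branch is taken.
      pickLazy :: Bool -> Int -> Int
      pickLazy useIt x = let ~squared = x * x in if useIt then squared else 0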

      1. 1

        Haskell has the Strict language extension, which turns on strictness by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.

        I am not quite sure whether this is really evidence. I actually never tried switching it on. I wonder whether that option plays nicely with existing libraries; I guess not many are tested to make sure they don’t depend on lazy evaluation for efficiency. If you use Haskell and Hackage, I guess you’re bound to roll with the default.

        1. 2

          It works on a per-module basis. All your modules will be compiled with strict semantics, and any libraries will be compiled with the semantics they chose.

      2. 3

        Idris has strict evaluation. It also has dependent types, which are amazing, but strict evaluation is a pretty good perk too.

        1. 2

          I thought there were annotations for strictness in Haskell.

          1. 3

            Yes, but I consider it to be the wrong default. I’d prefer having an annotation for lazy evaluation. I just remember too many cases where I have been bitten by lazy evaluation behaviour. It makes code so much more complicated to reason about.
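            For reference, a small sketch of the strictness annotations that exist today under the default-lazy semantics (the type and function names are made up): strict data fields, bang patterns, and ($!) each force evaluation at a specific point.

            {-# LANGUAGE BangPatterns #-}

            -- A strict field: the Int is forced when a Counter value is evaluated,
            -- so it can never hide a growing thunk.
            data Counter = Counter !Int

            -- A bang pattern forces the accumulator on every step, avoiding the
            -- classic foldl-style thunk chain.
            sumStrict :: [Int] -> Int
            sumStrict = go 0
              where
                go !acc []     = acc
                go !acc (x:xs) = go (acc + x) xs

            -- ($!) evaluates its argument to WHNF before applying the function.
            forceThen :: Int -> Int
            forceThen n = succ $! n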

            1. 1

              Do you happen to remember more detail? I enjoy writing Haskell, but I don’t have a strong opinion on laziness. I’ve seen some benefits and rarely been bitten, so I’d like to know more.

              1. 1

                I only have vague memories, to be honest. Pretty sure some were errors due to non-total functions, which I then started to avoid by using a prelude that only exposes total ones. But when these occurred, it was hard to find the exact code path that provoked them. Or rather: harder than it should be.
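                As a made-up illustration of what I mean: with a non-total function like head, the exception isn’t raised where the bad call is written, but wherever the thunk finally gets forced, which can be arbitrarily far away.

                import qualified Data.Map as Map

                -- The partial call to head is written here, but under lazy
                -- evaluation nothing blows up yet; we only build a thunk.
                lookupFirst :: Map.Map String [Int] -> String -> Int
                lookupFirst m k = head (Map.findWithDefault [] k m)

                main :: IO ()
                main = do
                  let v = lookupFirst Map.empty "missing"  -- still fine: v is a thunk
                  putStrLn "lots of unrelated work..."
                  print v  -- "Prelude.head: empty list" only shows up here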

                Then, on the tooling side, I started using Intero (or vim-intero); see https://github.com/commercialhaskell/intero/issues/84#issuecomment-353744900. I’m fairly certain that problem is hard to debug because of laziness. In that thread a few experienced Haskell devs report the same issue, so I’d consider this evidence that laziness is not only a problem for beginners who haven’t yet understood Haskell.

                PS: As a side remark, although I enjoy Haskell, it is kind of tiring that the Haskell community seems to conveniently shift between “Anyone can understand monads and write Haskell” and “If it doesn’t work for you, you aren’t experienced enough”.

          2. 2

            Eager-evaluating Haskell? At a high level, OCaml is (more or less) an example of that.

            It hits a sweet spot between high abstraction and high mechanical sympathy. That’s a big reason why OCaml has quite good performance despite a relatively simple optimizing compiler. As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

            Haskell has paid a high price for default laziness.

            1. 2

              As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

              That was used to good effect by Esterel when they did source-to-object code verification of their code generator for aerospace. I can’t find that paper right now for some reason. I did find this one on the overall project.

              1. 1

                Yes; however, I would like to have typeclasses and monads, I guess. That’s not OCaml’s playing field.

                1. 1

                  OCaml should Someday™ get modular implicits, which should provide some of the same niceties as typeclasses.

                  1. 1

                    OCaml has monads, so I’m really not sure what you mean by this. Typeclasses are a big convenience, but as F# has shown, they are by no means required for statically typed functional programming. You can get close by abusing a language feature or two, but you’re better off just using existing language features to accomplish the same end that typeclasses provide. I do think F# is working on adding typeclasses, and I think the struggle is of course interoperability with .NET, but here’s an abundantly long GitHub issue on the topic. https://github.com/fsharp/fslang-suggestions/issues/243

                  2. 1

                    F#, an open source (MIT) sister language, is currently beating or matching OCaml in the just-for-fun benchmarks game :). Admittedly, that’s almost entirely due to the ease of parallelism in F#.
                    https://benchmarksgame.alioth.debian.org/u64q/fsharp.html

                  3. 1

                    Doesn’t lazy IO make your program even more inscrutable?

                    1. 1

                      Well, Haskell’s type system makes you aware of many side effects, so it is a better situation than in, for example, Python.

                      Again, I still prefer eager evaluation as a default, and lazy evaluation as an opt-in.
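                      To make the lazy-IO concern above concrete, a classic pitfall sketch (file name made up): hGetContents returns the file contents as a lazily-read string, so if the handle is closed before that string is demanded, you typically get truncated (often empty) output.

                      import System.IO

                      main :: IO ()
                      main = do
                        -- hGetContents reads lazily: it returns immediately with a thunk.
                        contents <- withFile "example.txt" ReadMode hGetContents
                        -- withFile has already closed the handle by now, so forcing the
                        -- thunk here typically yields truncated (often empty) contents.
                        putStrLn contents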

                    2. 1

                      PureScript is very close to what you want then - it’s basically “Haskell with fewer warts, and also strict” - strict mainly so that it can output clean JavaScript without a runtime.

                    3. 4

                      Good article. The way I like to describe the modularity benefit is that laziness allows you to make something general and impose the edges later. That general thing that you make is often easier to understand because it doesn’t have special cases.

                      A simple example: When I was working on a vi clone in Haskell I made a screen buffer data structure that was simply my line buffer (a list of lines) with an infinite list of tildes appended to it. From that point on, rendering was easy no matter what the size of the window was in relation to the size of the original line buffer. You just did a take of the number of lines you needed for the window. If I hadn’t done that, the rendering code would’ve mixed buffer size, window size, and tilde padding concerns.
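                      A rough sketch of that idea (names invented, not the actual code from my vi clone): pad the buffer with an infinite list of tildes once, and rendering just takes however many lines the window needs.

                      -- Pad the line buffer with an endless supply of tilde lines; laziness
                      -- means we only ever materialise the lines we actually render.
                      screenLines :: [String] -> [String]
                      screenLines bufferLines = bufferLines ++ repeat "~"

                      -- Rendering no longer cares whether the window is shorter or taller
                      -- than the buffer: it just takes windowHeight lines.
                      render :: Int -> [String] -> [String]
                      render windowHeight bufferLines = take windowHeight (screenLines bufferLines)

                      -- render 5 ["hello", "world"] == ["hello","world","~","~","~"]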

                      1. 3

                        “It would be difficult to reproduce this in a strict language because how could you write a function that produces all natural numbers without looping forever?”

                        Of course there are multiple ways of doing this in strict languages. The expressiveness of the solution depends on the capabilities of the specific language. Whether it’s better to be strict by default or lazy by default seems to be a matter of preference.
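                        For instance, here’s a sketch (written in Haskell syntax, but using only the machinery a strict language has) of the usual encoding: hide each tail behind a function, so constructing “all the naturals” does no work until someone asks for an element.

                        -- An explicit on-demand stream, the way a strict language would encode it.
                        data Stream a = Cons a (() -> Stream a)

                        naturalsFrom :: Integer -> Stream Integer
                        naturalsFrom n = Cons n (\() -> naturalsFrom (n + 1))

                        -- Force only as many elements as the caller needs.
                        takeS :: Int -> Stream a -> [a]
                        takeS n (Cons x rest)
                          | n <= 0    = []
                          | otherwise = x : takeS (n - 1) (rest ())

                        -- takeS 5 (naturalsFrom 0) == [0,1,2,3,4]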

                        1. 3

                          Unless you need to know time or space behavior ahead of time. Examples would be the real-time segment or suppressing covert channels. So far, strict and low-level languages seem to be inherently easier to check for that.

                          1. 2

                            Yes, agreed, there are certain situations like these that would lean more toward strict evaluation. The general case is closer to a toss-up.

                          2. 3

                            I agree it’s possible in a strict language, but I encourage you to keep reading the article. Java has Iterator, Scala has Stream, Python has generators, etc. The point I make in the article is that approximating a lazy list with an Iterator (or Stream, or generator) is less natural and incurs its own complexities.

                            1. 4

                              I disagree with the statement I originally quoted, above. I don’t believe anything in the article justifies that claim. As I wrote, in some strict languages the expressiveness may be cumbersome. But that is due to the specifics of those languages, not due to strict evaluation per se.

                              For example any language with a reasonable macro capability is going to accommodate explicit lazy evaluation pretty well. See for example…

                              https://srfi.schemers.org/srfi-45/srfi-45.html

                          3. 2

                            As a counterpoint, there was recently an article on Lobsters about doing Advent of Code in Haskell. In it, @rbaron mentioned several challenges caused by lazy evaluation that he had to work around.

                            1. 5

                              In my experience, that’s culture rather than difficulty. Most programmers expect certain behavior; we’re just accustomed to things being a certain way. Because of that, learning to code in a lazy language is a good brain-stretching exercise.

                              1. 1

                                I beg to differ. There may be some cases where an experienced Haskeller will indeed not pay a penalty for Haskell being lazy by default (compared to, say, OCaml). However, I have seen my fair share of Haskell programs with space leaks and other issues that weren’t fixed because the root cause was never found.

                            2. 2

                              At around 4:30pm this Friday afternoon, before I left work, I decided to improve our scheduler so that a certain number of activities would be performed over a given time window. Without writing much new code, I achieved it with the following pseudocode:

                              timePassed now =
                              let issueImportantActions = ... -- a possibly very long list of important things to do
                                  issueUnimportantActions = ... -- a possibly very long list of less important things to do
                                  numOfRecentActions = length $ takeWhile (> now - timeWindow) [everything that ever happened]
                               in sequence 
                                    $ take (allowedActionsPerWindow - numOfRecentActions)
                                    $ filter canDoNow
                                    $ issueImportantActions ++ issueUnimportantActions
                              

                              I got the code to compile and committed it without trying it, then left for the weekend on time. I got curious and checked: it works flawlessly on my personal development deployment. Try to see how many places I take advantage of laziness!