After years of dealing with laziness, I’m dubious that its benefits sufficiently outweigh its detriments, especially in mixed eager/lazy languages like Clojure.
It’s one of those things that seems elegant and handy at first (infinite sequences! don’t worry about a file’s size! only pay for what you use!), but it comes with hidden costs too (overhead! harder debugging! escaped dead references!).
Maybe a wholly-lazy language would change my mind.
Haskell code occasionally runs into a similar problem that gets called “thunk leak”. http://blog.ezyang.com/2011/05/anatomy-of-a-thunk-leak/
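To make that concrete, here’s a minimal sketch of the usual textbook case (not necessarily the example from the linked post; leaky and strict are just illustrative names). A lazy left fold never forces its accumulator, so it quietly piles up millions of suspended additions before anything gets evaluated:

    import Data.List (foldl')

    -- Lazy foldl: builds ~10 million nested (+) thunks before any addition
    -- actually runs, so memory balloons instead of staying constant.
    -- (Compiled without optimization; GHC's strictness analyser will often
    -- rescue this particular case under -O.)
    leaky :: Integer
    leaky = foldl (+) 0 [1 .. 10000000]

    -- The usual fix: force the accumulator at each step with foldl'.
    strict :: Integer
    strict = foldl' (+) 0 [1 .. 10000000]

    main :: IO ()
    main = print leaky >> print strict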
I think I’ve also heard it referred to as “time leak”, because when you finally force a thunk that has millions of others behind it, it often manifests as some trivial constant-time operation (like head someList) taking linear time to execute.
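That head someList scenario is easy to reproduce. Here’s a rough sketch (slowList is just an illustrative name): build a list with a left fold of (++), so the result is a left-nested chain of append thunks, then ask for its first element:

    -- Builds ((([] ++ [1]) ++ [2]) ++ ...) ++ [n]; nothing is evaluated
    -- until something demands the list.
    slowList :: Int -> [Int]
    slowList n = foldl (\acc x -> acc ++ [x]) [] [1 .. n]

    main :: IO ()
    -- head looks O(1), but producing the first element has to unwind a
    -- million suspended (++) applications, so the linear cost shows up
    -- here, far from where the thunks were created.
    main = print (head (slowList 1000000))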
Ran into this today at work. It’s a real time waster because you can’t get any useful information out of it: it just fails, and when you try to insert debugging print statements on the offending sequence, it just crashes with no useful output where you inserted the print.