Last year I had a short MiniZinc gig. Mz is a constraint-solving language that, for its problem domain, is even more declarative than Haskell or Prolog. The client had a beautiful spec that captured his complex problem in just a few dozen lines of Mz.
The problem was performance. Solving for 26 params took a few seconds. Solving for 30 took ten minutes. Solving for 34 took days. He needed it to solve a 40 param input in under an hour.
I now kinda feel it’s easier to control the correctness of algorithmic code than the performance of declarative code.
There are declarative languages that work very well for certain problem spaces. SQL is mostly declarative; so are Mathematica and AMPL. If you apply these tools to problems for which they are not suited, they don’t work well. The problem with Haskell, as far as I can tell, is that instead of just admitting that it needed an imperative mode, the designers came up with an elaborate excuse and claimed to have not compromised.
Haskell has several “imperative modes”: IO, ST, and STM come to mind. All of them work superbly.
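For instance, here’s a minimal sketch (sumImperative is a made-up name) of ST giving you local mutable state behind a pure interface:

    import Control.Monad (forM_)
    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- Imperative-style summation over a mutable cell; runST guarantees
    -- the mutation cannot escape, so the function stays pure.
    sumImperative :: [Int] -> Int
    sumImperative xs = runST $ do
      acc <- newSTRef 0
      forM_ xs $ \x -> modifySTRef' acc (+ x)
      readSTRef acc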
I didn’t comment on how well they work, only on the layer of obfuscatory category theory nomenclature pasted on top of something relatively simple.
I think most people would agree with you, but would differ on where the “obfuscatory category theory nomenclature” part stops and the “relatively simple” part begins.
It’s always been a problem: if something is declarative, watch out for performance by default, and/or warn others about it. I’m not sure how fundamental the gap is, though. Too much optimization has been poured into imperative compilers, and even into verification with tools like Why3, for a fair comparison to be possible. It’s worth further research if that hasn’t already been done.
Declarative usually trades an intuitive understanding and control of performance for another black box that does the work for developers without knowing the context the code operates in. Performance and predictability tend to go down every time that happens.
Ironically, it should be easier to analyze and optimize declarative code. But since it’s more difficult to understand the compiler and/or runtime, and the performance of a declarative program depends on them, it takes more effort to speed such a program up.
On the flip side, adding optimization to a compiler or runtime improves performance for all programs suffering from the same slowdown.
SQL optimizers are hard to beat.
As someone who has occasionally played with Haskell for years and is finally considering using it for larger projects, this post concerns me. The complexity of monad stacks is a little scary, but I figure the type system makes it manageable. However, if it’s true that monad transformers end up being a source of memory leaks, then I’m back to thinking Haskell should only be used for larger, production-level projects by those knowledgeable about GHC internals and the edge-case language tricks needed to hack around inherent problems.
Can someone with experience comment on the author’s claims? They do seem weak when no specific examples of memory leaks (or abstraction leaks) are provided.
Do not use StateT or WriterT for a long-running computation. Using ReaderT Context IO is safe; you can stash an IORef or two in your Context.

Every custom Monad (or Applicative) should address a concern. For example, a web request handler should provide some means to log, dissect the request, query the domain model, and prepare the response. Clearly a case for ReaderT Env IO.

A form data parser should only access the form definition and the form data, and since it is short-lived, it can be simplified greatly with ReaderT Form stacked with StateT FormData. And so on.

https://www.fpcomplete.com/blog/2017/06/readert-design-pattern
Yes, it’s known that you should never use RWST, or any stack with a Writer monad in it, for long-running computations, because of space leaks.
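To illustrate the failure mode with a deliberately leaky sketch (not anyone’s production code): every tell below extends a single unevaluated mappend chain that is only forced at the very end, so heap usage grows with the number of steps:

    import Control.Monad (forM_)
    import Control.Monad.Trans.Writer.Lazy (WriterT, execWriterT, tell)
    import Data.Monoid (Sum (..))

    -- The accumulated Sum is never forced during the loop, so it
    -- builds up as a million-deep chain of suspended (<>) calls.
    leaky :: Monad m => WriterT (Sum Int) m ()
    leaky = forM_ [1 .. 1000000 :: Int] $ \i -> tell (Sum i)

    main :: IO ()
    main = execWriterT leaky >>= print . getSum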
The choice of big-code organisation styles in Haskell is large. You have: MTL-style transformer stacks, free and freer monads, the ReaderT pattern, the Handler Pattern, and more.
I used MTL-style to make a bot with long-living state and logs (using https://hackage.haskell.org/package/logging-effect). It has worked perfectly fine for many days (weeks?) without any space leak.
I’ve now started to move toward the simpler route of the Handler Pattern I pointed out. In the end, I tend to prefer that style: very slightly more manual, but more explicit.
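For anyone who hasn’t seen it, a tiny sketch of what I mean by the Handler Pattern (Logger and its fields are just illustrative names): a plain record of effectful functions passed around explicitly instead of a transformer layer:

    -- A handler is just a record of effectful functions.
    data Logger = Logger
      { logInfo  :: String -> IO ()
      , logError :: String -> IO ()
      }

    stdoutLogger :: Logger
    stdoutLogger = Logger
      { logInfo  = putStrLn . ("[info] "  ++)
      , logError = putStrLn . ("[error] " ++)
      }

    -- Code that needs logging takes the handler as an ordinary
    -- argument: more manual than MTL, but completely explicit.
    greet :: Logger -> String -> IO ()
    greet lg name = do
      logInfo lg ("greeting " ++ name)
      putStrLn ("hello, " ++ name)

    main :: IO ()
    main = greet stdoutLogger "world"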
The comments on this link are especially interesting.

One answer would be to say you leave the monadic stuff at a ReaderT Context IO to deal with the outside world, writing simple non-declarative code to tie together all your pure declarative machinery. Anything wrong with that?
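Schematically, that split might look like this (Config, plan, and execute are invented names; the pure function carries the logic, the ReaderT shell only sequences effects):

    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)

    newtype Config = Config { verbose :: Bool }

    -- Pure, declarative core: all the real logic, trivially testable.
    plan :: Int -> [String]
    plan n = ["step " ++ show k | k <- [1 .. n]]

    -- Thin ReaderT-over-IO shell: ties the pure machinery to the world.
    execute :: Int -> ReaderT Config IO ()
    execute n = do
      Config v <- ask
      mapM_ (\s -> liftIO (putStrLn (if v then "LOG: " ++ s else s))) (plan n)

    main :: IO ()
    main = runReaderT (execute 3) (Config True)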
I’m just learning monad transformers; I hope I understand the controversy about free and freer soon.
If you’re getting into architecture in Haskell I’d recommend “Three Layer Haskell Cake” by Matt Parsons too.