Summary. The linked text is just a rant: it doesn’t address real practical problems in the Haskell language and it complains about the wrong subjects, in my opinion.
The linked text infuriates me, because it is uninformative and needlessly spreads fear, uncertainty and doubt, with no supporting data or references.
It can be read as a rant, at most, and as so it should be tagged, in my opinion.
But I want to address what I find wrong in this text.
IO problem. Besides the fact that main = putStrLn "Hello, World!" is a simpler “Hello, World!” program than its C counterpart, what I learned is that Haskell forces the programmer to think about, design and implement the data transformation first, and only then process its input and output. In Java-esque terms, one takes care of the business logic first, then moves to the user-interaction part (which will be encapsulated in the IO monad).
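That separation of concerns can be sketched in a few lines (a hedged toy example; the name `shout` is hypothetical):

```haskell
import Data.Char (toUpper)

-- The "business logic": a pure transformation with no IO in its type.
shout :: String -> String
shout = map toUpper

-- User interaction is confined to main, inside the IO monad.
main :: IO ()
main = interact shout
```

The pure part can be designed, tested and reasoned about on its own; main is just glue.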
When I wrote a compiler, interpreter and simulator (for a metabolic language) in Haskell, I learned that lesson smoothly because most of my design models were already mathematical, so I was (theoretically) mainly focused on the “business logic”. Later, when I had to add the HTML5 user interface, it was actually easy, and I could never feel this IO problem with my rustic MVC design.
Monoid abstraction. I am not kidding: 5th-grade children in my homeland learn what a monoid is, so I think any other programmer can as well. Of course they don’t learn the name “monoid” (or “group”, in the mathematical sense), but they do learn all the rest of the nomenclature: set, binary operation, identity element, associativity. And they take an exam on the subject, in which they have to recognize which pairs of sets and binary operations form a monoid and which do not.
Do I come from a genius land? Certainly not!
The real problem is not how difficult it is to understand the monoid abstraction, but the connotations it carries after all the social-media complaining about Haskell, monads and the sordid definition “a monad is just a monoid in the category of endofunctors”. If monoid were presented as the Clock Arithmetic design pattern in a Gang of Four book, I think most people would accept it. (Well, most people accept the monoidal properties of Promise and Future in Java without naming them monoid or even monad.)
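To make the “Clock Arithmetic” framing concrete, here is a hedged sketch of hours on a 12-hour clock as a Monoid (the `Clock` type is made up for illustration, not from any library):

```haskell
-- Hours on a 12-hour clock, combined by addition modulo 12.
newtype Clock = Clock Int deriving (Eq, Show)

instance Semigroup Clock where
  Clock a <> Clock b = Clock ((a + b) `mod` 12)

instance Monoid Clock where
  mempty = Clock 0  -- adding zero hours is the identity element
```

9 o’clock plus 5 hours wraps around: `Clock 9 <> Clock 5 == Clock 2`. That is all the structure a monoid demands.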
My opinion is that a lot of programmers are “mentally lazy” and want an easy, served-on-a-plate solution to their problem instead of reflecting a little on concepts. Design patterns are great examples of obvious instances of applied object-oriented concepts, from which most people can just “go to the shelf and take the canned solution” if they don’t want to design their own architecture using the basic set of object-oriented concepts.
“Not made for this world.” I think the correct term would be “ahead of its time”. The syntax is not ALGOL-like but mathematical; the language encourages reasoning about data and, consequently, the use of types; it encourages determinism (via pure functions) while, on the other hand, making non-determinism easy (off the top of my head, Applicative and lists are nice to play around with here); there is the ease of polymorphism (compare it with C++ templates) and type classes; lazy evaluation; several styles of concurrency and parallelism; and so on…
The list of features that make Haskell ahead of its time (of course it is not the only language in this group) is incredible!
But when we are stuck with strict-subset languages of the 1970s (yes, most of our dear programming languages are a strict subset of ALGOL 68), then we tend to think any breakthrough language is not made for this world.
Uff! I got that out of my system.
However, I want to briefly point out some of my practical criticisms towards Haskell, which I (as a not-very-experienced Haskell programmer) think the article should have addressed.
Memory consumption. Do you remember the compiler/interpreter/simulator I mentioned before? Yeah… To translate a couple of rules into a primitive assembly, my application consumed 5 GB of working memory. And it is not because I am a lousy programmer (I could be, though), but because the “standard way of doing it” (no advanced Haskell trickery) tends to produce memory-hungry programs, in my experience.
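For what it’s worth, one classic source of this behaviour (a hedged, minimal illustration, not my original code) is that lazy left folds accumulate unevaluated thunks:

```haskell
import Data.List (foldl')

-- foldl (+) 0 xs builds a chain of (((0+1)+2)+...) thunks before
-- anything is evaluated; on a large list that alone can exhaust memory.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step, running in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```

Both compute the same result; only the strict one does it without a mountain of thunks.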
Debugging. I am a fan of debugging. I use GDB almost daily, both to understand call flows and to debug buggy applications.
I still don’t know how to do the same in Haskell, although I haven’t dived into it.
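For the record, the closest thing I have found to printf-style debugging is Debug.Trace, which lets pure code emit messages to stderr while you explore a program’s behaviour (a sketch; GHCi also has :break and :step, which I haven’t explored):

```haskell
import Debug.Trace (trace)

-- trace prints its first argument (to stderr) when the expression
-- is evaluated, then returns the second argument unchanged.
fib :: Int -> Int
fib n
  | n < 2     = trace ("fib " ++ show n) n
  | otherwise = fib (n - 1) + fib (n - 2)
```

Because of laziness, the messages appear in evaluation order, which is itself instructive.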
Performance. Haskell is fairly good at it, I must admit: it is comparable to Java and Go. But I think it has the potential to get closer to Rust, so I would have some chance of convincing my employer to start using it.
Retrievability and documentation. The Haskell Wiki is a gem, in my opinion: almost any subject concerning programming languages (a deliberately exaggerated claim), particularly Haskell, is in there. The problem is finding it (when one wants or needs it).
Books? The landscape has been changing (thank you, @bitemyapp), but compared to Rust (with its impressive free book, thanks @steveklabnik), for instance, it is light-years behind.
And finding the correct (maintained) library for common tasks is not easy, but there are initiatives on the way.
Reputation. “The Haskell Pyramid” is a real problem, in my opinion. Maybe one of the biggest problems in Haskell.
It is obvious that Haskell isn’t perfect and lacks tools for “practical, industrial application”. Unfortunately, those shortcomings were not addressed in the single relevant paragraph (out of eight) in that text.
P.S.: it seems that <br/> doesn’t work in item lists in this Markdown.
P.P.S.: I didn’t mean to say the Rust book(s) are better than “Haskell Programming from First Principles” (@bitemyapp, I am a fan of your book and I cannot recommend it enough). What I meant is that the (series of) Rust book(s) are part of the Rust documentation available (for free) on Rust’s website, and it is easy to find it, read it and understand Rust from it. In Haskell, the closest equivalents are the Haskell Wiki and the Haskell Wikibook, which aren’t as well maintained as the Rust book.
As far as memory, performance, and predictability go, you might find Habit interesting. I think it stalled after its creators got poached for bigger projects. It’s still there to give ideas and inspiration to whoever wants to try it next.
Thank you for your link: I will take a detailed look at it.
From the language’s report, I think you are spot-on:
This report presents a preliminary design for the programming language Habit, a dialect of Haskell [14] that supports the development of high quality systems software.
I guess Habit has a major focus on these features, but @Leonidas has nicely answered me in another thread, suggesting that OCaml also addresses these features better than Haskell. What do you think?
Haskell tries to be purely functional as much as possible. OCaml is more flexible, with several supported paradigms. It might be easier to do imperative stuff for just that reason. Haskell’s consistency might have advantages, too. It probably depends on your needs.
I know Jane Street is all over OCaml, including improving it. One weakness of OCaml that I know of is concurrency; Haskell has better options. OCaml usage seems to lean more toward mainstream programming, maybe making it easier to learn, whereas Haskell gets into mathematical concepts that might give you more new ways of thinking on top of FP in general.
That’s a simple demonstration of what is so annoying about the Haskell community. Do you honestly believe that Algol was created without mathematics? Do you think e.g. Alan Perlis and John McCarthy, two members of the ALGOL committee, lacked the deep mathematical knowledge required to understand monoids? Knuth doesn’t know or use mathematics so the poor fellow had to hack things up in MIX assembler? What about Alan Turing? There is nothing more imperative and side effect plagued than a state machine with an infinite storage unit attached. Poor Dennis Ritchie, with his Ph.D. on the Grzegorczyk hierarchy probably didn’t understand the definition of a function so he had to hack up something like C? Is that your theory? People who are not on the Haskell bus are just too lazy and innumerate to appreciate the depths of algebra needed to understand what a monoid is?
Here, I have a question for you. Function composition is associative and the identity map is trivial, so why do you need to add a monoidal structure to Haskell if it’s about “pure” functions? The answer, to me, is that Haskell is just embarrassed by its statefulness. There is an inherent notion of state in the structure of the program text even without all the monad nonsense, because it’s not at all convenient to write a program as a single expression using function composition as the only connective, e.g. as a single lambda-calculus expression. Of course, contrary to ideology, this is also true of most mathematical texts, even the most non-algorithmic. When you have the definition “let G be a non-trivial cyclic group” and then later “let G be the monster group”, those indicate a change of state. x is not always the same value in a mathematics textbook! Try explaining Euler’s method without some notion of step. So “pure” Haskell is, of course, stateful, but shamefaced about it. And then you realize that, oh shit, the dim ALGOL committee maybe was not so stupid, and to do complex things we need to be able to specify both state and evaluation order. So you want to add more complicated imperative structures but still retain the illusion of being in a pure function space: voilà, use the endofunctor to smuggle in state variables and a fancy composition rule to compose associatively while properly connecting all the state variables. Ta da!
I usually abstain from this kind of comment, but I think it is important to clarify some things.
I think your comment is unnecessarily off-tone and pretentious.
From a single quote from my comment, you have inferred a lot about the Haskell community and myself.
When I stated that “[t]he syntax is not ALGOL-like (but mathematical)”, I meant it is based on mathematical notation and not on natural-language prose. It is not exclusive to Haskell: OCaml, SML and others have the same kind of syntax. And that is deliberate.
I am aware of who the people on the ALGOL committee were. Most of them were mathematicians (van Wijngaarden, Dijkstra and Hoare are some examples I know by heart), so I am personally sure they were aware of what a monoid is, particularly when 11-year-old children are.
By the way, the members I cited above introduced more mathematical formalism into the ALGOLs they designed or implemented: W-grammars, recursion and range checks (via Hoare logic), respectively.
But it doesn’t exclude the fact that ALGOL has a syntax similar to prose instead of mathematical notation.
I don’t think that “[p]eople who are not on the Haskell bus are just too lazy and innumerate to appreciate the depths of algebra needed to understand what a monoid is”, but I do think that most programmers are mentally lazy.
In some fields somehow related to computing, e.g. electronic engineering, knowing advanced calculus is a prerequisite for making even the simplest products (for instance, using the Laplace transform to solve integro-differential equations).
It is not that they really need to know it every day, but since it is the basic concept behind their solutions, they learn and internalise it.
It might seem different in programming, but it is not. Associative operations are widespread in algorithms, and knowing what a monoid is helps enormously in composing better solutions. And it is not a difficult concept, given that 11-year-old students learn it!
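Concretely (a hedged sketch; `chunks` and `chunkedConcat` are helpers I made up), associativity means a monoidal fold can be regrouped, e.g. split into chunks that could be processed independently, without changing the result:

```haskell
-- Split a list into chunks of size k.
chunks :: Int -> [a] -> [[a]]
chunks _ [] = []
chunks k xs = let (a, b) = splitAt k xs in a : chunks k b

-- Folding chunk-by-chunk gives the same answer as folding the whole
-- list at once; associativity and the identity element make this legal.
chunkedConcat :: Monoid m => Int -> [m] -> m
chunkedConcat k = mconcat . map mconcat . chunks k
```

This regrouping property is exactly what lets map-reduce-style computations parallelise safely.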
But I see a lot of programmers complaining about learning a few mathematical concepts (actually, it is mostly a matter of learning the names), while happily picking up the shiny new framework every six months.
To me, personally, it is just shallow.
“Why do you need to add a monoidal structuring to Haskell if it’s about “pure” functions”?
I am not sure whether I am the right person to answer this, but the Monoid type class is there for convenience: in OOP terms, it is just “an interface” (a type class, actually).
The reason it is there is to facilitate code reuse: every type with a Monoid instance provides an implementation of the mappend function, also known as the <> operator.
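In that “interface” spirit, a single generic function works for every instance (a minimal sketch; `combineAll` and the demo names are hypothetical):

```haskell
import Data.Monoid (Sum(..))

-- One generic function written against the Monoid "interface"...
combineAll :: Monoid m => [m] -> m
combineAll = foldr (<>) mempty

-- ...reused for free by any instance: strings concatenate,
-- numbers wrapped in Sum add up, and so on.
demoString :: String
demoString = combineAll ["ab", "cd"]

demoSum :: Int
demoSum = getSum (combineAll [Sum 1, Sum 2])
```

That is the code reuse in practice: write the fold once, get it for every monoid.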
I am not aware of a single person who denies the existence of state in Haskell. As far as I am concerned, Monad is a way to model that state in the language so that it fits the type checker and the “purism” of the language.
“Algol committee maybe was not so stupid and to do complex things we need to be able to specify both state and evaluation order […]”.
The need is conjectured not to exist by the Church–Turing thesis: as I understand it, all Turing-complete models are equivalent in power to the sequential Turing machine model, yet the λ-calculus reaches that power without requiring explicit state and evaluation order.
Still, you made me think about the reason Haskell has the Monoid type class, which was a useful exercise, and I hope we can keep the discussion civil.
When I stated that “[t]he syntax is not ALGOL-like (but mathematical)”, I meant it is based on mathematical notation and not on natural-language prose. It is not exclusive to Haskell: OCaml, SML and others have the same kind of syntax. And that is deliberate.
ALGOL is not based on natural language at all.
FORTRAN is definitely not based on natural language - Formula Translation. C looks like recursive functions.
But it doesn’t exclude the fact that ALGOL has a syntax similar to prose instead of mathematical notation.
No. The main constraint on ALGOL surface syntax was that it had to be typed on teletype machines or even punched cards.
It might seem different in programming, but it is not. Associative operations are widespread in algorithms, and to know what a monoid is incredibly helps to compose better solutions. And it is not a difficult concept, given that 11-year students learn it!
Here you combine a condescending approach with a claim “incredibly helps” that seems completely wrong to me.
Of course, even in US elementary schools, one learns about associative operations. How does it help in programming to call that “monoidal”?
I think programmers should learn algorithmic analysis, state machines, combinatorics, linear algebra, …
I don’t see a need for abstract algebra in programming. It’s cool stuff, but…
That is, programmers need mathematics, but they do not necessarily need elementary concepts of abstract algebra awkwardly glued onto a complicated programming language.
That’s such a great paper: very condensed, with a high density of information. But it certainly does not help the argument that ALGOL was based on natural language: it is clearly designed around arithmetic expressions. Wikipedia has a good note on the typewriter constraints (pre-ASCII!):
The ALGOLs were conceived at a time when character sets were diverse and evolving rapidly; also, the ALGOLs were defined so that only uppercase letters were required.
1960: IFIP – The ALGOL 60 language and report included several mathematical symbols which are available on modern computers and operating systems but, unfortunately, were not supported on most computing systems at the time. For instance: ×, ÷, ≤, ≥, ≠, ¬, ∨, ∧, ⊂, ≡, ␣ and ⏨.
1961 September: ASCII – The ASCII character set, then in an early stage of development, had the \ (backslash) character added to it in order to support ALGOL’s boolean operators /\ and \/.
1962: ALCOR – This character set included the unusual “᛭” runic-cross character for multiplication and the “⏨” decimal exponent symbol for floating-point notation.
1964: GOST – The 1964 Soviet standard GOST 10859 allowed the encoding of 4-bit, 5-bit, 6-bit and 7-bit characters in ALGOL.
1968: The “Algol 68 Report” – used existing ALGOL characters, and further adopted →, ↓, ↑, □, ⌊, ⌈, ⎩, ⎧, ○, ⊥ and ¢ characters, which can be found on the IBM 2741 keyboard with “golf-ball” print heads inserted (such as the APL golfball). These became available in the mid-1960s while ALGOL 68 was being drafted. The report was translated into Russian, German, French and Bulgarian, allowing programming in languages with larger character sets, e.g. the Cyrillic alphabet of the Soviet BESM-4. All of ALGOL’s characters are also part of the Unicode standard, and most of them are available in several popular fonts.
The great David Parnas once told me of a colleague whose highest mathematical accomplishment had been some skill using the “golfball” symbol.
The need is conjectured to not exist by the Turing-Church thesis; as I understand it, the Turing-complete models are equivalent to the sequential Turing model, but λ-calculus doesn’t require so.
It’s like the need to use positional notation instead of Roman numerals for arithmetic. In principle, Roman numerals suffice.
Which is why folks like Hoare embraced the nature of actual programs by creating mathematical ways of reasoning about imperative code. The tools with the most productive, practical use in industry follow that approach. Most build on Why3, which addresses both functional and imperative requirements, the idea being that you use whatever fits your problem best.
I may be simplifying it too much, please let me know if I am making a straw man out of this, but my interpretation of the argument is:
P1. IO is the interface towards users
P2. In order to be considered useful for user facing programs a language has to make IO easy
P3. Haskell makes IO difficult
Conclusion: Haskell is not useful for writing user facing programs
I would not discuss P1, we can take it for granted
P2 seems fairly arbitrary to me; without deeper reasoning, it is a weak premise. I can imagine parallel ones that are quite obviously unsound: “Your feet are the most important part of your body since they are the ones interfacing with the ground”, “Sales is the most important department of any company as they are the ones bringing in the money”, etc.
P3 Is subjective, whether it is difficult or easy depends on how well you know the tool (haskell), and what are you trying to use it for (which problem). However, let’s just focus on a supporting premise for that one, picked from the tweet that is linked
In other words, “open file a, then open file b” is one of the harder programs to write in Haskell.
I am interpreting “harder” as “most difficult”, which again depends on who is doing it. It wasn’t really hard at all, for me at least:
import System.IO
main :: IO ()
main = openFile "/tmp/foo" ReadMode >> openFile "/tmp/bar" ReadMode >> return ()
A complete beginner who hits the right tutorials would likely write something like this even before knowing that there is such a thing as a monad:
import System.IO
main = do
  f1 <- openFile "/tmp/foo" ReadMode
  f2 <- openFile "/tmp/bar" ReadMode
  return ()
And someone who really hates monads and wishes they didn’t exist could write:
import System.IO
main =
  let ops = openFile <$> ["/tmp/foo", "/tmp/bar"] <*> [ReadMode]
  in  sequence_ ops
The last one is just to make the point that the relationship between monads and IO is weaker than people superficially claim, once you put some time into it.
Now, some people would have a more difficult time writing any of these, but now that you know, it is not that difficult, is it?
So, if you don’t want to use a particular language, that is fine. If you want to use a particular language, that is fine too. If you want to claim that a particular language is useless, you are starting from a very difficult to sustain position. People have written usable and useful software in virtually any language, including Brainfuck and Malbolge, both of them explicitly designed not to be of practical use.
Aside
I did a PhD in the type of math that necessitates a lot of category theory, and I have looked at your use of category theory, and judged it to be unnecessary and pretentious and mainly focused on making you look smart while being entirely trivial. But this is not that kind of blog post.
Well, if it is not that kind of blog post, then don’t write this. Some people have worked a lot on these “pretentious and unnecessary” uses of category theory. It is quite unfair to claim that they just focused on looking smart while being entirely trivial, unless you have pretty strong evidence (not of the kind “I worked on this before, so I know”) to support it.
The assertion that any program that needs to do non-trivial IO (an app that communicates with users via a web frontend, with a database and with some third party services) is more difficult in Haskell than in other languages, seems like a useful point to discuss; to qualify and quantify as far as possible. I haven’t used Haskell in anger for anything ‘real’ like that. I can imagine it’s true; I can imagine it’s false.
There is nothing to discuss, really. Should we talk about when his whiskey will cool down? Or how cool his PhD was compared to Haskell’s puny Hask-only monads? Or how you can get some job done after talking about monoids. There really is no substance.
That wasn’t because they disagreed with him: that was because they overwhelmed him with angry comments like a pack of wolves overwhelms a prey. Either you don’t hang out on Twitter much or you must recognize how every subject has its group of fans that behave like that. It’s not even much of an indictment against the real community: the pack and the community do not necessarily have much overlap.
Here’s the most recent update of a video game I’m working on in Idris, a sort of Haskell-on-steroids: https://youtu.be/wWswmxpLrgA
The various type-level abstractions helped with making it. I guarantee to you that IO was not an issue here. I’m sure that there are people who have genuine arguments against pure functional programming (or at least legitimate grievances based on lots of experience), but I have a sense that a lot of these vague rants come from people who haven’t actually bitten the bullet and done the work.
It requires getting used to, but I’ve completely bought into the idea that you actually have to think about things like state, IO, or exceptions explicitly. By the time I got to the point where I had to load and parse game data (levels, objects, scripts, etc.), it was natural for me to use the Either a monad (renamed to Checked, with a fail constructor), in which this data could be validated and converted into in-game representations. Using it looks something like this:
-- read a joint description for Box2D
objectCast dict = with Checked do
  type <- getString "type" dict
  case type of
    "revolute" => with Checked do
      bodyA <- getString "bodyA" dict
      bodyB <- getString "bodyB" dict
      localAnchorA <- getVector "localAnchorA" dict
      localAnchorB <- getVector "localAnchorB" dict
      collideConnected <- getBoolMaybe "collideConnected" dict
      pure $ MkJointDescription bodyA localAnchorA bodyB localAnchorB collideConnected
    _ => fail "joint type must be of \"revolute\""
objectCast here will produce a Checked JointDescription, which is either an error message (see the fail call in the code above) or a JointDescription. All of these functions can fail; for example, if getString cannot find the key it was given, or if that key isn’t a string, it will produce the appropriate error. This type of code is as easy to write as it looks. I mean, I don’t see anything in it that could be called boilerplate, and it approaches Python-level readability for someone unfamiliar with the code.
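For comparison, the same pattern in Haskell is just Either with a String error (a hedged sketch: `Dict`, `getString` and `jointType` are hypothetical stand-ins for the Idris code, using an association list in place of the dict):

```haskell
-- Checked a is either an error message or a value.
type Checked = Either String
type Dict = [(String, String)]

-- Look up a key, failing with a descriptive error if it is absent.
getString :: String -> Dict -> Checked String
getString k d =
  maybe (Left ("missing key: " ++ k)) Right (lookup k d)

-- Validation reads naturally in do-notation; the first failure
-- short-circuits the rest, just like in the Idris version.
jointType :: Dict -> Checked String
jointType dict = do
  t <- getString "type" dict
  if t == "revolute"
    then Right t
    else Left "joint type must be \"revolute\""
```

The monadic plumbing is what makes the happy path read linearly while errors propagate for free.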
But there is so much more to this. People act like monads are some cumbersome burden that stands in the way of doing useful things. That’s plain wrong. They’re just a general class of types which happen to be useful in describing a myriad of computations (namely those that produce results and compose, for example functions are monads, but so are stateful computations). No need for fancy category theory if you bother to understand the types and do the work.
I like that you decided to make a game. The first thing people say about these high-level, verification-oriented languages vs C or C++ is you have to use the latter to make low-level and/or performant code. Then, I saw someone program an 8-bitter in ATS, high-speed crypto in SPARK Ada, embedded RTOS in Rust, and now a game in Idris. The counter-examples are coming in slowly but surely as adoption increases.
I should add however that the physics is Box2D, and that’s meant to be a sort of cornerstone of the game. Calling into Box2D is the thorniest part of the code because I was too lazy to learn how to use the Idris RTS in C properly (although I might be able to do it). My own code is still around 6 kloc atm, and probably my largest personal project so far.
The first thing people say about these high-level, verification-oriented languages vs C or C++ is you have to use the latter to make low-level and/or performant code
I mean it is just a 2D game at an early stage. Still, there’s quite a few moving parts in there, and there haven’t been any performance issues so far (or crashes! okay, there has been one problematic situation that I managed to solve, but it involved me probably not using the RTS correctly).
Re engine. Another interesting test of Idris might be a port of some 2D engine. Something lean.
Re 6000. That’s good size. Props for staying at it.
Re 8-bitter in ATS.
8-bit microcontrollers are about the cheapest CPUs money can buy. They’re often used in embedded, special-purpose systems that run relatively simple code over and over. They might have mere bytes of memory. Examples and a guide. 8/16-bit parts are still a $1+ billion-a-year market, mostly programmed in assembly or C.
ATS is a competitor to Idris. It aims for safe, systems language. This demo did simple task on 8-bitter with tiny RAM. I don’t know how easy or hard game would be. I don’t think resource use would be a problem, though. ;)
Re engine. Another interesting test of Idris might be a port of some 2D engine. Something lean.
You mean the physics engine or a game engine? Because I am making a game engine here, sort of, on top of SDL (which is a relatively thin wrapper that exposes creating windows, taking input, and rendering textures in a portable way). The game itself is ‘scripted’ via data files describing objects, levels, and behaviors (state machines).
8-bit microcontrollers are about the cheapest CPU’s money can buy […]
This abstraction, monoids, then comes with the added benefit of being abstract enough that all of your programmers can spend their time explaining it to each other instead of writing programs that use the abstraction to do IO, and therefore deal with any actual users.
Myself, I always found the IO monad a bit too all-encompassing to be useful. First, no one is keeping you from using unsafePerformIO—it will not be reflected in the type. Second, people still make libraries for logging and debug prints distinct from the “normal” IO, not always reflected in the types.
Third and most important: there’s no distinction between different kinds of impurity. Writing to the console, deleting a file, and starting a rocket to the Moon are very different things, and when they are all “equally” impure, it’s just as good as when nothing is guaranteed to be pure.
You could think of IO as an unsafe tool to build a runtime for your no-IO architecture. It’s not that Haskell gives you pure functions and IO, but nothing in between. It gives you two ends of a spectrum, so you’re free to build anything in between. Your business logic layer doesn’t even have to involve any monads at all.
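As a hedged sketch of that “in between”, one can hide IO behind a newtype that exports only the effects a layer is allowed to perform (`LogM`, `runLogM` and `logLine` are names I made up):

```haskell
-- A monad that can log and do nothing else; keep the constructor
-- unexported so callers cannot smuggle in arbitrary IO.
newtype LogM a = LogM { runLogM :: IO a }

instance Functor LogM where
  fmap f (LogM m) = LogM (fmap f m)

instance Applicative LogM where
  pure = LogM . pure
  LogM f <*> LogM x = LogM (f <*> x)

instance Monad LogM where
  LogM m >>= k = LogM (m >>= runLogM . k)

-- The only effectful primitive this vocabulary exposes.
logLine :: String -> LogM ()
logLine = LogM . putStrLn
```

Code written against LogM can log but cannot delete files or start rockets, which is exactly the graded impurity the parent comment asks for.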
First, no one is keeping you from using unsafePerformIO
Everyone is! If you open a PR that contains the string unsafePerformIO, you’ll have to prove to your team how that expression is indeed pure! And that’s the point about unsafePerformIO. It’s not there so you can use it when you’re lazy. It’s there so you can claim to the compiler that the behaviour of an expression is pure, so it can throw it around like it does to other pure expressions. (that’s also why it’s a very bad idea to use it on a genuinely impure expression.)
Second, people still make libraries for logging and debug prints distinct from the “normal” IO, not always reflected in the types.
Can you give a single example of that? I have never seen it. In fact, people usually do with logging exactly what you suggested. They usually use a Logger effect to describe expressions that want to produce logs. And then you are forced to either interpret that effect into IO, or a pure function that returns effectively a tuple (result, [logs])
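The pure interpretation amounts to hand-rolling the Writer monad (a minimal sketch; `Logged`, `logMsg` and `step` are hypothetical names):

```haskell
-- A computation that produces a result together with a list of logs.
newtype Logged a = Logged { runLogged :: (a, [String]) }

instance Functor Logged where
  fmap f (Logged (a, w)) = Logged (f a, w)

instance Applicative Logged where
  pure a = Logged (a, [])
  Logged (f, w1) <*> Logged (a, w2) = Logged (f a, w1 ++ w2)

instance Monad Logged where
  Logged (a, w) >>= k =
    let Logged (b, w') = k a in Logged (b, w ++ w')

logMsg :: String -> Logged ()
logMsg s = Logged ((), [s])

step :: Int -> Logged Int
step x = do
  logMsg ("doubling " ++ show x)
  pure (x * 2)
```

Running `runLogged (step 3 >>= step)` yields the result paired with the accumulated logs, with no IO anywhere.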
Your business logic layer doesn’t even have to involve any monads at all.
I have nothing against monads, actually. I likely ended up sounding like I do, but that’s not what I meant.
Everyone is! If you open a PR that contains the string unsafePerformIO, you’ll have to prove to your team how that expression is indeed pure!
And in C, if you use an unsafe cast, you have to prove that your cast is indeed safe. ;)
My point here is that in Coq for example, you don’t need to prove it. You just can’t write a function that is not pure (or total, but that’s another story). The code that can possibly be impure is completely separated from code that can’t.
Can you give a single example of that?
Last I checked, Debug.Trace was alive.
They usually use a Logger effect to describe expressions that want to produce logs.
Modern logging libraries are indeed a good step in the right direction.
And in C, if you use an unsafe cast, you have to prove that your cast is indeed safe. ;)
The difference is, in C, you always have to use casts. That’s pretty much how you implement polymorphism. Also, in C, the proof that a cast is safe is non local. Somebody casts a pointer to void * in some/file.c and you cast it back to double * in other/file.c. In our codebase (~45K LOC), there’s a single unsafePerformIO inside a 4-line function. It’s used to spawn some threads to perform a number of pure computations in parallel and return their results in a vector. You can locally and easily prove that this is indeed a pure operation for all intents and purposes. That’s the whole point of unsafePerformIO, or even IO. So that you can implement your primitives without resorting to FFI or some other even darker magic.
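That kind of wrapper looks roughly like this (a hedged sketch of the idea, not the actual code from that codebase; `parPair` is a made-up name):

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import System.IO.Unsafe (unsafePerformIO)

-- Run two pure computations on separate threads and pair the results.
-- Observably pure: same inputs, same outputs, no visible effects,
-- which is the local argument that justifies unsafePerformIO here.
parPair :: (a -> b) -> a -> a -> (b, b)
parPair f x y = unsafePerformIO $ do
  m1 <- newEmptyMVar
  m2 <- newEmptyMVar
  _ <- forkIO (putMVar m1 $! f x)
  _ <- forkIO (putMVar m2 $! f y)
  (,) <$> takeMVar m1 <*> takeMVar m2
```

The whole proof obligation fits in one screen of code, which is what makes the local review tractable.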
Last I checked, Debug.Trace was alive.
And as the name implies, that’s just for debugging. You really won’t get any mileage out of using it in production anyway due to how poorly it interacts with laziness.
I guess my real mistake here was starting an argument without clearly stating the points:
The existence of unsafePerformIO makes Haskell only as “pure” as languages with unrestricted IO: you still have to prove that everything is actually safe.
The original “everything that does I/O is IO t” is not much better.
We are all, or should be, heading towards algebraic effects, and/or proof-assistant export plus handwritten support code, whichever works best.
It in fact makes you so unhappy that you’ll drag the entire lost-at-sea community of category theorists into the orbit of your language just so you can have an abstraction for doing IO that fits into your model of the world.
Loved the tone of this piece! This bit right here makes it worth the price of admission IMO :)
I don’t know Haskell and also don’t know category theory, though I have an eventual desire to learn the latter, and possibly even the former since I’ve been repeatedly told doing so can engender some incredibly powerful ideas in the learner.
I love this rant because it absolutely captures my beginner experience with Haskell and PureScript.
Write 20 pure functions that do exactly the thing you want it to do - 2 hours
Now try to parametrize it so you can use argv[1] as an input and print the stuff back to the terminal - 10h+
(Yes, I’m simplifying here, but add a little bit of networking and even the numbers work out.) Also I’ve used functional languages before and I enjoy them. I do not enjoy this IO stuff at all.
This abstraction, monoids, then comes with the added benefit of being abstract enough that all of your programmers can spend their time explaining it to each other instead of writing programs that use the abstraction to do IO, and therefore deal with any actual users.
I would say, more importantly, that IO has little to do with Monad – the IO type is an interesting pure functional abstraction, which happens to be able to instantiate Monad but can be happily used without any knowledge of that or any language support for it, etc.
The bottom line is that laziness and side effects are, from a practical point of view, incompatible. If you want to use a lazy language, it pretty much has to be a purely functional language; if you want to use side effects, you had better use a strict language.

For a long time this situation was rather embarrassing for the lazy community: even the input/output story for purely-functional languages was weak and unconvincing, let alone error recovery, concurrency, etc. Over the last few years, a surprising solution has emerged: the monad.
And that happened 20 years ago. People have been discovering other useful monad instances (parsers, blocking transactions, interruptible/serializable/resumable computation and many others) since then, to the point, I’d argue, that Monad has proven itself to be a worthy abstraction on its own, even spawning other very useful ones like Applicative, Arrow etc. Today, the fact that IO is exposed to Haskell programs through an interface that also has a Monad instance can be considered “just an instance”.
It’s like saying “Continuations have nothing to do with exceptions”, i.e. you can implement exceptions with continuations but that’s only one of the many many things they’re useful for.
Or, trying to be more constructive, an actual solution to the limitation that a non-strict, pure functional language cannot, indeed, perform side effects. However, it can define computations that, when run by a given runtime, do perform side effects. And monads are a way of doing that.
Whether that is pretentious or not is up to the reader to decide. There are other solutions, but this one is Haskell’s.
The fact that the Haskell language devs managed to find an abstraction that enables sequential computation in a language that does everything to avoid sequencing computation is conceptually beautiful.
What is your claim? That creating a non-strict, purely functional language that can be used to write arbitrary programs is trivial? Or that performing IO is trivial?
Haskell is not non-strict or purely functional. It has an unsafe mode that is justified by an appeal to category theory, but simply introduces state variables and in-order execution.
This directly contradicts the paper you linked above, and the Haskell report (“Haskell is a general purpose, purely functional programming language […]. Haskell provides higher-order functions, non-strict semantics, […]”)
I am not sure if you are making a general statement or talking about the very specific and constrained exceptions to the general definition of Haskell as a language, like e.g. IO.Unsafe for purity, or seq for strictness. In any case, it would be useful if you clarify what you mean with an actual example.
Secondly, you haven’t really answered my question, so I can’t really think of a productive way of discussing your original assertion about Haskell using an obscure solution for a trivial problem.
My claim was what I wrote and not about how hard things are to do. If you want to believe Haskell is purely functional despite its unsafe modes, go ahead. The quote from Peyton-Jones is about how a non-functional/non-lazy mode was added to Haskell. It doesn’t matter if the exceptions are “very specific and constrained” - except for that tiny hole, the boat is watertight; other than one wire, the circuit is insulated; except for the iceberg part, the Titanic sailed safely over the Atlantic. You can dress up global variables and text ordering of execution in Category Theory all you want, but it is what it is.
Haskell is an interesting language and it strongly encourages the use of functions that are referentially transparent (I really dislike the term “pure”) - which is an interesting approach. However, that’s as far as it goes.
You haven’t shown an example of where Haskell is non-strict or non-transparent yet, so I am going to assume that you and I are talking about different things. As described in the paper you linked, a non-strict language cannot be used to write code with effects directly. Haskell solves that problem by separating evaluation from execution, which is not really needed in strict languages, and this is how other functional languages approach the problem of having effects.
A function returning IO a is, still, referentially transparent, and does not require strict evaluation. Particularly, a Haskell program is just a function that evaluates to a value of type IO (), and thus still non-strict and referentially transparent. The effects happen when the action described by the value returned from main is run by the runtime.
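A tiny illustration of this separation (my own sketch, not from the thread): an IO value can be bound to a name and passed around like any other value; nothing happens at evaluation time, and the effect occurs only when the runtime executes the action.

```haskell
main :: IO ()
main = do
  -- Binding an IO action to a name evaluates it as a value; nothing is printed here.
  let greet = putStrLn "hello"
  greet  -- executing the action prints once
  greet  -- same value, executed again: prints a second time
```

Replacing `greet` with its definition everywhere changes nothing, which is exactly the referential-transparency claim.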
If your stance is that a Haskell program needs to be executed somehow and that makes it impure, sure, you can claim that the runtime is still part of the language and taints it as impure. However, the interesting part, beyond this kind of semantic discussion, is how to design a language where evaluation is referentially transparent and also use that language to produce executable programs.
As I said before, I do not know a “trivial” solution to that particular problem, so I am still confused about exactly what you claim is trivial, or whether there is a trivial solution for executing non-strict, pure code with effects that I am not aware of.
Summary. The linked text is just a rant; it doesn’t address real practical problems in the Haskell language and complains about the wrong subjects, in my opinion.

The linked text infuriates me, because it is uninformative and unnecessarily spreads fear, uncertainty and doubt with no backing data or references.

It can be read as a rant, at most, and as such it should be tagged, in my opinion. But I want to address what I find wrong in this text.

IO problem. Besides the fact that main = putStrLn "Hello, World!" is a simpler “Hello, World!” program than its C counterpart, what I learned is that Haskell forces the programmer to think about, design and implement the data transformation first, and only then process its input or output. In Java-esque language, one takes care of the business logic first, then moves to the user-interaction part (which will be encapsulated in the IO monad).

When I wrote a compiler, interpreter and simulator (for a metabolic language) in Haskell, I learned that lesson smoothly because most of my design models were already mathematical, so I was (theoretically) mainly focused on the “business logic”. Later, when I had to add the HTML5 user interface, it was actually easy, and I never felt this IO problem with my rustic MVC design.

Monoid abstraction. I am not kidding: 5th-grade children in my homeland learn what a monoid is, so I think any other programmer can as well. Of course they don’t learn the name “monoid” (or “group”, in the mathematical sense), but they do learn all the rest of the nomenclature: set, binary operation, identity element, associativity. And they take an exam on the subject, in which they have to recognize which pairs of sets and binary operations form a monoid and which do not.

Do I come from a genius land? Certainly not!

The real problem is not how difficult the monoid abstraction is to understand, but the connotations it carries after all the social-media complaining about Haskell, monads and the sordid definition “a monad is just a monoid in the category of endofunctors”. If the monoid had been presented as the Clock Arithmetic design pattern in a Gang of Four book, I think most people would accept it. (Well, most people accept the monoidal properties of Promise and Future in Java without naming them monoid or even monad.)

My opinion is that a lot of programmers are “mentally lazy” and want an easy, served-on-a-plate solution to their problem instead of reflecting a little on concepts. Design patterns are great examples of obvious instances of applied object-oriented concepts: most people can just “go to the shelf and take the canned solution” if they don’t want to design their own architecture from the basic set of object-oriented concepts.

[…] (Applicative and lists are nice to fool around with); the easiness of polymorphism (compare it with C++ templates) and type classes; lazy evaluation; several styles for concurrency and parallelism; and so on… The list of features that puts Haskell ahead of its time (of course it is not the only language in this group) is incredible! But when we are stuck with strict-subset languages of the 1970s (yes, most of our dear programming languages are a strict subset of ALGOL 68), we tend to think any breakthrough language is not made for this world.

Uff! I got that out of my system.
However, I want to briefly point out some of my practical criticisms towards Haskell, which I (as a not-very-experienced Haskell programmer) think the article should have addressed.
It is obvious that Haskell isn’t perfect and lacks tools for “practical, industrial application”. Unfortunately, those were not addressed in the single paragraph (out of eight) in that text.
P.S.: it seems that <br/> doesn’t work in item lists in this Markdown.

P.P.S.: I didn’t mean to say the Rust book(s) are better than “Haskell Programming from First Principles” (@bitemyapp, I am a fan of your book and I cannot recommend it enough). What I meant is that the (series of) Rust book(s) are part of the Rust documentation available (for free) on Rust’s website, and it is easy to find it, read it and understand Rust from it. In Haskell, the closest equivalents are the Haskell Wiki and the Haskell Wikibook, which aren’t as well maintained as the Rust book.
As far as memory, performance, and predictability go, you might find Habit interesting. I think it stalled after its developers got poached for bigger projects. It’s still there to give ideas and inspiration to whoever wants to try it next.
Thank you for your link: I will take a detailed look at it.
From the language’s report, I think you are spot-on:
I guess Habit has major focus on these features, but @Leonidas has nicely answered me in another thread in which he suggests that OCaml also addresses these features better than Haskell. What do you think?
Haskell tries to be purely functional as much as possible. OCaml is more flexible, with several paradigms supported. It might be easier to do imperative stuff for that reason. Haskell’s consistency might have advantages, too. It probably depends on your needs.
I know Jane Street is all over OCaml, including improving it. One weakness I know of in OCaml is concurrency; Haskell has better options. OCaml usage seems to lean more toward mainstream programming, maybe making it easier to learn, whereas Haskell gets into mathematical concepts that might give you more new ways of thinking on top of FP in general.
That’s a simple demonstration of what is so annoying about the Haskell community. Do you honestly believe that Algol was created without mathematics? Do you think e.g. Alan Perlis and John McCarthy, two members of the ALGOL committee, lacked the deep mathematical knowledge required to understand monoids? Knuth doesn’t know or use mathematics so the poor fellow had to hack things up in MIX assembler? What about Alan Turing? There is nothing more imperative and side effect plagued than a state machine with an infinite storage unit attached. Poor Dennis Ritchie, with his Ph.D. on the Grzegorczyk hierarchy probably didn’t understand the definition of a function so he had to hack up something like C? Is that your theory? People who are not on the Haskell bus are just too lazy and innumerate to appreciate the depths of algebra needed to understand what a monoid is?
Here, I have a question for you. Function composition is associative and the identity map is trivial, so why do you need to add a monoidal structuring to Haskell if it’s about “pure” functions? The answer, to me, is that Haskell is just embarrassed by its statefulness. There is an inherent notion of state in the structure of the program text even without all the monad nonsense, because it’s not at all convenient to write a program as a single expression using function composition as the only connective, e.g. a single lambda-calculus expression. Of course, contrary to ideology, this is also true of most mathematical texts, even the most non-algorithmic. When you have the definition “let G be a non-trivial cyclic group” and then later “let G be the monster group”, those indicate a change of state. x is not always the same value in a mathematics textbook! Try explaining Euler’s method without some notion of step. So “pure” Haskell is, of course, stateful but shamefaced about it. And then you realize that, oh shit, the dim Algol committee maybe was not so stupid and to do complex things we need to be able to specify both state and evaluation order, so you want to add more complicated imperative structures, but still retain the illusion of being in a pure function space. Voilà: use the endofunctor to smuggle in state variables and a fancy composition rule to compose associatively while properly connecting all the state variables. Ta da!
I usually abstain myself from this kind of comments, but I think it is important to clarify some things.
I think your comment is unnecessarily off-tone and pretentious.
From a single quote from my comment, you have inferred a lot about the Haskell community and myself.
When I stated that “[t]he syntax is not ALGOL-like (but mathematical)”, I meant it is based on mathematical notation and not on natural-language prose. It is not exclusive to Haskell: OCaml, SML and others have the same kind of syntax. And that is deliberate.
By the way, the members I cited above introduced more mathematical formalism to the ALGOL-68 they designed or implemented: W-grammars, recursion and range checks (via Hoare-logic expressions), respectively.
But it doesn’t exclude the fact that ALGOL has a syntax similar to prose instead of mathematical notation.
In some fields somehow related to computing, e.g. electronic engineering, to know advanced calculus is a pre-requisite to make the simplest products (for instance, Laplace transform to solve integro-differential equations).
It is not that they really need to know it, but since it is the basic concept to reach their solution, they learn and internalise it.
It might seem different in programming, but it is not. Associative operations are widespread in algorithms, and knowing what a monoid is helps incredibly in composing better solutions. And it is not a difficult concept, given that 11-year-old students learn it!
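As a sketch of that practical payoff (my own illustration, not from the thread): once you notice an operation is associative and has an identity, you can fold a collection in any grouping, which is exactly what `foldMap` and `mconcat` exploit, and what makes chunked or parallel reduction valid.

```haskell
import Data.Monoid (Sum (..))

-- Summation is a monoid: (+) is associative with identity 0.
-- Because of associativity, the chunks below could be reduced in any
-- grouping (or in parallel) without changing the answer.
total :: [Int] -> Int
total = getSum . foldMap Sum

main :: IO ()
main = print (total [1 .. 10])  -- prints 55
```

The same `foldMap` line works unchanged for any monoid: products, string concatenation, set unions, and so on.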
But I see a lot of programmers complaining about learning a few mathematical concepts (actually, it is just a matter of learning the names), while happily learning the shiny new framework every six months.
To me, personally, it is just shallow.
I am not sure whether I am the right person to answer you this, but the Monoid type class is there for convenience: it is just, in OOP terms, “an interface”. (It is a type class, actually.)

The reason it is there is to facilitate code reuse: all types that derive Monoid have the mappend function implementation, also known as the <> “operator”.

I am not aware of a single person who denies the existence of state in Haskell. As far as I am concerned, Monad is a way to model those states in the language so that they fit the type-checker and the “purism” of the language.

“Algol committee maybe was not so stupid and to do complex things we need to be able to specify both state and evaluation order […]”. The need is conjectured not to exist by the Church–Turing thesis; as I understand it, the Turing-complete models are equivalent to the sequential Turing model, but the λ-calculus doesn’t require it.

Still, you made me think about the reason Haskell has the Monoid type class, which was a useful exercise, and I hope we can keep a civil discussion.

ALGOL is not based on natural language at all. FORTRAN is definitely not based on natural language: Formula Translation. C looks like recursive functions.
No. The main constraint on ALGOL surface syntax was that it had to be typed on teletype machines or even punched cards.
Here you combine a condescending approach with a claim (“incredibly helps”) that seems completely wrong to me. Of course, even in US elementary schools, one learns about associative operations. How does it help in programming to call that “monoidal”?
I think programmers should learn algorithmic analysis, state machines, combinatorics, linear algebra, … I don’t see a need for abstract algebra in programming: it’s cool stuff, but… That is, programmers need mathematics, but they do not necessarily need elementary concepts of abstract algebra awkwardly glued onto a complicated programming language.
ALGOL had distinct reference and hardware languages. They say for the reference language:
That’s such a great paper: very condensed, with a high level of information. But it certainly does not help the argument that ALGOL was based on natural languages; it’s clearly designed around arithmetic expressions. Wikipedia has a good note on the typewriter constraints (pre-ASCII!).
https://en.wikipedia.org/wiki/ALGOL
The great David Parnas once told me of a colleague whose highest mathematical accomplishment had been some skill using the “golfball” symbol.
It’s like the need to use positional notation instead of Roman numerals for arithmetic. In principle, Roman numerals suffice.
Which is why folks like Hoare embraced the nature of actual programs by creating mathematical ways of reasoning about imperative code. The tools with the most productive, practical use in industry follow that approach. Most use Why3 which addresses functional and imperative requirements. The idea being you use whatever fits best for your problem.
Yes, I bet Esperanto was simply ahead of its time as well. It’s the world’s fault that it wasn’t widely adopted!
I may be simplifying it too much, please let me know if I am making a straw man out of this, but my interpretation of the argument is:
P1. IO is the interface towards users
P2. In order to be considered useful for user facing programs a language has to make IO easy
P3. Haskell makes IO difficult
Conclusion: Haskell is not useful for writing user facing programs
I would not discuss P1, we can take it for granted
P2 seems fairly arbitrary to me; without deeper reasoning, it seems a weak premise. I can imagine parallel ones that are quite obviously unsound: “Your feet are the most important part of your body since they are the ones interfacing with the ground”, “Sales is the most important department of any company, as they are the ones bringing in the money”, etc.
P3 is subjective: whether it is difficult or easy depends on how well you know the tool (Haskell) and what you are trying to use it for (which problem). However, let’s just focus on a supporting premise for that one, picked from the tweet that is linked.
I am interpreting “harder” as “most difficult”, which again depends on who is doing it. It wasn’t really hard at all for me, at least:
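Something like the following is roughly what I mean (a sketch of my own, not the commenter's original snippet): keep the transformation an ordinary pure function and do the argv[1]-to-terminal plumbing in main.

```haskell
import Data.Char (toUpper)
import System.Environment (getArgs)

-- The pure "business logic": an ordinary function, no IO anywhere.
shout :: String -> String
shout = map toUpper

main :: IO ()
main = do
  (arg : _) <- getArgs   -- argv[1] (getArgs excludes the program name)
  putStrLn (shout arg)   -- print the transformed argument to the terminal
```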
A complete beginner that hits the right tutorials would likely do something like this even before knowing that there is something called monad
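For instance (my reconstruction, a hypothetical beginner program): the do-notation that every introductory tutorial teaches reads like straight-line imperative code, and nothing in it requires knowing the word "monad".

```haskell
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine                      -- read a line from the terminal
  putStrLn ("Hello, " ++ name ++ "!")  -- print a greeting back
```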
And someone that really hates monads and would like them not to exist could write
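For example (again my own sketch, not the original snippet), using `interact`, which feeds stdin to a pure `String -> String` function, so no bind, no do-notation, and no visible monad machinery at all:

```haskell
import Data.Char (toUpper)

-- All the plumbing in one line: interact wires stdin and stdout
-- to a pure function. No monadic vocabulary needed.
main :: IO ()
main = interact (map toUpper)
```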
The last one is just to make the point that the relationship between monads and IO is weaker than people superficially claim, once you put some time into it.
Now, some people would have a more difficult time writing any of these, but now that you know, it is not that difficult, is it?
So, if you don’t want to use a particular language, that is fine. If you want to use a particular language, that is fine too. If you want to claim that a particular language is useless, you are starting from a very difficult to sustain position. People have written usable and useful software in virtually any language, including Brainfuck and Malbolge, both of them explicitly designed not to be of practical use.
Aside
Well, if it is not that kind of blog post, then don’t write this. Some people have worked a lot on these “pretentious and unnecessary” uses of category theory. It is quite unfair to claim that they just focused on looking smart while being entirely trivial, unless you have pretty strong evidence (not of the kind “I worked on this before, so I know”) to support it.
I think the author is just venting off and the article doesn’t have any useful points to discuss.
Edit: It was pointed out to me that my original comment was rude.
The assertion that any program that needs to do non-trivial IO (an app that communicates with users via a web frontend, with a database and with some third party services) is more difficult in Haskell than in other languages, seems like a useful point to discuss; to qualify and quantify as far as possible. I haven’t used Haskell in anger for anything ‘real’ like that. I can imagine it’s true; I can imagine it’s false.
This comment does not add much to the discussion and is rude as well.
There is nothing to discuss, really. Should we talk about when his whiskey will cool down? Or how cool his PhD was compared to Haskell’s puny Hask-only monads? Or how you can get some job done after talking about monoids? There really is no substance.
Even so, there’s a lot of vitriol going on here.
You’re right about that, I’m sorry, I’ll update my initial comment.
I mean, you’re not wrong.
At least he expressed his feelings in a creative and possibly humorous way without personal attacks. I can appreciate that.
I’d say calling people that disagree with you “the Haskell pack” is either a personal attack or something equally as bad.
That wasn’t because they disagreed with him: it was because they overwhelmed him with angry comments the way a pack of wolves overwhelms its prey. Either you don’t hang out on Twitter much, or you must recognize how every subject has its group of fans who behave like that. It’s not even much of an indictment of the real community: the pack and the community do not necessarily overlap much.
Here’s the most recent update of a video game I’m working on in Idris, a sort of Haskell-on-steroids: https://youtu.be/wWswmxpLrgA
The various type-level abstractions helped with making it. I guarantee to you that IO was not an issue here. I’m sure that there are people who have genuine arguments against pure functional programming (or at least legitimate grievances based on lots of experience), but I have a sense that a lot of these vague rants come from people who haven’t actually bitten the bullet and done the work.
It requires getting used to, but I’ve completely bought into the idea that you actually have to think about things like state, IO, or exceptions explicitly. By the time I got to the point where I had to load and parse game data (levels, objects, scripts, etc.), it was a natural thing for me to make use of the Either a monad (rename it into Checked and give it a fail constructor), in which this data could be validated and converted into in-game representations. In that code, objectCast produces a Checked JointDescription, which is either an error message (produced by the fail constructor) or a JointDescription. All of these functions can fail; for example, if getString cannot find the key it was given, or if that key isn’t a string, it will produce the appropriate error. This type of code is as easy to write as it looks. I don’t see anything in it that could be called boilerplate, and it’s approaching Python-level readability for someone unfamiliar with the code.

But there is so much more to this. People act like monads are some cumbersome burden that stands in the way of doing useful things. That’s plain wrong. They’re just a general class of types which happen to be useful in describing a myriad of computations (namely those that produce results and compose; for example, functions are monads, but so are stateful computations). No need for fancy category theory if you bother to understand the types and do the work.
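A hypothetical sketch of the Checked pattern described above (my names and Haskell-style notation rather than the original Idris; JointDescription's fields are invented for illustration):

```haskell
-- Checked is Either specialised to an error message, with a Fail constructor.
data Checked a = Fail String | Ok a
  deriving Show

instance Functor Checked where
  fmap _ (Fail e) = Fail e
  fmap f (Ok a)   = Ok (f a)

instance Applicative Checked where
  pure = Ok
  Fail e <*> _ = Fail e
  Ok f   <*> c = fmap f c

instance Monad Checked where
  Fail e >>= _ = Fail e
  Ok a   >>= f = f a

-- Hypothetical game-data type.
data JointDescription = JointDescription
  { jointName  :: String
  , jointLimit :: Double
  } deriving Show

-- Each lookup can fail with a useful message.
getString :: String -> [(String, String)] -> Checked String
getString key obj = case lookup key obj of
  Just s  -> Ok s
  Nothing -> Fail ("missing key: " ++ key)

-- Validation reads as straight-line code; the first failure short-circuits.
objectCast :: [(String, String)] -> Checked JointDescription
objectCast obj = do
  name <- getString "name" obj
  lim  <- getString "limit" obj
  pure (JointDescription name (read lim))  -- read is unvalidated here; a real
                                           -- version would check it too
```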
I like that you decided to make a game. The first thing people say about these high-level, verification-oriented languages vs C or C++ is you have to use the latter to make low-level and/or performant code. Then, I saw someone program an 8-bitter in ATS, high-speed crypto in SPARK Ada, embedded RTOS in Rust, and now a game in Idris. The counter-examples are coming in slowly but surely as adoption increases.
:D
I should add however that the physics is Box2D, and that’s meant to be a sort of cornerstone of the game. Calling into Box2D is the thorniest part of the code because I was too lazy to learn how to use the Idris RTS in C properly (although I might be able to do it). My own code is still around 6 kloc atm, and probably my largest personal project so far.
I mean it is just a 2D game at an early stage. Still, there’s quite a few moving parts in there, and there haven’t been any performance issues so far (or crashes! okay, there has been one problematic situation that I managed to solve, but it involved me probably not using the RTS correctly).
What does this mean?
Re engine. Another interesting test of Idris might be a port of some 2D engine. Something lean.
Re 6000. That’s good size. Props for staying at it.
Re 8-bitter in ATS.
8-bit microcontrollers are about the cheapest CPUs money can buy. They’re often used in embedded, special-purpose systems that run relatively simple code over and over. They might have mere bytes of memory. Examples and a guide. The 8-16-bit segment is still a $1+ billion-a-year market, mostly programmed in assembly or C.
ATS is a competitor to Idris. It aims for safe, systems language. This demo did simple task on 8-bitter with tiny RAM. I don’t know how easy or hard game would be. I don’t think resource use would be a problem, though. ;)
You mean the physics engine or a game engine? Because I am making a game engine here, sort of, on top of SDL (which is a relatively thin wrapper that exposes creating windows, taking input, and rendering textures in a portable way). The game itself is ‘scripted’ via data files describing objects, levels, and behaviors (state machines).
Ah okay. I played around with AVRs.
Oh interesting I’ve never heard of that.
“You mean the physics engine or a game engine?”
Whatever the hard stuff is that stresses languages out trying to get fast and correct.
That’s real.
Myself, I always found the IO monad a bit too all-encompassing to be useful. First, no one is keeping you from using unsafePerformIO; it will not be reflected in the type. Second, people still make libraries for logging and debug prints distinct from the “normal” IO, not always reflected in the types. Third and most important: there’s no distinction between different kinds of impurity. Writing to the console, deleting a file, and starting a rocket to the Moon are very different things, and when they are all “equally” impure, it’s just as good as when nothing is guaranteed to be pure.

My bet is on algebraic effects.
You could think of IO as an unsafe tool to build a runtime for your no-IO architecture. It’s not that Haskell gives you pure functions and IO, but nothing in between. It gives you two ends of a spectrum, so you’re free to build anything in between. Your business logic layer doesn’t even have to involve any monads at all.
Everyone is! If you open a PR that contains the string unsafePerformIO, you’ll have to prove to your team how that expression is indeed pure! And that’s the point about unsafePerformIO. It’s not there so you can use it when you’re lazy. It’s there so you can claim to the compiler that the behaviour of an expression is pure, so it can throw it around like it does to other pure expressions. (that’s also why it’s a very bad idea to use it on a genuinely impure expression.)
Can you give a single example of that? I have never seen it. In fact, people usually do with logging exactly what you suggested. They usually use a Logger effect to describe expressions that want to produce logs. And then you are forced to either interpret that effect into IO, or a pure function that returns effectively a tuple (result, [logs])
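A sketch of that pure interpretation (my own illustration, hand-rolled rather than a real effect library): a logging computation can be interpreted as a plain value of type (result, [logs]), with IO touched only at the very edge.

```haskell
-- A tiny hand-rolled Logger: a computation is just a pure (result, logs) pair.
newtype Logger a = Logger { runLogger :: (a, [String]) }

logMsg :: String -> Logger ()
logMsg m = Logger ((), [m])

instance Functor Logger where
  fmap f (Logger (a, w)) = Logger (f a, w)

instance Applicative Logger where
  pure a = Logger (a, [])
  Logger (f, w1) <*> Logger (a, w2) = Logger (f a, w1 ++ w2)

instance Monad Logger where
  Logger (a, w) >>= f = let Logger (b, w') = f a in Logger (b, w ++ w')

-- A step that logs what it does, while staying completely pure.
step :: Int -> Logger Int
step x = do
  logMsg ("doubling " ++ show x)
  pure (2 * x)

main :: IO ()
main = do
  let (result, logs) = runLogger (step 3 >>= step)  -- pure interpretation
  mapM_ putStrLn logs  -- only here do the logs touch IO
  print result         -- prints 12
```

An interpreter into IO (printing each message as it happens) would be the other choice of handler; the computation itself would not change.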
I have nothing against monads, actually. I likely ended up sounding like I do, but that’s not what I meant.
And in C, if you use an unsafe cast, you have to prove that your cast is indeed safe. ;)
My point here is that in Coq for example, you don’t need to prove it. You just can’t write a function that is not pure (or total, but that’s another story). The code that can possibly be impure is completely separated from code that can’t.
Modern logging libraries are indeed a good step in the right direction.
Don’t you mean monads here?
It might be a typo, but it’s not wrong per se. Monads are monoids in a specific kind of monoidal category.
[Comment removed by author]
Monads have nothing to do with IO.
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/mark.pdf
Not that I agree about laziness requiring a functional language.
So “Monads have nothing to do with IO” is not correct.
The pretentious argument seems…projected?
What they are doing is conceptually trivial and is only made more obscure by the category theory.
What is your claim? That creating a non-strict, purely functional language that can be used to write arbitrary programs is trivial? Or that performing IO is trivial?
Or something else?
[Comment removed by author]
This directly contradicts the paper you linked above, and the Haskell report (“Haskell is a general purpose, purely functional programming language […]. Haskell provides higher-order functions, non-strict semantics, […]”)
I am not sure if you are making a general statement or talking about the very specific and constrained exceptions to the general definition of Haskell as a language, like e.g. `IO.Unsafe` for purity, or `seq` for strictness. In any case, it would be useful if you clarified what you mean with an actual example.

Secondly, you haven’t really answered my question, so I can’t really think of a productive way of discussing your original assertion about Haskell using an obscure solution for a trivial problem.
My claim was what I wrote and not about how hard things are to do. If you want to believe Haskell is purely functional despite its unsafe modes, go ahead. The quote from Peyton-Jones is about how a non-functional/non-lazy mode was added to Haskell. It doesn’t matter if the exceptions are “very specific and constrained” - except for that tiny hole, the boat is watertight; other than one wire, the circuit is insulated; except for the iceberg part, the Titanic sailed safely over the Atlantic. You can dress up global variables and text ordering of execution in Category Theory all you want, but it is what it is.
Haskell is an interesting language and it strongly encourages the use of functions that are referentially transparent (I really dislike the term “pure”) - which is an interesting approach. However, that’s as far as it goes.
You haven’t shown an example of when or where Haskell is non-strict or non-transparent yet, so I am going to assume that you and I are talking about different things. As described in the paper you linked, a non-strict language cannot be used to write code with effects directly. Haskell solves that problem by separating evaluation from execution, which is not really needed in strict languages, and is how other functional languages approach the problem of having effects.
A function returning `IO a` is, still, referentially transparent, and does not require strict evaluation. In particular, a Haskell program is just a function that evaluates to a value of type `IO ()`, and is thus still non-strict and referentially transparent. The effects happen when the action described by the value returned from `main` is run by the runtime.

If your stance is that a Haskell program needs to be executed somehow and that makes it impure, sure, you can claim that the runtime is still part of the language and taints it as impure. However, the interesting part, beyond this type of semantic discussion, is how to design a language where evaluation is referentially transparent and also use that language to produce executable programs.
As I said before, I do not know a “trivial” solution to that particular problem, so I am still confused about what is exactly what you claim is trivial, or whether there is a trivial solution for executing non-strict, pure code and have effects that I am not aware of.
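The evaluation-versus-execution separation can be demonstrated directly, assuming only the Prelude: an `IO` value can be evaluated, bound to a name, and passed to pure functions without anything happening; the effect occurs only when the runtime runs the action that `main` evaluates to.

```haskell
-- Sketch: an IO action is a first-class value; evaluating it is not
-- executing it.
main :: IO ()
main = do
  let effect = putStrLn "effect!"  -- evaluated and named: no output yet
  let ignored = const () effect    -- passed to a pure function: still no output
  ignored `seq` pure ()            -- even forcing that result performs nothing
  effect                           -- run by the runtime: prints exactly once
```

Despite `effect` being mentioned three times, the line is printed once, when the action is actually executed.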
Really? Assembly language is similarly referentially transparent until it runs.