https://www.youtube.com/watch?v=EJGrl4gJxx4
This is track 3 from the CD. It’s cheesy soft-synth video game muzak. It’s in high quality on YouTube. Awesome insight into the process of ripping, but why for this particular track?
Oh, that music! I’ve never been a fan of Ragnarok Online, and even then only played it on my own local bootleg server, alone, in 2005, but I somewhat liked this game because it felt very weird. Lots of nostalgic feelings listening to this music now.
[Comment removed by author]
Haskell has no syntax in the core language to sequence one expression after another.
It has quite a few alternatives actually. Depending what you mean by “syntax in the core language”, there are some things with specific grammar rules in the Haskell98 and Haskell2010 standards; there are some “userspace” functions/operators (i.e. their syntax is a special case of functions/operators) which are nevertheless mandated by those standards; there are some things which the de facto implementation GHC supports (e.g. via commandline flags); etc. Here are a few:
a : b is the expression a followed by the sequence of expressions b (all of the same type)
a ++ b is the sequence a followed by the sequence b (again, of the same type)
[a, b] is a sequence of the expression a followed by the expression b (of the same type)
(a, b) is a sequence of the expression a followed by the expression b (can be different types)
f . g is the expression g followed by the expression f (input and output types must coincide)
g >>> f is the expression g followed by the expression f (same as above but with their order flipped)
a -< b is the expression b followed by the expression a (must have compatible input/output types)
do { a; b } is the expression a followed by the expression b
f <$> x is the expression x followed by the function f (must have compatible input/output types)
These all define a specific order on their sub-expressions. They’re not all identical, but they follow roughly similar usage:
a : b tells a Prolog-style interpreter to perform the computation/branch a before trying those in b
a ++ b generalises the above to multiple computations (the above is equivalent to [a] ++ b)
[a, b] is a specialisation of the above, equivalent to [a] ++ [b]
(a, b) generalises [a, b] to allow different types. We can use this to implement a linear sequence (it’s essentially how GHC implements IO). Somewhat surprisingly, and completely separately to anything IO related, it also represents parallel composition
f . g is a rather general form of composition
g >>> f is the same as above
a -< b is part of arrow notation and desugars to a mixture of sequential and parallel composition (using lambdas, >>>, (a, b), etc.)
do { a; b } is a generalisation of b . a, corresponding to join (const b <$> a), which is the most similar form to the ; operator of other languages you refer to: both because it has the same syntax (an infix ; operator) and a similar meaning (generalised composition). This can also be written as a >> b, and is related to a >>= b and a >=> b, which are also built-in sequencing syntax but didn’t seem worth their own entries.
f <$> x is generalised application of f to x. That generality also makes it a composition/pipeline operator
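To make the last few entries concrete, here’s a minimal sketch (the names a and b are arbitrary, and Maybe stands in for the monad) checking that do { a; b }, a >> b, and join (const b <$> a) agree:

```haskell
import Control.Monad (join)

a :: Maybe Int
a = Just 1

b :: Maybe String
b = Just "done"

viaDo, viaThen, viaJoin :: Maybe String
viaDo   = do { a; b }            -- do-notation with an explicit ;
viaThen = a >> b                 -- what the do-block desugars to
viaJoin = join (const b <$> a)   -- the generalised-composition form

main :: IO ()
main = print (viaDo == viaThen && viaThen == viaJoin)
```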
The reason I’ve listed all these isn’t so much to say “look, there are some!”; but more to point out how many different meanings the word “sequence” can have (a list of values, a composition of functions, a temporal ordering on side-effects, etc.); how many different implementations of sequencing we can build; and, most crucially, that they all seem to overlap and intermingle (e.g. the blurring of “container of values” with “context for computation”; how we can generalise a single thing like “composition” in multiple ways; how generalising seemingly-separate ideas ends up at the same result; etc.). This tells us that there’s something important lurking here. I don’t think investigating and harnessing this makes someone a wanker.
[Comment removed by author]
I’m an application programmer at Atlassian. A monad is a critical tool for code reuse in our applications. It’s not about PLT research or even evaluation order.
Monads only matter for representing sequential execution in extremely constrained languages, like haskell. (Some people believe monads are useful for other things, but I’m not interested in that debate, I’m just talking about where monads are certainly important.)
This is not true. Monads are critical for code reuse. I’ve used the concept of a monad in many areas, but explicitly and critically in Scala.
[Comment removed by author]
[Comment removed by author]
[Comment removed by author]
-- GHC’s definition of bind for IO: the state token s (representing the
-- RealWorld) is threaded through m, and the resulting token new_s is
-- passed on to the continuation k, which is what sequences the effects.
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO (IO m) k = IO (\ s -> case m s of (# new_s, a #) -> unIO (k a) new_s)
[Comment removed by author]
I’m not being pedantic and your point is not clear. IO can be sequenced, this sequence can be abstracted, code reuse is what is gained from the abstraction. That is the total relationship between IO and monad.
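As a minimal sketch of that reuse (twice is a hypothetical helper, not from any library): a function written once against the Monad interface works unchanged for Maybe, lists, IO, and so on:

```haskell
-- twice runs a computation two times and collects both results;
-- it knows nothing about IO, Maybe, or lists in particular.
twice :: Monad m => m a -> m [a]
twice mx = mx >>= \x -> mx >>= \y -> return [x, y]

main :: IO ()
main = do
  print (twice (Just 1))   -- the Maybe monad
  print (twice [1, 2])     -- the list monad
  xs <- twice (return 3)   -- the IO monad, same code again
  print xs
```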
[Comment removed by author]
Monad is about much more than IO. IO is about much more than monad.
Objects and classes have a different relationship.
[Comment removed by author]
This is not being pedantic, it is a very critical part of understanding monad and IO. I teach Haskell at work and have successfully corrected this mistake many times.
Suppose there existed a function that reversed a list. A few fruit grocers used this function to reverse a list of oranges. They also sometimes use it to reverse lists of apples. Other things happened with this function also, but we only know of these specific circumstances.
Suppose then someone came along and proclaimed, “the reverse function is all about fruit!” then they wrote an article about this new apparent fact. Would you be able to clearly see a categorical error occurring here? What would you say to the article author? Would you reverse a list of list of functions right in front of their face? Or reverse a list of wiggley-woos? What if that person then replied, “you’re just being pedantic”? Where would you take the discussion from here? Would you be the meany person who informs them that they have almost no grasp of the subject matter? It’s quite a bind to be in :)
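The analogy is easy to check in code: reverse is parametrically polymorphic, so it cannot possibly be “about” any particular element type (the Orange type below is a made-up stand-in):

```haskell
-- A stand-in type for the grocer's oranges; reverse never inspects it.
data Orange = Orange deriving (Show, Eq)

main :: IO ()
main = do
  print (reverse [1, 2, 3])                    -- numbers
  print (reverse "fruit")                      -- characters
  print (reverse [Orange, Orange])             -- any type at all
  print (map ($ 2) (reverse [(+ 1), (* 10)]))  -- even a list of functions
```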
That’s exactly the error being made here (among some others), and it is a very obvious error only to those who have a concrete understanding of what monad means. It’s not pedantic. It’s not “avoiding a debate.” It’s a significant categorical error, and it is very common among beginners. It limits further understanding so significantly that it is better to have no knowledge at all. This specific error is also commonly repeated among beginners as they struggle to understand the subject, to the point that it becomes very difficult to stamp out, even for many of those who know the subject well. The ultimate consequence is a net overall lack of progress in understanding, for absolutely everyone.
Who wants to contribute to that?
Haskell has no syntax in the core language to sequence one expression after another.
Yes it does: do-notation. You can even use semicolons if you don’t like newlines. It’s the syntax to sequence expressions which can be sequenced. You can’t use semicolons to sequence things that can’t be sequenced in other languages, either.
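For instance, with explicit braces and semicolons in place of layout (a minimal sketch; both spellings mean the same thing):

```haskell
-- Explicit-layout do-notation: braces and semicolons instead of
-- newlines and indentation.
withSemis :: IO ()
withSemis = do { putStrLn "first"; putStrLn "second" }

withLayout :: IO ()
withLayout = do
  putStrLn "first"
  putStrLn "second"

-- The same semicolon syntax sequences any monad, not just IO:
pureSemis :: Maybe Int
pureSemis = do { x <- Just 1; y <- Just 2; return (x + y) }

main :: IO ()
main = do { withSemis; print pureSemis }
```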
And why talk about Maybe but not MonadPlus, free monads, transformers…? All you know is Maybe and IO? Of course it’s boring to you. Instead of writing blog-sized posts about how blog tutorials don’t teach you everything, you could read up, but oh well, you do you.
By changing each stage to take and return a fat outer type holding the entire context, you can just as easily achieve the cool pipeline effect by defining >>= as function composition rather than bind.
With bind you don’t have to change each stage.
Understanding how to write programs which allow change without triggering catastrophic rewrites is pretty useful.
Understanding why some programs are easy to modify is pretty useful.
Having language to discuss why some programs are easy to modify and others are not; also pretty useful.
The original post is about how thinking in terms of Monads can make a program which is hard to modify into a program which is easy to modify, it’s a useful post.
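A sketch of the difference (parseAge and checkAdult are made-up stages): with >>=, each stage takes and returns only its own piece, never a fat outer context, so adding or changing a stage doesn’t force rewriting the others:

```haskell
-- Hypothetical pipeline stages; each sees a plain value and returns
-- Maybe, and none of them handles the whole pipeline's context.
parseAge :: String -> Maybe Int
parseAge s = case reads s of
  [(n, "")] -> Just n
  _         -> Nothing

checkAdult :: Int -> Maybe Int
checkAdult n = if n >= 18 then Just n else Nothing

-- Inserting a new stage here is one more >>= link, not a rewrite.
pipeline :: String -> Maybe Int
pipeline s = parseAge s >>= checkAdult

main :: IO ()
main = mapM_ (print . pipeline) ["42", "12", "oops"]
```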
Some people believe monads are useful for other things, but I’m not interested in that debate, I’m just talking about where monads are certainly important.
First of all, by far the most popular monadic interface in modern software development is not Haskell’s IO type, it’s JavaScript’s Promise type, together with similar systems for writing asynchronous logic in other languages. If we’re talking about use cases where monads are “certainly important,” I think it’s worth mentioning the large number of programmers writing monadic code on a daily basis in languages which certainly do not lack native support for semicolons.
I love monads and find they’re actually among the most useful and important tools I’ve ever acquired as a programmer, but I agree that the PLT and functional programming communities could do a better job communicating exactly why monads are actually important. The use of monads as “extendable semicolons” does have some narrow but critically important use cases, such as asynchronous code, exception handling, and recursive backtracking, but I actually believe that the exotic forms of control flow you can express with monads is of only secondary importance.
In my experience, the most important consequence of modeling side effects with monads is that it allows you to reliably distinguish between pure and impure functions. The features which you claim make Haskell “extremely constrained” in fact give it an entirely new dimension of expressive power, because whereas most languages only have one form of function, which is implicitly allowed to perform side effects, Haskell has two forms of functions: functions which may perform side effects, and functions which may not. Given that a function’s interaction with an environment is an extremely important aspect of its semantics, this is information that you would be informally documenting and keeping track of anyway; Haskell just allows you document it in a precise, machine-checked format with great integration with the compiler.
This immediately allows you to separate functions which perform IO from those which do not, but that’s not actually the coolest part. The coolest part is that once you start defining your own monad types, you can express much much more precise and interesting classes of side effects, like “a function that interacts only with a random number generator” or “a function that interacts only with my database state” or “a function which interacts only with a sequential-identifier generator.” This is the real power of monads: the ability to make fine-grained guarantees about the data dependencies and side effects of a function given only its type signature.
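As a minimal sketch of that last point (FreshId, fresh, and runFreshId are illustrative names, not a real library): a hand-rolled monad whose type guarantees its only “effect” is drawing sequential identifiers:

```haskell
-- A monad that can only generate sequential identifiers, nothing else.
newtype FreshId a = FreshId (Int -> (a, Int))

instance Functor FreshId where
  fmap f (FreshId g) = FreshId (\n -> let (a, n') = g n in (f a, n'))

instance Applicative FreshId where
  pure a = FreshId (\n -> (a, n))
  FreshId f <*> FreshId g =
    FreshId (\n -> let (h, n') = f n; (a, n'') = g n' in (h a, n''))

instance Monad FreshId where
  FreshId g >>= k =
    FreshId (\n -> let (a, n') = g n; FreshId h = k a in h n')

fresh :: FreshId Int
fresh = FreshId (\n -> (n, n + 1))

runFreshId :: FreshId a -> a
runFreshId (FreshId g) = fst (g 0)

-- The signature alone tells you this function touches nothing but
-- the identifier supply: no IO, no database, no randomness.
labelPair :: FreshId (Int, Int)
labelPair = do { x <- fresh; y <- fresh; return (x, y) }

main :: IO ()
main = print (runFreshId labelPair)
```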
In my experience, the most important consequence of modeling side effects with monads is that it allows you to reliably distinguish between pure and impure functions. The features which you claim make Haskell “extremely constrained” in fact give it an entirely new dimension of expressive power, because whereas most languages only have one form of function, which is implicitly allowed to perform side effects, Haskell has two forms of functions: functions which may perform side effects, and functions which may not.
One nit here: the idea of separating pure and effectful operations is actually pretty old. You see this in Pascal and Ada and the like, where “functions” are pure and “procedures” are effectful. This is baked into the core language semantics. The different terms fell out of favor when C/C++ got big, and now people don’t really distinguish them anymore. But there’s no reason we couldn’t start doing that again, aside from inertia and stuff.
To my understanding you also don’t need monads to separate effects in pure FP, either; Eff has first-class syntax for effect handlers and takes measures to distinguish the theory from monads.
To my understanding you also don’t need monads to separate effects in pure FP, either;
Well, you need something. Proposing to have an effect system without monads is like proposing to do security without passwords: there are some interesting possibilities there, but you have to explain how you’re going to solve the problems that monads solve.
Your Eff paper refers to papers on effect tensors to justify the claim that effects can be easier to combine than monads, but then doesn’t seem to actually model those tensors? Their example of what combining effects looks like in practice seems to end in just letting them be composed in the same order that the primary code is composed, when the whole point of a pure language is to be able to get away from that. So while the language is pure at the level of individual effects, it seems to be effectively impure in terms of how composition of effects behaves?
[Comment removed by author]
It’s not specific to Javascript. The type Task<T> is the same interface in C#, Future<V> is the same thing in Java. The concept is generally useful in all languages. It’s useful even in Haskell, where Async allows the introduction of explicit concurrency, even though the runtime automatically does the work that Task<T> is mostly for in C# (avoiding blocking on threads).
In addition async/await is a monadic syntax, which is generally useful (as evidenced by it now being in C#, F#, Scala, Javascript, Python, and soon C++).
(LINQ in C# is another generally-useful monadic syntax, which is used for just about everything except doing IO and sequencing.)
Recall what happened: we decided to represent characters of text, including punctuation and whitespace, using fixed numbers (a practical, though rather dubious, decision; again, a product of early computing)
Around here is where he lost me, how else are we supposed to do it? As he admits, computers just manipulate bits, if you want to represent anything else you’ll need a mapping from those bits to your character set.
There’s no term for it, but the alternative is structured data: a “binary” AST with data at the leaves, and if that data is text, it has to come with an explicit reference to its character table.
Everything just becomes a single node containing a blob of text again.
Messaging actually shows promise, as does networking, but I think the former is more likely than the latter (despite much more energy being put into the latter by researchers)
Why would everything become a single node with a blob of text? This is evidently true when it would have to work with systems that were primarily made to deal with unstructured text files, but that’s the circular argument addressed in the article.
Messaging and networking are different solutions for problems not mentioned in the article. Not sure how it would help with the “build and destroy the world” issue.
We get blobs of text because of serialisation, something we need to do to stream out to spinning platters made of rust, or to beam waves over a wire.
We often prefer to choose the serialisation format ourselves, because a hand-picked format is always faster than a general-purpose one we might try for baking down that “AST”. Remember, even if we have no other opinion, we can choose from S-expressions and XML and JSON and ASN.1 and PkZip and so on, each with different disadvantages.
And once you serialise, you might as well freeze them someplace. Maybe a hierarchy. This thing is called a file system, and those frozen blobs are called files.
Messaging and networking are a way to build a platform that doesn’t have a filesystem of files. They aren’t mentioned in the article, but then: no solutions are really offered by the article.
Messaging and networking are a way to build a platform that doesn’t have a filesystem of files. They aren’t mentioned in the article, but then: no solutions are really offered by the article.
This is interesting, what do you mean by using messaging as a substitute for a filesystem, what would that look like?
iOS does something like this (awkwardly, through a blessed but ad hoc mechanism). You send a message to another app and ask it to send a message back to you.
One obvious use is storing things that we used to store in files, like photos and preferences and music, but we can also use it to authenticate (who are you), to authorise (do you allow this), to purchase, and perhaps for other things.
Urbit is exploring some of these themes on a much grander scale, but it is far from a “complete” operating environment at this point, so it can’t yet teach us what computing would be like in this way.
HDCP/HDMI has another (limited) use of this, where you can play a (protected) video at a given x/y/w/h region without revealing the bits.
The Mother of All Demos hinted at some of this with their collaborative single user super computer.
And so on.
You’re a hero
Yes. Also, a great example of democracy at work. A citizen saw a problem, collected data, got a proposal to government, and government actually fixed the problem. We need more like him.