Nice. This paper by Philip Wadler does something similar by introducing monads through examples of their applications: http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf A bit handwavy at times, but useful because it touches on the monad laws and there’s an extended example with parsers.
This part was never hard to get for me: I understand how monads are defined, how they are used, and that they let me write pretty sequences of Maybe (or in this case attempt) operations.
But what I don’t get is:
1. what all the singular monads practically have in common, in the case of Haskell for example: IO, State, Maybe, etc.,
2. why monads are necessary (if they even are) for pure functional programming languages,
3. in what sense this is related to Monads from philosophy (the simplest substance without parts).
I haven’t yet had the experience of seeing a problem and saying “Why, I could express this as a Monad”. And if people go on writing Monad articles, which I am sure they will, I would very much appreciate it if someone could touch on these issues.
Apparently monads were considered a kind of breakthrough for dealing with some ugly things (IO, exceptions, etc) in a purely functional programming language. I would highly recommend this paper for understanding why they were introduced into Haskell https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/mark.pdf I found the historical perspective especially interesting.
That’s a great paper! He makes it a lot more intuitive than most explanations.
what all the singular monads practically have in common, in the case of Haskell for example: IO, State, Maybe, etc.
They have a common signature that follows the same laws. This means you can write code that works generically for any monad, and a lot of standard helper functions are already written for you, e.g. traverse or cataM. For a concrete example, in a previous codebase I had a bunch of per-client reports that had different requirements - one needed to accumulate some additional statistics over the report, and another needed a callback to their web API. So I wrote the general report logic generically in terms of any monad, and then for the client that needed the extra statistics I used Writer and for the client that needed the callbacks I used Future so that I could use a nonblocking HTTP library.
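To make that concrete, here’s a minimal Haskell sketch of the idea (the names report, lineLengths, etc. are made up for illustration, not the actual codebase): the shared logic demands only some monad, and each caller picks a real one.

    import Control.Monad.Writer (Writer, execWriter, tell)

    -- Shared report logic: generic in the monad m, so it can't know or
    -- care whether the per-line action logs, calls a web API, or does nothing.
    report :: Monad m => (String -> m ()) -> [String] -> m ()
    report perLine = mapM_ perLine

    -- Client A accumulates statistics with Writer...
    lineLengths :: [String] -> [Int]
    lineLengths = execWriter . report (\l -> tell [length l])

    -- ...client B fires a callback per line (IO standing in for Future here).
    callbackReport :: [String] -> IO ()
    callbackReport = report putStrLn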
why monads are necessary (if they even are) for pure functional programming languages.
They’re never absolutely necessary, but it turns out that a lot of the time where you’d be tempted to use impure code to solve a problem, a monad lets you use the same code style but have your functions remain pure. E.g. rather than a global variable you can use Reader or State. Rather than a global logging facility you can use Writer. Rather than exceptions you can use Either. Rather than a thread-local transaction handle you can use a transaction monad. etc.
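For instance (a minimal sketch, assuming the mtl library): a counter that would be a mutable global elsewhere becomes State, and a throwing function becomes Either, yet both still read sequentially.

    import Control.Monad.State (State, get, put)

    -- Instead of mutating a global counter:
    freshId :: State Int Int
    freshId = do
      n <- get
      put (n + 1)
      return n

    -- Instead of throwing an exception:
    safeDiv :: Int -> Int -> Either String Int
    safeDiv _ 0 = Left "division by zero"
    safeDiv x y = Right (x `div` y)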
In what sense is this related to Monads from philosophy (simplest substance without parts)
Not at all, it’s an accident of terminology.
I haven’t yet ever had the experience of seeing a problem and saying “Why, I could express this as a Monad”.
Once you’re used to them you start seeing them everywhere.
Ostensibly they follow the same laws. But sometimes people let them break the laws at the edge cases. For example, State is not a monad.
State is a monad under the usual definition of equivalence used when reasoning about Haskell (in which bottom is considered equivalent to everything). Under a more natural definition of equivalence, seq is not a function and State is still a monad (defined only on the legitimate/categorical fragment of Haskell). To pretend that the weirdnesses of seq and lazy evaluation have anything to do with State specifically is grossly misleading.
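For the curious, the edge case usually cited looks like this (a sketch assuming the mtl/transformers State; whether it counts as a law violation is exactly the question of which equivalence you pick):

    import Control.Monad.State (State)

    -- Right identity demands (m >>= return) = m, but seq can tell them
    -- apart when m is bottom, because (>>=) always builds a lambda:
    law1, law2 :: ()
    law1 = ((undefined :: State Int ()) >>= return) `seq` ()  -- evaluates to ()
    law2 = (undefined :: State Int ())              `seq` ()  -- diverges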
I found this very readable. I learned last week from another lobsters post that I had been using monads in Rust without knowing (Option and and_then) and this post helped to reinforce what I had learned.
I think the author downplays exceptions a little too far, though. In a language like Java or Python, you could have different exception types for each step (say ScanError, ParseError, …) all subclassing a common base exception (say InterpError), then do (e.g. in Python):
    def interpret(program):
        try:
            tokens = scan(program)
            ast = parse(tokens)
            checked_ast = typecheck(ast)
            result = eval(checked_ast)  # our own evaluator, not the builtin eval
            return result
        except InterpError as e:
            print(e)  # exceptions stringify directly in Python
It may not be as concise as:
fun compile program =
(Success program) >>= scan >>= parse >>= typecheck >>= eval
but it isn’t painful to look at or understand either.
(I’ll probably be flamed for this!)
but it isn’t painful to look at or understand either.
The problem is you can’t safely refactor it. You can’t move your calls around even when they look like they have nothing to do with each other, because changing the order they’re called in changes the control flow. And within the try: block you no longer have any way to tell which functions might error and which functions can’t, which makes it much harder to reason about possible paths through the function.
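Compare a toy Haskell version (hypothetical, not the article’s code): the possibility of failure is part of each signature, so you can see at a glance which calls can error, and moving the ones that can’t is always safe.

    data InterpError = ScanError String deriving Show

    scan :: String -> Either InterpError [String]  -- may fail, and the type says so
    scan ""  = Left (ScanError "empty input")
    scan src = Right (words src)

    render :: [String] -> String                   -- cannot fail: no Either in sight
    render = unwords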
Haskell has no syntax in the core language to sequence one expression after another.
It has quite a few alternatives actually. Depending what you mean by “syntax in the core language”, there are some things with specific grammar rules in the Haskell98 and Haskell2010 standards; there are some “userspace” functions/operators (i.e. their syntax is a special case of functions/operators) which are nevertheless mandated by those standards; there are some things which the de facto implementation GHC supports (e.g. via commandline flags); etc. Here are a few:
a : b is the expression a followed by the sequence of expressions b (all of the same type)
a ++ b is the sequence a followed by the sequence b (again, of the same type)
[a, b] is a sequence of the expression a followed by the expression b (of the same type)
(a, b) is a sequence of the expression a followed by the expression b (can be different types)
f . g is the expression g followed by the expression f (input and output types must coincide)
g >>> f is the expression g followed by the expression f (same as above but their order flipped)
a -< b is the expression b followed by the expression a (must have compatible input/output types)
do { a; b } is the expression a followed by the expression b
f <$> x is the expression x followed by the expression f (must have compatible input/output types)
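A few of these in action (a small runnable sketch):

    nums :: [Int]
    nums = 1 : [2, 3] ++ [4]        -- (:) and (++): sequences of values

    pair :: (Int, String)
    pair = (1, "one")               -- (,): a pair of different types, in order

    addThenDouble :: Int -> Int
    addThenDouble = (* 2) . (+ 1)   -- (.): g followed by f, as composition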
These all define a specific order on their sub-expressions. They’re not all identical, but they follow roughly similar usage:
a : b tells a Prolog-style interpreter to perform the computation/branch a before trying those in b
a ++ b generalises the above to multiple computations (the above is equivalent to [a] ++ b)
[a, b] is a specialisation of the above, equivalent to [a] ++ [b]
(a, b) generalises [a, b] to allow different types. We can use this to implement a linear sequence (it’s essentially how GHC implements IO). Somewhat surprisingly, and completely separately from anything IO-related, it also represents parallel composition
f . g is a rather general form of composition
g >>> f is the same as above
a -< b is part of arrow notation and desugars to a mixture of sequential and parallel composition (using lambdas, >>>, (a, b), etc.)
do { a; b } is a generalisation of b . a, corresponding to join (const b <$> a), which is the most similar form to the ; operator of other languages you refer to: both because it has the same syntax (an infix ; operator) and a similar meaning (generalised composition). This can also be written as a >> b, and is related to a >>= b and a >=> b which are also built-in sequencing syntax, but didn’t seem worth their own entries.
f <$> x is generalised application of f to x. That generality also makes it a composition/pipeline operator
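As a quick sanity check of the do entry, the three lines below are the same program (runnable sketch):

    import Control.Monad (join)

    main :: IO ()
    main = do
      do { putStr "a"; putStrLn "b" }             -- built-in sequencing syntax
      putStr "a" >> putStrLn "b"                  -- the same, via (>>)
      join (const (putStrLn "b") <$> putStr "a")  -- the same, via join/fmap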
The reason I’ve listed all these isn’t so much to say “look, there are some!”; but more to point out how many different meanings the word “sequence” can have (a list of values, a composition of functions, a temporal ordering on side-effects, etc.); how many different implementations of sequencing we can build; and, most crucially, that they all seem to overlap and intermingle (e.g. the blurring of “container of values” with “context for computation”; how we can generalise a single thing like “composition” in multiple ways; how generalising seemingly-separate ideas ends up at the same result; etc.). This tells us that there’s something important lurking here. I don’t think investigating and harnessing this makes someone a wanker.
I’m an application programmer at Atlassian. A monad is a critical tool for code reuse in our applications. It’s not about PLT research or even evaluation order.
Monads only matter for representing sequential execution in extremely constrained languages, like Haskell. (Some people believe monads are useful for other things, but I’m not interested in that debate, I’m just talking about where monads are certainly important.)
This is not true. Monads are critical for code reuse. I’ve used the concept of a monad in many areas, but explicitly and critically in Scala.
It’s not true. IO existed in Haskell way before Wadler’s papers on monads.

Still not true. IO has a monad, IO is not about monad, nor the other way around.
I’m not being pedantic and your point is not clear. IO can be sequenced, this sequence can be abstracted, code reuse is what is gained from the abstraction. That is the total relationship between IO and monad.
Monad is about much more than IO. IO is about much more than monad.

Objects and classes have a different relationship.
This is not being pedantic, it is a very critical part of understanding monad and IO. I teach Haskell at work and have successfully corrected this mistake many times.
Suppose there existed a function that reversed a list. A few fruit grocers used this function to reverse a list of oranges. They also sometimes use it to reverse lists of apples. Other things happened with this function also, but we only know of these specific circumstances.
Suppose then someone came along and proclaimed, “the reverse function is all about fruit!” then they wrote an article about this new apparent fact. Would you be able to clearly see a categorical error occurring here? What would you say to the article author? Would you reverse a list of list of functions right in front of their face? Or reverse a list of wiggley-woos? What if that person then replied, “you’re just being pedantic”? Where would you take the discussion from here? Would you be the meany person who informs them that they have almost no grasp of the subject matter? It’s quite a bind to be in :)
That’s exactly the error being made here (among some others) and it is a very obvious error only to those who have a concrete understanding of what monad means. It’s not pedantic. It’s not “avoiding a debate.” It’s a significant categorical error, and it is very common among beginners. It limits any further understanding so significantly, that it is better to have no knowledge at all. This specific error is also commonly repeated among beginners, as they struggle and aspire to understand the subject, and to the point that it becomes very difficult to stamp out, even for many of those who know the subject well. The ultimate consequence is a net overall lack of progress in understanding, for absolutely everyone.
Who wants to contribute to that?
Haskell has no syntax in the core language to sequence one expression after another.
Yes it does: do-notation. You can even use semicolons if you don’t like newlines. It’s the syntax to sequence expressions which can be sequenced. You can’t use semicolons to sequence things that can’t be sequenced in other languages, either.
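For the avoidance of doubt, both spellings below are the same Haskell program:

    main :: IO ()
    main = do { putStrLn "first"; putStrLn "second" }

    -- identical, with layout instead of semicolons:
    -- main = do
    --   putStrLn "first"
    --   putStrLn "second"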
And why talk about Maybe but not MonadPlus, free monads, transformers…? All you know is Maybe and IO? Of course it’s boring to you. Instead of writing blog-sized posts about how blog tutorials don’t teach you everything, you could read up, but oh well, you do you.
By changing each stage to take and return a fat outer type holding the entire context, you can just as easily achieve the cool pipeline effect by defining >>= as function composition rather than bind.
With bind you don’t have to change each stage.
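Roughly, the contrast is this (hypothetical stage names): the fat-context design forces every stage to take and return the whole context, while with bind each stage keeps its own narrow type and can be added or removed independently.

    -- Fat outer type: every stage is Context -> Context and must thread
    -- the whole record, even the parts it never touches.

    -- With bind, stages stay narrow and Maybe threads the failure:
    scanLen :: String -> Maybe Int
    scanLen s = if null s then Nothing else Just (length s)

    double :: Int -> Maybe Int
    double = Just . (* 2)

    pipeline :: String -> Maybe Int
    pipeline s = Just s >>= scanLen >>= double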
Understanding how to write programs which allow change without triggering catastrophic rewrites is pretty useful.
Understanding why some programs are easy to modify is pretty useful.
Having language to discuss why some programs are easy to modify and others are not; also pretty useful.
The original post is about how thinking in terms of Monads can make a program which is hard to modify into a program which is easy to modify, it’s a useful post.
Some people believe monads are useful for other things, but I’m not interested in that debate, I’m just talking about where monads are certainly important.
First of all, by far the most popular monadic interface in modern software development is not Haskell’s IO type, it’s JavaScript’s Promise type, together with similar systems for writing asynchronous logic in other languages. If we’re talking about use cases where monads are “certainly important,” I think it’s worth mentioning the large number of programmers writing monadic code on a daily basis in languages which certainly do not lack native support for semicolons.
I love monads and find they’re actually among the most useful and important tools I’ve ever acquired as a programmer, but I agree that the PLT and functional programming communities could do a better job communicating exactly why monads are actually important. The use of monads as “extendable semicolons” does have some narrow but critically important use cases, such as asynchronous code, exception handling, and recursive backtracking, but I believe that the exotic forms of control flow you can express with monads are of only secondary importance.
In my experience, the most important consequence of modeling side effects with monads is that it allows you to reliably distinguish between pure and impure functions. The features which you claim make Haskell “extremely constrained” in fact give it an entirely new dimension of expressive power, because whereas most languages only have one form of function, which is implicitly allowed to perform side effects, Haskell has two forms of functions: functions which may perform side effects, and functions which may not. Given that a function’s interaction with an environment is an extremely important aspect of its semantics, this is information that you would be informally documenting and keeping track of anyway; Haskell just allows you to document it in a precise, machine-checked format with great integration with the compiler.
This immediately allows you to separate functions which perform IO from those which do not, but that’s not actually the coolest part. The coolest part is that once you start defining your own monad types, you can express much much more precise and interesting classes of side effects, like “a function that interacts only with a random number generator” or “a function that interacts only with my database state” or “a function which interacts only with a sequential-identifier generator.” This is the real power of monads: the ability to make fine-grained guarantees about the data dependencies and side effects of a function given only its type signature.
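A sketch of that last point (made-up names, assuming the mtl library): hide the constructor of a small monad, and a function typed in it can draw fresh identifiers and provably do nothing else.

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Monad.State (State, state, evalState)

    -- A monad whose only effect is drawing fresh identifiers.
    newtype Supply a = Supply (State Int a)
      deriving (Functor, Applicative, Monad)

    fresh :: Supply Int
    fresh = Supply (state (\n -> (n, n + 1)))

    runSupply :: Supply a -> a
    runSupply (Supply m) = evalState m 0

    -- The type alone guarantees this touches nothing but the counter:
    label :: String -> Supply String
    label s = do
      n <- fresh
      return (s ++ show n)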
In my experience, the most important consequence of modeling side effects with monads is that it allows you to reliably distinguish between pure and impure functions. The features which you claim make Haskell “extremely constrained” in fact give it an entirely new dimension of expressive power, because whereas most languages only have one form of function, which is implicitly allowed to perform side effects, Haskell has two forms of functions: functions which may perform side effects, and functions which may not.
One nit here: the idea of separating pure and effectful operations is actually pretty old. You see this in Pascal and Ada and the like, where “functions” are pure and “procedures” are effectful. This is baked into the core language semantics. The different terms fell out of favor when C/C++ got big, and now people don’t really distinguish them anymore. But there’s no reason we couldn’t start doing that again, aside from inertia and stuff.
To my understanding you also don’t need monads to separate effects in pure FP, either; Eff has first-class syntax for effect handlers and takes measures to distinguish the theory from monads.
To my understanding you also don’t need monads to separate effects in pure FP, either;
Well, you need something. Proposing to have an effect system without monads is like proposing to do security without passwords: there are some interesting possibilities there, but you have to explain how you’re going to solve the problems that monads solve.
Your Eff paper refers to papers on effect tensors to justify the claim that effects can be easier to combine than monads, but then doesn’t seem to actually model those tensors? Their example of what combining effects looks like in practice seems to end in just letting them be composed in the same order that the primary code is composed, when the whole point of a pure language is to be able to get away from that. So while the language is pure at the level of individual effects, it seems to be effectively impure in terms of how composition of effects behaves?
It’s not specific to Javascript. The type Task<T> is the same interface in C#, Future<V> is the same thing in Java. The concept is generally useful in all languages. It’s useful even in Haskell, where Async allows the introduction of explicit concurrency, even though the runtime automatically does the work that Task<T> is mostly for in C# (avoiding blocking on threads).
In addition async/await is a monadic syntax, which is generally useful (as evidenced by it now being in C#, F#, Scala, Javascript, Python, and soon C++).
(LINQ in C# is another generally-useful monadic syntax, which is used for just about everything except doing IO and sequencing.)