Right-infix . does not “feel normal”. It feels horribly confusing. The random mix of which way operators associate is possibly the worst part of trying to read Haskell.

It’s obvious from the types, and from the way fmap behaves on e.g. collections, that we should have

f.fmap(g)(x) = g(f(x))

It’s not confusing at all. And if you really want to invert the argument order like you would in Haskell, write a helper method:
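The original helper was elided here; a minimal sketch of what such a helper might look like (the class and method names are hypothetical, and it is specialised to List because Java lacks higher-kinded types):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class FmapHelper {
    // Hypothetical static helper: function-first, Haskell-style argument
    // order. Specialised to List since Java cannot abstract over the
    // container type itself.
    static <A, B> List<B> fmap(Function<? super A, ? extends B> g, List<A> xs) {
        return xs.stream().map(g).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // fmap(g, xs) now reads like Haskell's `fmap g xs`
        System.out.println(fmap(x -> x + 3, List.of(1, 2, 3))); // [4, 5, 6]
    }
}
```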

Now you can’t make this abstraction as nice as you’d like because you don’t have higher-kinded types. That’s a real criticism, and something worth proposing for future Java (though even Haskell hides those behind an extension, and for good reason). But that’s not the case this article is making. That the second argument (i.e. not the distinguished “self” argument) to a method can’t magically change the behaviour of the method is a feature in Java, not a bug, and an occasionally unnatural calling order is a price well worth paying for the overall readability gain.

The point about higher-kinded types wasn’t emphasized because, up until this particular case (at least in the library I’m working on), I hadn’t hit a wall where the argument order really highlighted the lack of congruence between an industry-standard OO language’s syntax and (what I consider to be) a de-facto functional one. In that sense, I suppose the article is more of an expedition through one of the practical scenarios in which irreconcilable differences in ergonomics really start to show.

Interestingly (or maybe not, I thought it was), this issue of ergonomics hadn’t actually been noticeably bad until Functor#fmap, because the natural ordering of most function arguments I’d dealt with so far was sensible or intuitively correct - e.g. filter taking the function first and then the list makes currying much more useful in the general case (although, again, I suppose even here YMMV).

However, fmap was the first blatantly incongruent case, and it feels unintuitive to me. You can certainly argue that Haskell’s compose being right-infix is only intuitive to those who have been conditioned by it, but I’d also refer you to Java 8’s own Function#compose - not as counter-evidence so much as another well-deserving target for your aim (also, another reason that if a core Functor interface existed in the JDK, Function certainly couldn’t implement it without some terribly confusing acrobatics).

Even with the idea of a static fmap that forwards to the functor instance (which is a thoughtful idea), now we’d have two fmaps that, for MonadicFunction as the Functor, each take two MonadicFunctions and, depending on whether they’re static or instance methods, reverse the argument order. Hazarding that, for me, was unacceptable.

Incidentally, I probably will pull out an fmap method object that implements DyadicFunction (similar to BiFunction in Java, but also a Functor, MonadicFunction, and ProFunctor) to get simulated first-class function support along with the usual trimmings of currying, flipping, uncurrying, etc., but it will certainly preserve Functor-first argument ordering to maintain congruence with the implicit ordering in Functor.

Interestingly (or maybe not, I thought it was), this issue of ergonomics hadn’t actually been noticeably bad until Functor#fmap, because the natural ordering of most function arguments I’d dealt with so far was sensible or intuitively correct - e.g. filter taking the function first and then the list makes currying much more useful in the general case (although, again, I suppose even here YMMV).

I don’t understand the point you’re making here. In OO-land we’d write

list.filter(f)
functor.fmap(f)

whereas in functional-land this would be

filter f list
fmap f functor

Aren’t both cases the same? You’re right that the Haskell order is more appropriate for currying, but that’s really a Haskell-specific concern. (Indeed I’d argue that several Haskell functions adopt an “unnatural” argument order so as to best take advantage of Haskell’s language-level currying support)
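The currying benefit can be simulated even in Java by taking the predicate first and returning a function. This is a hypothetical helper (the class and method names are my own invention), sketched to show why the function-first order aids partial application:

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class CurriedFilter {
    // Hypothetical curried, function-first filter: taking the predicate
    // first yields a reusable List -> List transform, which is the
    // partial-application benefit the Haskell argument order is built around.
    static <A> Function<List<A>, List<A>> filter(Predicate<? super A> p) {
        return xs -> xs.stream().filter(p).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // "filter even" as a first-class value, applied to many lists later
        Function<List<Integer>, List<Integer>> evens = filter(x -> x % 2 == 0);
        System.out.println(evens.apply(List.of(1, 2, 3, 4))); // [2, 4]
    }
}
```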

You can certainly argue that Haskell’s compose being right-infix is only intuitive to those who have been conditioned by it, but I’d also refer you to Java 8’s own Function#compose - not as counter-evidence so much as another well-deserving target for your aim

I find Function#compose very confusing and would always use Function#andThen instead in my own code. (Indeed, when I read code that uses Function#compose, the only way I can remember which way around it goes is to first remember that andThen does the obvious thing that andThen sounds like it would, and that therefore compose must be the opposite way around)
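The two directions can be seen side by side with the standard JDK methods (these are the real java.util.function.Function signatures):

```java
import java.util.function.Function;

public class ComposeVsAndThen {
    public static void main(String[] args) {
        Function<Integer, Integer> plusThree = x -> x + 3;
        Function<Integer, Integer> timesTwo = x -> x * 2;

        // andThen reads in execution order: first plusThree, then timesTwo.
        int a = plusThree.andThen(timesTwo).apply(1); // timesTwo(plusThree(1)) = 8

        // compose reads in mathematical order: plusThree ∘ timesTwo.
        int b = plusThree.compose(timesTwo).apply(1); // plusThree(timesTwo(1)) = 5

        System.out.println(a + " " + b); // 8 5
    }
}
```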

Thanks for sticking with this; this exchange is helping me understand the failings of clarity and concision in my post - from what I gather, you are precisely the audience I was writing for.

There are two precedents I care about preserving that, in Java’s case, cannot both be preserved:

1. fmap is an alias for compose when the functor is a function.

2. compose accepts f as its first argument and g as its second - more importantly, it can be read left to right (this is where it breaks down as an fmap method invocation on Functor, where a covariant function A -> B as its sole argument forces reverse composition).

The point is that there is no way to actually mimic the left-to-right compositional style, which is frustrating if you like it (I understand your points that for you, this is not a problem, and possibly even poetic justice).

public interface Functor<A> {
    <B> Functor<B> fmap(MonadicFunction<? super A, ? extends B> fn);
}
// MonadicFunction<A,B> is an instance of Functor<B>, as you might expect
MonadicFunction<Integer, Integer> plusThree = x -> x + 3;
MonadicFunction<Integer, Integer> timesTwo = x -> x * 2;
Integer result = plusThree.fmap(timesTwo).apply(1); // 8; plusThree is the Functor, timesTwo is the lifted function

And now in Haskell:

plusThree = (+3)
timesTwo = (*2)
main :: IO ()
main = print . fmap plusThree timesTwo $ 1 -- 5; plusThree is not the functor, timesTwo is

Here is where the symmetry breaks down. The reason that Java 8’s Function#compose (as well as Scala, Clojure, Swift, Standard ML, etc.) works the typical way is that it’s not implemented as a call to an #fmap on a Functor; otherwise, it’d have the same problem.

Believe it or not, I use Scalaz’s typeclasses for functions (in production code no less). I just tend to think map = andThen. And the left-to-right order makes sense as it lets you read the functions in the order they happen, much as the pipe operator does:

I realise this is the other way around from the mathematical notation for function application

plusThree(timesTwo(1)) //5

but I’ve never read the haskell whitespace-based application as being equivalent to that.

I don’t think there’s any fundamental reason why fmap should be compose and not andThen (admittedly I failed Category Theory, but I don’t recall us using any function-like notation “fmap”); it just happens to be that in Haskell. I read your link as having trouble with the fact that the Scalaz ordering is different from the one in LYAHFGG, not because one or the other is particularly better or worse.

I don’t think there’s any fundamental reason why fmap should be compose and not andThen (admittedly I failed Category Theory, but I don’t recall us using any function-like notation “fmap”);

Here’s my thought process: fmap is about lifting a covariant function A -> B to a Functor<A>, producing a Functor<B>. Compose (mathematically, as well as in software practice) is about calling a function g and passing the return value to a function f - or, more precisely, lifting a covariant function A -> B (f) to a function Z -> A (Functor<A>, or g) and producing a function Z -> B (Functor<B>). Compose even being possible - or at the very least implied - is arguably provable merely by the fact that functions can act as functors; that is, that they can support the notion of fmap (or lift0, if you like, although this isn’t a perfect comparison). Compose is just traditionally written in a left-to-right compositional syntax such that it’s easy (again, “easy” being subjective and not the point) to comprehend the applicative pipeline.

Conversely, andThen is about lifting a contravariant function (Z -> A) to a function A -> B (Functor<B>) to produce a function Z -> B (Functor<B>). This is a different operation entirely, which is only possible for functors which support contravariant lifting (functions being the only ones that leap to mind - I’m sure some category theorist smarter than myself could enumerate these with ease).

For these reasons, I find it ironic (and frustrating) that in modern OO languages, due to a lack of either higher-kinded types (as you point out) combined with parametric polymorphism, or simply of more flexible method dispatch models (reversible dot notation, perhaps), compose can’t be implemented as compose f g using fmap (even if the existence of compose can be implied thanks to the existence of fmap) without the punishment of reversed f and g.
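The reversal can be made concrete with plain java.util.function.Function and a hypothetical static fmapOn standing in for the instance method g.fmap(f):

```java
import java.util.function.Function;

public class ComposeViaFmap {
    // Stand-in for the instance method g.fmap(f): lifts f over the
    // functor g, i.e. fmapOn(g, f) == x -> f.apply(g.apply(x)).
    static <Z, A, B> Function<Z, B> fmapOn(Function<Z, A> g, Function<A, B> f) {
        return x -> f.apply(g.apply(x));
    }

    // Conventional compose(f, g) == f ∘ g can only be recovered by
    // swapping receiver and argument: g, not f, must take the call.
    static <Z, A, B> Function<Z, B> compose(Function<A, B> f, Function<Z, A> g) {
        return fmapOn(g, f);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> plusThree = x -> x + 3;
        Function<Integer, Integer> timesTwo = x -> x * 2;
        // compose(plusThree, timesTwo)(1) = plusThree(timesTwo(1)) = 5
        System.out.println(compose(plusThree, timesTwo).apply(1)); // 5
    }
}
```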

I read your link as having trouble with the fact that the Scalaz ordering is different from the one in LYAHFGG, not because one or the other is particularly better or worse.

The trouble isn’t just that Scalaz’s ordering is different; it’s that it’s different by necessity, due to implementing compose in terms of fmap (the same as in Java; only Java, of course, can’t introduce a new operator to soften the blow).

For fear of continuing to beat an already pummeled dead horse, perhaps I should let it go at this point in the conversation. This exchange might simply be providing evidence that only neurotic people like myself will be perturbed by these implications, and maybe it’s No Big Deal™. I do, however, find it unfortunate and worth calling out when a language’s design imposes unnecessary cognitive overhead - manifested in this case through method dispatch rules that force computations to be expressed in reverse order relative to their standard mathematical precedents.

Hmm. You are right, as far as it goes. I make the same arguments when defending (Scala’s) OO syntax as opposed to LISP-style approaches: a programming language should allow you to express operations in the language of the domain you’re modelling, and some domains (e.g. arithmetic) really do need infix operator notation if we are to express their operations “naturally”. It seems that in category theory we’ve found a domain that needs multiple dispatch to express it naturally.

But I don’t think the Haskell approach can be the answer, just because I find it so unreadable - as a non-Haskell user I can’t even parse Haskell code examples (e.g. turn them into an AST), never mind understand what they’re doing. (But of course a LISP programmer could argue that it’s unreasonably hard to turn Scala code into an AST). Maybe there’s an ideal solution out there that makes it possible to express the ∘ operator without permitting the full flexibility/unreadability of multiple dispatch. Or maybe this will always be a language design tradeoff with no “right” answer.

Thanks @jnape for inspiring my interest in this very interesting problem. I must admit, however, I am having a difficult time understanding the implications of variance in a language like Haskell, as my (very young) understanding of the idea is qualified by the requirement of mutability. When you say “Covariant from A to B”, do you mean covariant in A and contravariant in B?

Second, unless I am missing something (I’m sure I am, but I don’t believe it’s this), you are both using the notion of fixity in a strange way. Argument order and fixity AFAIK aren’t analogous… also, fmap is associative by definition, so the fixity of the compose operator is algebraically irrelevant.

When you say “Covariant from A to B”, do you mean covariant in A and contravariant in B?

This was in the context of fmap, meaning fmap :: Functor f => (a -> b) -> f a -> f b, rather than fmap :: Functor f => (b -> a) -> f a -> f b, which would be a contravariant fmap, or contramap as it is sometimes referred to.
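For functions specifically, the contravariant lifting can be sketched in Java with a hypothetical contramap helper (the name mirrors the Haskell terminology; only the built-in Function#compose is used underneath):

```java
import java.util.function.Function;

public class Contramap {
    // contramap for functions: lifts a contravariant Z -> A over an
    // A -> B, yielding Z -> B (compare fmap, which lifts an A -> B).
    static <Z, A, B> Function<Z, B> contramap(Function<Z, A> pre, Function<A, B> f) {
        return f.compose(pre);
    }

    public static void main(String[] args) {
        Function<String, Integer> length = String::length;
        Function<Integer, Integer> timesTwo = x -> x * 2;
        // pre-process the input with length, then apply timesTwo
        System.out.println(contramap(length, timesTwo).apply("abc")); // 6
    }
}
```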

Argument order and fixity AFAIK aren’t analogous

Your instincts are correct. Right infix plays to Haskell’s lazy evaluation strengths (a great example here), so in terms of performance it’s advantageous, but in terms of correctness it’s irrelevant. In the Java case, however, it could likely mean that the parameter used for the current functor could be derived later at the call site from the argument, rather than up front from the method invocation, allowing fmap to be an operand of the argument rather than of the instance it’s invoked on. This is speculative, and I admittedly did a poor job explaining it. Sorry for that; don’t get hung up on it.
