The article starts with “your team hates your functional code” and goes into ways you can sneak FP past your team. It never mentions the objectively correct thing to do, which is “have a frank discussion with your team about whether there’s a role for FP in the project.”
I think the problem with the article’s approach is neatly summarized by how it describes the team lead’s concerns:
You can’t write code that you know to be inferior. The way that you used to write code was more error-prone, complex, and muddled. … But that seems to be what the senior wants… or at least, what they can cope with.
They don’t consider the possibility that the senior could be right and your code isn’t better just because it’s “FP”.
TBH, I think I read this post in a more charitable way: here’s how you can retain properties or stylistic elements that you enjoy while making the code more palatable to others. The line you quoted doesn’t feel great, but I thought the article attempted (<< operative word) to find a practical middle ground. Fundamentally, it seems to be about working together as a team with people who may be actively yucking your yum. It’s not surprising that there might be a sense of frustration or projection in that context.
I think there’s an additional frustration you can find in the workplace (and life in general) where people just don’t want to change. For those of us who embrace change, these folks are sources of extreme frustration because even given evidence, they’d rather stay in their box.
It’s exhausting to accept change in all things at all times. For some folks, their current concerns lie outside of their professional lives and they’re not willing to strain their bandwidth given whatever else they’re dealing with. I sympathize with that.
I would particularly be interested in why the senior dev on my team is hesitant. In a functioning organization, they are senior because they have experienced these kinds of things before, and can speak not just to New Hotness, but boring old stuff like fitness for purpose, using their hard-earned knowledge of what has and hasn’t worked before. One of the things that your senior hopefully has is wisdom and more context. They might be a Java Head, but if they really are senior, they’ve Seen Things, and it behooves you, as a less senior member of the team, to give that weight in the decision making process.
Alternatively, they’re Senior because they have the magical arbitrary-years industry experience which means they come with a certain wage and title regardless of their ability or actual responsibilities/growth.
There’s no clue about what the input to this function is. Point-free style is generally more readable in a language like Haskell, because even when you don’t name the arguments, you can still support the readability with type annotations like renderNotifications :: [Notification] -> SomethingThatIHonestlyCantDiscernFromThisCode. And in a language like Haskell, you can hover your mouse over addIcon in your editor and it will tell you the input and output types (specialized to that call site even when addIcon itself is polymorphic) and if your types are faithful to your domain, that will give you ample clue to understand the code better.
Even though it looks like there’s no mutation here, I’d argue that there’s no ease of reasoning gained! We’re passing a list of notifications through 5 stages, and each stage has access to the complete output from the previous stage and it has the power to modify the input to the next stage completely! Isn’t this how we model the semantics of imperative programs to begin with? Each statement in an imperative program is semantically like a function that receives the full state of the program and returns a new state. This means, this code is at least as hard to follow as the equivalent imperative program that mutates the fields of these objects being passed around. But it’s actually much worse, because it goes out of its way to avoid the imperative syntactic sugar and obfuscates it even more.
An important goal of functional programming is to build your program up from pieces that require as little context as possible to be meaningful. I’d argue that functions like addIcon are directly in opposition to it. Why don’t you just have an icon here that you pass separately to formatNotification. This way, I can interact with that icon in a REPL or maybe write tests for it without having to create a Notification first. Or maybe the specific icon here depends on the contents of the notification, in that case, why not have a iconForNotification(notification) function with type Notification -> Icon, this way I can pass iconForNotification(notification) separately to formatNotification. That said, in all likelihood, you don’t actually need the entirety of a Notification in order to decide on the Icon, so it’s much better to have a function that accepts that relevant subset of Notification only.
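A sketch of that decomposition in TypeScript (the `Notification` shape and the icon values are invented here; the article doesn’t define them):

```typescript
// Hypothetical types -- the article doesn't show these definitions.
type NotificationKind = "message" | "alert";
type Notification = { kind: NotificationKind; text: string };
type Icon = string;

// Depends only on the subset of Notification it actually needs,
// so it can be tested without constructing a full Notification.
function iconForKind(kind: NotificationKind): Icon {
  return kind === "alert" ? "[!]" : "[msg]";
}

// The icon is passed in separately instead of being looked up inside.
function formatNotification(n: Notification, icon: Icon): string {
  return `${icon} ${n.text}`;
}

const n: Notification = { kind: "alert", text: "disk almost full" };
console.log(formatNotification(n, iconForKind(n.kind))); // "[!] disk almost full"
```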
This flow thing is also conceptually messy. The easiest way to demonstrate its messiness is to try to assign a type to it. You’ll need very advanced type system features to do that. In Haskell, the equivalent to flow(a,b,c) would be c . b . a, where . has the very simple type (b -> c) -> (a -> b) -> (a -> c). A very good example of things getting conceptually simpler when you break them down.
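For comparison, the two-argument version is just as simple to write and to type even in TypeScript; it’s only the variadic `flow(a, b, c, …)` form that needs long overload lists (this `compose` is a sketch, not any particular library’s definition):

```typescript
// (b -> c) -> (a -> b) -> (a -> c), transcribed directly.
const compose =
  <A, B, C>(g: (b: B) => C, f: (a: A) => B) =>
  (a: A): C =>
    g(f(a));

const double = (n: number) => n * 2;
const describeNum = (n: number) => `value: ${n}`;

// compose(describeNum, double) reads right-to-left, like `describeNum . double`.
const describeDoubled = compose(describeNum, double);
console.log(describeDoubled(21)); // "value: 42"
```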
I think these superficial notions of functional programming are really hurting the adoption of the actual principles that functional programming aims for. It’s not that functional programmers enjoy solving puzzles in the form of writing point-free code or finding ways they can use map, fold or reduce.
The actual tenets are local reasoning and composing your program from small, independently meaningful pieces that can be composed in other ways in order to support new features, isolated tests and the ad-hoc programs you build in a REPL.
Very much agree with everything you said here - it’s easy to feel all smug and superior and all your “blub” colleagues are too dumb to understand what you’re doing, but that’s a) not a productive attitude and b) makes you the jackass.
It takes actual insight to distill what makes FP useful in the context of your language/project and make your code fit in in a way that’s a net positive. I’ve very often seen people so enamoured with higher order functions etc that they try to introduce such code into, say, a Python codebase where these things stick out like a sore thumb and are less performant and less debuggable than the “ruthlessly imperative” code that Python tends to encourage.
I also know this because I was the jackass a decade or so ago, trying to force a square peg into a round hole (which was PHP, at a time when it didn’t yet have first-class functions). Back then there was no project lead to guide me onto the right path. So in a company I’d say it’s important to work with code review and have a senior developer provide feedback on all code, to ensure the code is of a uniform quality.
Even though it looks like there’s no mutation here, I’d argue that there’s no ease of reasoning gained! We’re passing a list of notifications through 5 stages, and each stage has access to the complete output from the previous stage and it has the power to modify the input to the next stage completely! Isn’t this how we model the semantics of imperative programs to begin with?
No, it isn’t. These can be written as pure functions with no side effects. The benefit is that you don’t need any more context than the contents of the function to understand what it is doing. Imperative programs have to consider side-effects, and so you require substantially more mental context to understand it.
An imperative program can be modeled as taking the whole world as input and giving a new version of the whole world as output. Which is not what happens in the functional example.
Which is not what happens in the functional example.
I think the point is that it pretty much is what happens in this functional example. Sure, technically we may have referential transparency, but in this example it is no more helpful than the referential transparency you get when you model imperative programs as taking the whole world as input and giving a new version of the whole world as output.
An imperative program can be modeled as taking the whole world as input and giving a new version of the whole world as output. Which is not what happens in the functional example.
I think the point is that it pretty much is what happens in this functional example.
You are saying here that the functional example is mutating shared state and merely disguising this fact with function composition. If that were the case, it would not be functional.
The entire premise of functional programming is statelessness. The functions are not operating on a shared scope and mutating it in place; they are accepting one type of data, and transforming it into another form as output.
If the output of one function were routing into multiple other functions, the transformations of the child functions would not impact their siblings. These transformations could take place in parallel without worrying about race conditions.
Imperative code is stateful; mutations happen within a shared context. Non-deterministic output. Parallelism is dangerous.
Functional code is stateless. There are no mutations, only copy & transform. Deterministic output. Parallelism is easy and safe.
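A minimal illustration of that copy-and-transform claim: two sibling transformations of the same input can’t affect each other or the input.

```typescript
const base = Object.freeze([1, 2, 3]);

// Two "child" transformations fed from the same output.
const doubled = base.map((x) => x * 2);
const squared = base.map((x) => x * x);

// Neither sibling sees the other's work, and the shared input is untouched,
// so the two maps could safely run in parallel.
console.log(base);    // [1, 2, 3]
console.log(doubled); // [2, 4, 6]
console.log(squared); // [1, 4, 9]
```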
You are saying here that the functional example is mutating shared state and merely disguising this fact with function composition.
No, that’s not what I’m saying at all. No-one else in this thread is saying that either. Here’s the original comment from the beginning of this thread:
Even though it looks like there’s no mutation here, I’d argue that there’s no ease of reasoning gained! We’re passing a list of notifications through 5 stages, and each stage has access to the complete output from the previous stage and it has the power to modify the input to the next stage completely!
No-one is claiming that there’s shared state here.
The point is that we are not gaining the principle benefit of referential transparency: we are not gaining any ease of reasoning.
There’s no clue about what the input to this function is.
A function that’s named renderNotifications quite clearly takes notifications as input.
Even though it looks like there’s no mutation here, I’d argue that there’s no ease of reasoning gained!
The main advantage of immutability is referential transparency: in its scope, a given symbol always refers to the same value.
Each statement in an imperative program is semantically like a function that receives the full state of the program and returns a new state.
As opposed to a functional program, where a function receives only what you give to it, not the whole world, making it easier to reason about what it can do.
A function that’s named renderNotifications quite clearly takes notifications as input.
If that’s a common idiom in the language, maybe. But even then, it’s not obvious that each expression returns modified copies of the input, rather than mutating the input in place. These sorts of ambiguities go away when this kind of thing is written as a sequence of imperative expressions.
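One way to make that explicit is to name every intermediate value, so each step is a plain rebinding and the copy is visible at a glance (the helper names here loosely follow the article’s example but are otherwise made up):

```typescript
type Notification = { text: string; date?: string; icon?: string };

// Each helper returns a new array of new objects; the spreads make the copies explicit.
const addReadableDate = (ns: Notification[]): Notification[] =>
  ns.map((n) => ({ ...n, date: "today" }));
const addIcon = (ns: Notification[]): Notification[] =>
  ns.map((n) => ({ ...n, icon: "[msg]" }));

function renderNotifications(input: Notification[]): Notification[] {
  // A sequence of named steps: no pipe/flow machinery, and every
  // stage has an inspectable name for the debugger.
  const withDate = addReadableDate(input);
  const withIcon = addIcon(withDate);
  return withIcon;
}

const before: Notification[] = [{ text: "hi" }];
const after = renderNotifications(before);
console.log(before[0].icon); // undefined -- the input was not mutated
console.log(after[0].icon);  // "[msg]"
```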
This sort of default-immutability, while inarguably useful in many dimensions, makes it very difficult for languages to be competitive in terms of performance.
If the people you’re working with tell you there’s “too much magic” there’s too much magic. It’s a subjective position, and the other people on your team are the ones who get to decide. Stop trying to show how clever you are and write code people can read.
The opposite also exists. If you are told to write it in a for(…;…;…) loop so that others understand it and you think there are better ways to do it, it’s fine to judge that the team needs some extra maturity.
map, filter and reduce have existed for a long, long time. Using them instead of for loops that construct new arrays is not clever code.
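For instance, these two forms are equivalent, and the `map` version is hardly exotic:

```typescript
const prices = [10, 20, 30];

// A for loop that constructs a new array...
const doubledLoop: number[] = [];
for (let i = 0; i < prices.length; i++) {
  doubledLoop.push(prices[i] * 2);
}

// ...and the same transformation with map.
const doubledMap = prices.map((p) => p * 2);

console.log(doubledLoop); // [20, 40, 60]
console.log(doubledMap);  // [20, 40, 60]
```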
The maturity of the team is not a variable that can be influenced by anyone who is writing code. At least, not in any relevant timeframe. You have to write code to the level of the team you have, not the team you wish you had. Anything else is negligence.
edit: To a point, of course. And I guess that point is largely a function of language idioms, i.e. the language should be setting the expectations. If some keyword or technique is ubiquitous in the language, then it’s appropriate; if it’s a feature offered by a third-party library then you gotta get consensus.
Stop trying to show how clever you are and write code people can read.
I think that is a pretty pessimistic interpretation of why people might write code in a particular style. (Though I don’t doubt that’s why some people might do it.) But I think most of the time it’s because they got excited about something and that distinction is important.
For example, the way you are going to respond to someone who is trying to show off or lord over others is going to be different than someone who is expressing a genuine interest.
If someone on your team is taking an interest in something new it might be because they are bored. If you treat them like they are being a jerk then you are only going to make them feel worse. Instead, it’s better to redirect that energy. Maybe they need a more challenging project or they should work with an adjacent team.
That said, someone who is excited about something new might go off the deep end if they are given a lot of freedom and a blank canvas so it’s important to help guide them in constructive and practical directions. Acknowledging interests instead of shunning them can open up space for constructive criticism.
Overall, it’s important to be kind and try to work together.
That said, someone who is excited about something new might go off the deep end if they are given a lot of freedom and a blank canvas so it’s important to help guide them in constructive and practical directions.
Exactly - someone might be so excited about the new possibilities of coding in a certain way that they forget about readability in their rush to golf it down to the most elegant point-free style. But that same programmer, looking back at their own code a few months later, might cringe in horror at what they’ve wrought. If even the original author might later not understand what they wrote, it’s a terrible idea to just merge it in, even if the other team members are total blockheads (which is highly unlikely given the type of work we do), and the tech lead would be within their rights to require modifications to make it easier to understand.
This is obviously a bit of a tough topic because concise examples tend to not be super representative of cases where conflicts arise.
Honestly seeing this example the only thing I can wonder is whether this code replaced something like
function renderNotification(notification) {
  return {
    date: readableDate(notification),
    icon: icon(notification),
    notification: humanReadableDetails(notification)
  }
}
let renderedNotifications = notifications.map(renderNotification)
The biggest challenge with taking FP techniques from ML languages is separating the genuinely useful concepts from the patterns that exist in ML or other purely functional languages only because of the functional purity, not because they add expressiveness.
If your primary goal is to actually be readable to other people, then you can totally avoid swimming upstream if you take the concepts without trying to “translate” what you would write in Haskell into JS, and still get the advantages of functional thinking (namely in not having mutation soup making control flow unclear)
The article started off really well, but then let me down big time. The concerns from the senior about readability, debuggability, and performance are legitimate; I was looking forward to arguments addressing them, possibly with concessions about when FP is not as good. Unfortunately, the author doesn’t answer any of these concerns, blindly doubles down on the rightness of the FP approach, and instead of addressing the points, just gives tips for making the FP code stylistically more palatable.
Take performance for example. In a /r/rust thread last weekend, someone asked about the performance loss of using iterator methods for base85-encoding a byte array versus a loop+index approach. Their thinking was that the iterator methods would give the analyses more information about the problem and that the optimizer would then be able to generate faster code. Instead, as they found, the iterator version was 4x slower. In this case, since they were benchmarking against an existing solution, they could see the performance regression. But what if this had been original code? Would an encoder based on the slower iterator approach have been merged with the wrong belief that performance was as good as it could be? These are real concerns that senior engineers would have and an advocate of FP should be able to address them. (Incidentally, it’s possible to make the original code 60% faster by going in the opposite direction: writing more procedural code that gets the compiler to spit out fewer instructions.)
If a proponent of FP is not able to rationally discuss concerns about performance, debuggability, or readability in the team, and clings to how much better the approach is, then it’s dogma. And putting lipstick on a pig is not going to make FP be better received.
I think it’s a phase everyone needs to go through - you learn about the fantastic benefits functional programming has, and need to integrate it into your own personal style and knowledge, and hit the wall a few times to find out when not to use it. And it also requires deep insight and knowledge of the performance aspects of your particular programming environment, which only comes with experience.
Like the old adage says, “Lisp programmers know the value of everything, but the cost of nothing”.
It’s worth remembering that the same thing applies to other programming styles as well. OOP and generic metaprogramming both have fantastic advantages when they are the right tool for the job and produce a slow, unmaintainable, nightmare when they aren’t. This is why I encourage every programmer to learn at least:
Smalltalk
Haskell
Erlang
Prolog (I’d also be happy with Z3 instead these days)
C
These languages are rarely the right tool for any job but they each teach you to think in a way that gives you some useful tools for other scenarios.
There are a few comments here that bring up the question of performance. I informally benchmarked a comparison using Node 14 (V8) and an array of a million random numbers, adding 1 to each:
Immutably with a for loop: 7.782ms
Immutably with array.map(): 40.019ms
Mutably with a for loop: 3.437ms
Mutably with array.forEach(): 11.156ms
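For reference, the four variants measured above look roughly like this (a sketch of the benchmark shapes, not the original harness; absolute numbers will vary by engine, array size, and warm-up):

```typescript
const input = Array.from({ length: 1_000_000 }, () => Math.random());

// Immutably with a for loop: allocate a fresh array, fill it.
function immutableFor(xs: number[]): number[] {
  const out = new Array<number>(xs.length);
  for (let i = 0; i < xs.length; i++) out[i] = xs[i] + 1;
  return out;
}

// Immutably with array.map().
const immutableMap = (xs: number[]): number[] => xs.map((x) => x + 1);

// Mutably with a for loop: overwrite in place.
function mutableFor(xs: number[]): void {
  for (let i = 0; i < xs.length; i++) xs[i] += 1;
}

// Mutably with array.forEach().
function mutableForEach(xs: number[]): void {
  xs.forEach((_, i, arr) => { arr[i] += 1; });
}

console.time("map");
immutableMap(input);
console.timeEnd("map");
```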
Although traditional for loops are the clear winner, in most cases, the performance bottleneck in a JavaScript app is not going to come from choosing the wrong way to iterate over an array. If only that were the case, it would be easy to find and fix. Over the 10+ years I’ve been writing JavaScript, it’s usually either something expensive relating to binding a data model to the DOM, a need to cache the results of expensive and redundant calculations in memory, or, in most cases, the size of the code itself, which can be expensive to transmit, parse, and execute and can even crash some mobile phones.
I’ve worked with some senior developers coming from other languages who claim that creating and invoking higher order functions in JavaScript is some kind of extravagant computational expense. When one is working with some massive vertex buffer or something, perhaps it is, but in most cases the size of the array is not large enough to make a perceivable difference. For composition, testability, readability, and index-safety reasons (particularly when operating on nested data structures), I think the compromise advocated at the end of the article is actually a very sensible default, even if it’s prefaced by some questionable pure FP code and couched in a rebellious tone. Declaring the result of each FP-ish return value with atomic statements yields a much better debugging experience in JavaScript than either dogmatic extreme.
This is, incidentally, a thing I dislike a lot about Rust stylistically. A lot of Rust is written chaining a bunch of map()s and or_else()s and such.
It’s passable if, as in this article, it’s a straight transform. But it rarely is. There are often side effects (either unintentional bad ones, or good ones that would make life simpler, e.g. a max/min without using tuples on tuples), or implicit awaiting in the async version of this chaining (how many things are happening in parallel? Hope you did the right buffering call!), and it’s just a beast to debug. A for loop would be way more obvious in a lot of cases (if for no other reason than to crack open gdb).
In my experience there’s a lot of cases when writing Rust code where you have a simple transform and do want to write it using chained maps and or_else (or even more concisely with the ? operator). When what you’re doing isn’t actually a simple transform, it’s definitely useful to resort to a non-functional construct like a for-loop, but that’s ideally the heavier and less common thing to do.
I have no idea whether iterator chains or a for-loop is easier to optimize by the compiler - I’ve seen deep-dives into rustc compiler internals that have argued that iterator chains are actually faster at least sometimes, and I think the broader problem is that it’s difficult for a programmer to actually know which of several semantically-equivalent ways of writing a program will actually result in the most performant code.
This lines up with my Java intuition as well, although there are so many different Java styles that I don’t claim it’s a universal thing other Java programmers would agree with. If something is doing explicit for loops to transform a collection into another collection, my assumption is either: 1) it’s legacy pre-Java-8 code, written before java.util.stream existed, or 2) it’s doing complex or wonky enough logic that it doesn’t map nicely onto the standard operations, so really does need to drop down to a custom loop.
A lot of Rust is written chaining a bunch of map()s and or_else()s and such.
I used to do this a lot. The APIs are there. It’s so tempting when there’s a function like map_or_else that looks like it was designed to solve your exact problem. But I think it’s a trap, and as you start writing it often becomes more complicated than anticipated.
These days, I am more skeptical of the ‘functional’ style in Rust. I rely more on language features like match, ?, traits like From/TryFrom, and libraries like anyhow or serde. I couldn’t tell you why, but this is how I feel after using Rust for a couple years.
Yeah, we write a lot of functional heavily chained code in D, but it’s viable (IMO) because all our value types are immutable. No state, pure functions only. There’s a few keywords that are allowed to do mutation (each, sort), but we know to look out for them.
you have a positive attitude towards functional programming
your team doesn’t
Both sides rationalize in different directions:
you think challenging yourself is valuable and the pain it creates is secondary
your team only sees pain and they look for justifications to hide the fact the decision is already taken
Bottom line: both make a decision based on instinct. The difference is that one enjoys learning while the other doesn’t, and also can’t tolerate someone who wants to learn.
As the size of a group of people grows, every property of that group moves toward the mean. Classifications like that one subset of the group enjoys learning and the other doesn’t aren’t IMO productive, because they aren’t metrics that can be influenced by the people writing the code. Engineering can certainly influence hiring, but when you’re writing code, your audience is your colleagues as they are, with all of their strengths and weaknesses and everything else, and not the colleagues that you wish you had.
Quoting myself from the Reddit post on this a few days ago:
Unless you get some golden opportunity to rewrite the application, find a new team or a new job, because it really is going to degrade the cohesiveness of a code base to have many competing styles. Ideally, you’d move to a language where the functional ergonomics are easier, first-class, and encouraged; it’ll be harder to have a fragmented code base when there’s only one style in town. Nitpicking on whether or not to use lens operators is a completely different ballpark than seeing composition being questioned in a merge request like the author stated.
it really is going to degrade the cohesiveness of a code base to have many competing styles
Perhaps this is also the reason I dislike it when languages add new ill-fitting features just because other languages have them (like for instance async/await or type annotations to Python, or classes to JavaScript).
For many people, though, using pipe() and flow() at all can be a problem. They might simply have never come across this kind of thing before and not understand how it works.
This feels like the crux of the issue. When many people don’t understand how something works, that’s not a problem with those people, it’s a problem with the thing. You have to speak to folks where they are, not where you want them to be. And in that framing, you have to understand that code is not an equation, or a clever application of mathematical principles — it’s a recipe, followed step-by-step, by human programmers with limited cognitive capacity. You gotta design to that reality.
So like
// `render` is a verb that transforms X to Y, it doesn't modify X, thus...
render(ns notifications) -> rendered_notifications // ...it returns a result
ns_with_date = add_readable_date(ns) // as should all interior operations
ns_with_icon = add_icon(ns_with_date) // (could re-use a stack var for this)
...
return rendered_form_of(ns_with_changes)
When many people don’t understand how something works, that’s not a problem with those people, it’s a problem with the thing.
Arguably, it could also be a simple matter of introducing the thing to the team once and then reaping the benefits by using it (and having your teammates use it!) where applicable. For instance, if the language or framework you’re using adds a new feature, you wouldn’t eschew it just because nobody on the team has ever used it before.
But certain things can be more difficult to read even if you do grok how it works, especially if it’s not idiomatic to the language or project you’re working with. So, of course, you’ve got to balance overall readability and maintainability with expressiveness.
Alternatively, they’re Senior because they have the magical arbitrary-years industry experience which means they come with a certain wage and title regardless of their ability or actual responsibilities/growth.
This is distressingly the case, more often these days, yes.
This is just bad functional code to be honest (and all the variations suggested in the article):
Here’s what’s bad about it:
renderNotifications :: [Notification] -> SomethingThatIHonestlyCantDiscernFromThisCode
. And in a language like Haskell, you can hover your mouse overaddIcon
in your editor and it will tell you the input and output types (specialized to that call site even whenaddIcon
itself is polymorphic) and if your types are faithful to your domain, that will give you ample clue to understand the code better.addIcon
are directly in opposition to it. Why don’t you just have anicon
here that you pass separately toformatNotification
. This way, I can interact with thaticon
in a REPL or maybe write tests for it without having to create aNotification
first. Or maybe the specificicon
here depends on the contents of the notification, in that case, why not have aiconForNotification(notification)
function with typeNotification -> Icon
, this way I can passiconForNotification(notification)
separately toformatNotification
. That said, in all likelihood, you don’t actually need the entirety of aNotification
in order to decide on theIcon
, so it’s much better to have a function that accepts that relevant subset ofNotification
only.flow
thing is also conceptually messy. The easiest way to demonstrate its messiness is to try to assign a type to it. You’ll need very advanced type system features to do that. In Haskell, the equivalent toflow(a,b,c)
would bec . b . a
, where.
has the very simple type(b -> c) -> (a -> b) -> (a -> c)
. A very good example of things getting conceptually simpler when you break them down.I think these superficial notions of functional programming are really hurting the adoption of the actual principles that functional programming aims for. It’s not that functional programmers enjoy solving puzzles in the form of writing point-free code or finding ways they can use
map
,fold
orreduce
.The actual tenets are local reasoning and composing your program from small, independently meaningful pieces that can be composed in other ways in order to support new features, isolated tests and the ad-hoc programs you build in a REPL.
Very much agree with everything you said here - it’s easy to feel all smug and superior and all your “blub” colleagues are too dumb to understand what you’re doing, but that’s a) not a productive attitude and b) makes you the jackass.
It takes actual insight to distill what makes FP useful in the context of your language/project and make your code fit in in a way that’s a net positive. I’ve very often seen people so enamoured with higher order functions etc that they try to introduce such code into, say, a Python codebase where these things stick out like a sore thumb and are less performant and less debuggable than the “ruthlessly imperative” code that Python tends to encourage.
I also know this because I was the jackass a decade or so ago, trying to force a square peg into a round hole (which was PHP, at a time when it did not yet have first-class functions). At the time there was no project lead to guide me to the right path. So in a company I’d say it’s important to work with code review and have a senior developer provide feedback on all code, and ensure the code is of a uniform quality.
No, it isn’t. These can be written as pure functions with no side effects. The benefit is that you don’t need any more context than the contents of the function to understand what it is doing. Imperative programs have to consider side-effects, and so you require substantially more mental context to understand it.
How do you model the semantics of imperative programs then?
An imperative program can be modeled as taking the whole world as input and giving a new version of the whole world as output. Which is not what happens in the functional example.
I think the point is that it pretty much is what happens in this functional example. Sure, technically we may have referential transparency, but in this example it is no more helpful than the referential transparency you get when you model imperative programs as taking the whole world as input and giving a new version of the whole world as output.
Sure, you do lose the benefits of FP if you assume that the code sample has side-effects and therefore is not doing FP.
This is a circular argument.
What? I think it’s clear there aren’t meant to be any side-effects there.
You are saying here that the functional example is mutating shared state and merely disguising this fact with function composition. If that were the case, it would not be functional.
The entire premise of functional programming is statelessness. The functions are not operating on a shared scope and mutating it in place; they are accepting one type of data, and transforming it into another form as output.
If the output of one function were routed into multiple other functions, the transformations of the child functions would not impact their siblings. These transformations could take place in parallel without worrying about race conditions.
Imperative code is stateful; mutations happen within a shared context. Non-deterministic output. Parallelism is dangerous.
Functional code is stateless. There are no mutations, only copy & transform. Deterministic output. Parallelism is easy and safe.
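A tiny JavaScript contrast of the two styles described above (function names are illustrative, not from the article):

```javascript
// Imperative, stateful: mutates the array it was handed.
const bumpInPlace = (xs) => {
  for (let i = 0; i < xs.length; i++) xs[i] += 1;
  return xs;
};

// Functional, stateless: copy-and-transform; the input is untouched.
const bumped = (xs) => xs.map((x) => x + 1);

const data = [1, 2, 3];
const result = bumped(data);
console.log(data);   // [1, 2, 3] -- unchanged
console.log(result); // [2, 3, 4]

bumpInPlace(data);
console.log(data);   // [2, 3, 4] -- the shared value was mutated in place
```

With `bumped`, two callers can transform the same input concurrently without observing each other; with `bumpInPlace`, they race.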
No, that’s not what I’m saying at all. No-one else in this thread is saying that either. Here’s the original comment from the beginning of this thread:
No-one is claiming that there’s shared state here.
The point is that we are not gaining the principal benefit of referential transparency: we are not gaining any ease of reasoning.
I’ve given ample reasons why this is not the case. Pure functions are easier to reason about than imperative functions by definition.
A function that’s named `renderNotifications` quite clearly takes notifications as input.

The main advantage of immutability is referential transparency: in its scope, a given symbol always refers to the same value.
As opposed to a functional program, where a function receives only what you give to it, not the whole world, making it easier to reason about what it can do.
If that’s a common idiom in the language, maybe. But even then, it’s not obvious that each expression returns modified copies of the input, rather than mutating the input in place. These sorts of ambiguities go away when this kind of thing is written as a sequence of imperative expressions.
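A concrete JavaScript instance of that ambiguity: `Array.prototype.sort` mutates its receiver in place, even though the call site looks just like a pure transform.

```javascript
const scores = [3, 1, 2];

// Looks like a pure transform, but sort() mutates and returns the same array.
const sorted = scores.sort();
console.log(scores === sorted); // true -- the "original" was changed too

const fresh = [3, 1, 2];
// Copy first to get value semantics; fresh stays untouched.
const sortedCopy = fresh.slice().sort();
console.log(fresh);      // [3, 1, 2]
console.log(sortedCopy); // [1, 2, 3]
```

Nothing in the expression `scores.sort()` tells the reader which of the two behaviors they’re getting; you simply have to know the API.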
This sort of default-immutability, while inarguably useful in many dimensions, makes it very difficult for languages to be competitive in terms of performance.
If the people you’re working with tell you there’s “too much magic” there’s too much magic. It’s a subjective position, and the other people on your team are the ones who get to decide. Stop trying to show how clever you are and write code people can read.
The opposite also exists. If you are told to write it in a for(…;…;…) loop so that others understand it and you think there are better ways to do it, it’s fine to judge that the team needs some extra maturity.
map, filter and reduce have existed for a long, long time. Using them instead of for loops that construct new arrays is not clever code.
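For instance, both of these say “sum the even numbers”, and the `filter`/`reduce` version is hardly exotic:

```javascript
// Loop version: builds the answer imperatively with mutable state.
const sumEvensLoop = (xs) => {
  let total = 0;
  for (const x of xs) {
    if (x % 2 === 0) total += x;
  }
  return total;
};

// filter/reduce have been standard array methods since ES5.
const sumEvens = (xs) =>
  xs.filter((x) => x % 2 === 0).reduce((sum, x) => sum + x, 0);

console.log(sumEvensLoop([1, 2, 3, 4])); // 6
console.log(sumEvens([1, 2, 3, 4]));     // 6
```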
The maturity of the team is not a variable that can be influenced by anyone who is writing code. At least, not in any relevant timeframe. You have to write code to the level of the team you have, not the team you wish you had. Anything else is negligence.
edit: To a point, of course. And I guess that point is largely a function of language idioms, i.e. the language should be setting the expectations. If some keyword or technique is ubiquitous in the language, then it’s appropriate; if it’s a feature offered by a third-party library then you gotta get consensus.
I think that is a pretty pessimistic interpretation of why people might write code in a particular style. (Though I don’t doubt that’s why some people might do it.) But I think most of the time it’s because they got excited about something and that distinction is important.
For example, the way you are going to respond to someone who is trying to show off or lord over others is going to be different than someone who is expressing a genuine interest.
If someone on your team is taking an interest in something new it might be because they are bored. If you treat them like they are being a jerk then you are only going to make them feel worse. Instead, it’s better to redirect that energy. Maybe they need a more challenging project or they should work with an adjacent team.
That said, someone who is excited about something new might go off the deep end if they are given a lot of freedom and a blank canvas so it’s important to help guide them in constructive and practical directions. Acknowledging interests instead of shunning them can open up space for constructive criticism.
Overall, it’s important to be kind and try to work together.
Exactly - someone might be so excited about the new possibilities of coding in a certain way that they forget about readability in their excitement to golf it down to the most elegant point-free style. But that same programmer, looking back at their own code after a few months, might cringe in horror at what they’ve wrought. If even the original author might not later understand what they wrote, it’s a terrible idea to just merge it in, even if the other team members are total blockheads (which is highly unlikely given the type of work we do), and the tech lead would be within their rights to require modifications to make it easier to understand.
This is obviously a bit of a tough topic because concise examples tend to not be super representative of cases where conflicts arise.
Honestly seeing this example the only thing I can wonder is whether this code replaced something like
The biggest challenge in taking FP techniques from ML languages is separating the genuinely useful concepts from the patterns that exist in ML or other functionally pure languages only because of the functional purity, not because of expressiveness.
If your primary goal is to actually be readable to other people, then you can totally avoid swimming upstream if you take the concepts without trying to “translate” what you would write in Haskell into JS, and still get the advantages of functional thinking (namely, not having mutation soup that makes control flow unclear).
The article started off real well, but then let me down big time. The concerns from the senior about readability, debuggability, and performance are legitimate; I was looking forward to those arguments, possibly with concessions about when FP is not as good. Unfortunately, the author does not answer any of these concerns, blindly doubles down on the rightness of the FP approach, and instead of addressing the points, just gives tips for making the FP code stylistically more palatable.
Take performance, for example. In a /r/rust thread last weekend, someone asked about the performance loss of using iterator methods for base85-encoding a byte array versus a loop+index approach. Their thinking was that the iterator methods would give the analyses more information about the problem and that the optimizer would then be able to generate faster code. Instead, as they found, the iterator version was 4x slower. In this case, since they were benchmarking against an existing solution, they could see the performance regression. But what if this had been original code? Would an encoder based on the slower iterator approach have been merged with the wrong belief that performance was as good as it could be? These are real concerns that senior engineers would have, and an advocate of FP should be able to address them. (Incidentally, it’s possible to make the original code 60% faster by going in the opposite direction: writing more procedural code that gets the compiler to spit out fewer instructions.)
If a proponent of FP is not able to rationally discuss concerns about performance, debuggability, or readability in the team, and clings to how much better the approach is, then it’s dogma. And putting lipstick on a pig is not going to make FP be better received.
I think it’s a phase everyone needs to go through - you learn about the fantastic benefits functional programming has, and need to integrate it into your own personal style and knowledge, and hit the wall a few times to find out when not to use it. And it also requires deep insight and knowledge of the performance aspects of your particular programming environment, which only comes with experience.
Like the old adage says, “Lisp programmers know the value of everything, but the cost of nothing”.
It’s worth remembering that the same thing applies to other programming styles as well. OOP and generic metaprogramming both have fantastic advantages when they are the right tool for the job and produce a slow, unmaintainable, nightmare when they aren’t. This is why I encourage every programmer to learn at least:
These languages are rarely the right tool for any job but they each teach you to think in a way that gives you some useful tools for other scenarios.
There are a few comments here that bring up the question of performance. I informally benchmarked a comparison using Node 14 (V8) and an array of a million random numbers, adding 1 to each:

- `array.map()`: 40.019ms
- `array.forEach()`: 11.156ms

Although traditional for loops are the clear winner, in most cases, the performance bottleneck in a JavaScript app is not going to come from choosing the wrong way to iterate over an array. If only that were the case, it would be easy to find and fix. Over the 10+ years I’ve been writing JavaScript, it’s usually either something expensive relating to binding a data model to the DOM, a need to cache the results of expensive and redundant calculations in memory, or, in most cases, the size of the code itself, which can be expensive to transmit, parse, and execute and can even crash some mobile phones.
I’ve worked with some senior developers coming from other languages who claim that creating and invoking higher order functions in JavaScript is some kind of extravagant computational expense. When one is working with some massive vertex buffer or something, perhaps it is, but in most cases the size of the array is not large enough to make a perceivable difference. For compositional, testability, readability and index safety reasons (particularly when operating on nested data structures), I think the compromise advocated at the end of the article is actually a very sensible default, even if it’s prefaced by some questionable pure FP code and couched in a rebellious tone. Declaring the result of each FP-ish return value with atomic statements yields a much better debugging experience in JavaScript than either dogmatic extreme.
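For anyone who wants to reproduce that informal benchmark, a sketch along these lines should do it (absolute timings will differ by machine and engine version; the numbers above came from Node 14):

```javascript
// A million random numbers, adding 1 to each, via three iteration styles.
const array = Array.from({ length: 1_000_000 }, () => Math.random());

console.time("map");
const viaMap = array.map((x) => x + 1);
console.timeEnd("map");

console.time("forEach");
const viaForEach = new Array(array.length);
array.forEach((x, i) => { viaForEach[i] = x + 1; });
console.timeEnd("forEach");

console.time("for");
const viaFor = new Array(array.length);
for (let i = 0; i < array.length; i++) viaFor[i] = array[i] + 1;
console.timeEnd("for");
```

Warm-up and JIT effects make single runs noisy, so treat any one-shot `console.time` comparison as a rough indicator rather than a verdict.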
This is, incidentally, a thing I dislike a lot about Rust stylistically. A lot of Rust is written chaining a bunch of map()s and or_else()s and such.
It’s passable if, even as in this article, it’s a straight transform. But it rarely is. There’s often side effects (either unintentional bad ones or good ones that’d make life simpler (eg, a max/min, without using tuples on tuples))… or implicit awaiting in async version of this chaining (how many things are happening in parallel? Hope you did the right buffering call!) and… it’s just a beast to debug. A for loop would be way more obvious in a lot of cases (if for no other reason than to crack open gdb)
In my experience there are a lot of cases when writing Rust code where you have a simple transform and do want to write it using chained maps and or_else (or even more concisely with the `?` operator). When what you’re doing isn’t actually a simple transform, it’s definitely useful to resort to a non-functional construct like a for-loop, but that’s ideally the heavier and less common thing to do.

Why would a for loop be “heavier” than a chain of transforms?
If anything, the for loop is easier to optimize by the compiler, and easier to understand by the layperson, right?
I have no idea whether iterator chains or a for-loop is easier to optimize by the compiler - I’ve seen deep-dives into rustc compiler internals that have argued that iterator chains are actually faster at least sometimes, and I think the broader problem is that it’s difficult for a programmer to actually know which of several semantically-equivalent ways of writing a program will actually result in the most performant code.
This lines up with my Java intuition as well, although there are so many different Java styles that I don’t claim it’s a universal thing other Java programmers would agree with. If something is doing explicit `for` loops to transform a collection into another collection, my assumption is either: 1) it’s legacy pre-Java-8 code, written before `java.util.stream` existed, or 2) it’s doing complex or wonky enough logic that it doesn’t map nicely onto the standard operations, so really does need to drop down to a custom loop.

I used to do this a lot. The APIs are there. It’s so tempting when there’s a function like `map_or_else` that looks like it was designed to solve your exact problem. But I think it’s a trap, and as you start writing it often becomes more complicated than anticipated.

These days, I am more skeptical of the “functional” style in Rust. I rely more on language features like `match` and `?`, traits like `From`/`TryFrom`, and libraries like `anyhow` or `serde`. I couldn’t tell you why, but this is how I feel after using Rust for a couple of years.

I agree. Chaining in a single expression is usually less readable than a sequence of independent expressions.
Yeah, we write a lot of functional, heavily chained code in D, but it’s viable (IMO) because all our value types are immutable. No state, pure functions only. There are a few keywords that are allowed to do mutation (`each`, `sort`), but we know to look out for them.

They are probably right and your code is probably bad.
Something is clear to me:
Both sides rationalize in different directions:
Bottom line: both make a decision based on instinct. The difference is that one enjoys learning while the other doesn’t, and also cannot tolerate someone who wants to learn.
As the size of a group of people grows, every property of that group moves toward the mean. Classifications like that one subset of the group enjoys learning and the other doesn’t aren’t IMO productive, because they aren’t metrics that can be influenced by the people writing the code. Engineering can certainly influence hiring, but when you’re writing code, your audience is your colleagues as they are, with all of their strengths and weaknesses and everything else, and not the colleagues that you wish you had.
Quoting myself from the Reddit post on this a few days ago:
Perhaps this is also the reason I dislike it when languages add new ill-fitting features just because other languages have them (like for instance async/await or type annotations to Python, or classes to JavaScript).
This got me curious. Does V8 or another JS engine optimize `.map` calls with fusion?
I don’t think so because it would mess up the ordering of side effects.
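You can observe the ordering that fusion would have to change. With two chained `map` calls, the first callback runs to completion over the whole array before the second starts; a fused single pass would interleave them:

```javascript
const log = [];
const result = [1, 2]
  .map((x) => { log.push(`f(${x})`); return x * 10; })
  .map((x) => { log.push(`g(${x})`); return x + 1; });

console.log(log);    // ["f(1)", "f(2)", "g(10)", "g(20)"] -- each pass finishes first
console.log(result); // [11, 21]
// A fused single pass would log f(1), g(10), f(2), g(20) instead -- observably
// different, so an engine can only fuse if it proves the callbacks are side-effect-free.
```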
No. Stream fusion is still a pretty unique capability of Haskell.
My thought exactly. Without some form of laziness, this code is really suspicious to begin with.
This feels like the crux of the issue. When many people don’t understand how something works, that’s not a problem with those people, it’s a problem with the thing. You have to speak to folks where they are, not where you want them to be. And in that framing, you have to understand that code is not an equation, or a clever application of mathematical principles — it’s a recipe, followed step-by-step, by human programmers with limited cognitive capacity. You gotta design to that reality.
So like
Arguably, it could also be a simple matter of introducing the thing to the team once and then reaping the benefits by using it (and having your teammates use it!) where applicable. For instance, if the language or framework you’re using adds a new feature, you wouldn’t eschew it just because nobody on the team has ever used it before.
But certain things can be more difficult to read even if you do grok how it works, especially if it’s not idiomatic to the language or project you’re working with. So, of course, you’ve got to balance overall readability and maintainability with expressiveness.