1. 22
    1. 17

      On top of this, I don’t think JavaScript has iterator fusion, so chaining maps and filters should be less efficient than using a for loop. Using jsbenchmark:

      // setup
      let DATA = [...Array(10000).keys()]

      // 9k ops/sec
      DATA.filter((x) => x % 2 == 0).map((x) => x ** 2).filter((x) => x > 1e6)

      // 18k ops/sec
      let out = []
      for (let entry of DATA) {
        if (entry % 2 == 0) {
          let x = entry ** 2;
          if (x > 1e6) {
            out.push(x)
          }
        }
      }
      
      1. 4

        I hadn’t even thought of iterator fusion yet, thanks, interesting!

        But in many contexts the readability and succinctness of map and filter combinations do start to outweigh raw loop performance.

        1. 3

          Yes, filters after maps don’t fuse, and there’s no deforestation either, so each map is a new allocation. (While I’m sure JS engines do their best to reduce this, as it’s a very popular pattern.)
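          To make that concrete, fusion is something you can do by hand in JS: two chained maps collapse into one, trading two passes and two intermediate arrays for one (a sketch of the idea, not something the engine does for you):

          ```javascript
          const data = [1, 2, 3, 4];
          const square = (x) => x ** 2;
          const double = (x) => x * 2;

          // Unfused: each map allocates a fresh intermediate array.
          const chained = data.map(square).map(double);

          // Hand-fused: one pass, one allocation.
          const fused = data.map((x) => double(square(x)));

          console.log(chained); // [2, 8, 18, 32]
          console.log(fused);   // [2, 8, 18, 32]
          ```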

          1. 3

            It also tends to be the way people write React components. Just filter(...).map(...) your array to JSX elements.

            1. 2

              Yes to all of this, but I would also caution against over-policing filter/map in a component context on performance grounds. If it’s more readable to use a for loop, great. But if you’ve reached the point where there are so many elements that using filter/map to instantiate them as components is significantly different from using a for loop, you have a much bigger problem than iteration style. Browsers tend to get rather sickly when you reach that many components, particularly when they’re interactive.

          2. 3

            Thank you, that site is very handy

          3. 11

            For whatever it’s worth, this reduce is roughly 4-6x faster than the author’s flatten, as well as significantly easier to understand:

            list_of_lists.reduce((acc, lst) => acc.concat(lst), [])

            Though it’s still way slower than the for loop.

            And then this reduce, which is equivalent to their for loop, is just a tiny hair slower than it!

            list_of_lists.reduce((acc, lst) => { acc.push(...lst); return acc }, [])

            I suspect the difference is that [...acc, ...lst] (which has to unpack two arrays to make a third) is significantly slower than acc.push(...lst); (which only has to unpack one array and append to one array).

            So I don’t think “for loops are faster than reduces” is the relevant performance advice here, but rather “avoid allocations, they’re expensive”.

            (Though I think probably generally the performance point does hold!)
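            To make the allocation point concrete, here are the two variants side by side (a sketch; `list_of_lists` is just example data). The spread version copies everything accumulated so far on every step, so total copying is quadratic, while the push version grows a single accumulator in place:

            ```javascript
            const list_of_lists = [[1, 2], [3], [4, 5, 6]];

            // Quadratic copying: each step allocates a brand-new array
            // holding everything accumulated so far plus the next list.
            const flatSpread = list_of_lists.reduce((acc, lst) => [...acc, ...lst], []);

            // Linear copying: one accumulator, appended to in place.
            const flatPush = list_of_lists.reduce((acc, lst) => { acc.push(...lst); return acc }, []);

            console.log(flatSpread); // [1, 2, 3, 4, 5, 6]
            console.log(flatPush);   // [1, 2, 3, 4, 5, 6]
            ```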

            jsbenchmark link

            1. 2

              Ah yes I forgot about concat!

              But I do want to say that I explicitly highlight this reduce, and that it matches the for loop’s performance, in the article!

              let flat = list_of_lists.reduce((accumulator, list) => {
                  accumulator.push(...list);
                  return accumulator;
              }, []);
              

              This code indeed has about the same performance as the for loop, but it destroys the “but this code is declarative” argument. It becomes more difficult to understand what’s going on. Mutating the argument could in fact risk breaking the algorithm, and reasoning about that takes mental effort I don’t want to be spending.

              1. 2

                jeeze, I did read your article and I guess I just forgot!

                Anyway, enjoyed the article, thanks

              2. 2

                “avoid allocations, they’re expensive”

                Sadly in JavaScript the two are intrinsically linked, since the HOFs are array-based, so allocations are a built-in inefficiency of functional pipelines (on collections, using the builtins; if you rewrite the builtin operations using lazy iterators, I don’t know how well the runtimes optimise those).
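                For what rewriting the builtins with lazy iterators could look like, here’s a minimal generator sketch (`lazyFilter` and `lazyMap` are made-up names for illustration). It fuses the pipeline into a single pass with no intermediate arrays, though as noted, how well engines optimise generator-heavy code is an open question:

                ```javascript
                function* lazyFilter(iter, pred) {
                  for (const x of iter) if (pred(x)) yield x;
                }

                function* lazyMap(iter, f) {
                  for (const x of iter) yield f(x);
                }

                const data = [1, 2, 3, 4, 5, 6];

                // One pass over data; the only array allocated is the final result.
                const result = [...lazyMap(lazyFilter(data, (x) => x % 2 === 0), (x) => x ** 2)];
                console.log(result); // [4, 16, 36]
                ```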

              3. 7

                I generally recommend that developers at work use for-of loops in JS rather than map, filter, and reduce, despite being mainly a functional programmer outside of work.

                There are a few things, similar to what the post mentions:

                • map, filter, reduce are often used with short arrow functions. That’s easy enough to maintain when short, but as the function body grows over time, unnamed and untyped functions end up being passed as arguments to map, or reduce in particular.
                • The logic is usually chained, .map().filter(), when the optimal approach would be to do both in one iteration rather than two.
                • Often loops will want to be broken out of early, which is easiest to do via break. I’ve seen lots of accidental performance leaks because a developer used map().filter() when they only needed to calculate one result.
                • Sometimes .map() is used instead of .forEach, and I find it’s much cleaner to identify what is being done with each element if a normal for..of loop is used.
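                A sketch of the early-exit point, finding the first matching element (`users` is made-up example data):

                ```javascript
                const users = [
                  { id: 1, name: "Ada" },
                  { id: 2, name: "Bob" },
                  { id: 3, name: "Cat" },
                ];

                // Walks the whole array and builds an intermediate array,
                // even though only the first match is needed.
                const viaChain = users.filter((u) => u.id === 2).map((u) => u.name)[0];

                // Stops as soon as the match is found.
                let viaLoop;
                for (const u of users) {
                  if (u.id === 2) {
                    viaLoop = u.name;
                    break;
                  }
                }

                console.log(viaChain, viaLoop); // Bob Bob
                ```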

                All that being said, I also do think for..of has a naming problem. for..in is too close, too easy to mistakenly use (e.g. if you have been writing Python lately). There’s also the case of promises: if you have an array of promises, it’s more efficient to use .map with Promise.all rather than for..of, which will block on each awaited element. It’s another place where I’ve seen quick performance wins.
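                The promises point can be sketched like this, where `fetchUser` is a stand-in for any real async call:

                ```javascript
                const fetchUser = async (id) => ({ id }); // stand-in for a real async call

                async function sequential(ids) {
                  // Each await blocks before the next call even starts.
                  const out = [];
                  for (const id of ids) {
                    out.push(await fetchUser(id));
                  }
                  return out;
                }

                async function concurrent(ids) {
                  // All calls start immediately; Promise.all waits for them together.
                  return Promise.all(ids.map((id) => fetchUser(id)));
                }
                ```

                With real network calls, `concurrent` finishes in roughly the time of the slowest request rather than the sum of all of them.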

                1. 2

                  break and return from for loops are indeed another benefit I don’t discuss in this article, though I do touch upon this in the follow up.

                2. 7

                  I’ll admit, I like array methods enough to have gotten kind of obnoxious with reducers over the years. I especially preferred them when JavaScript didn’t yet have for…of. I realize now that unless you’re doing something trivial like sum, JavaScript reducers aren’t very readable for most people. Even sum can be messed up by passing an empty array and no seed value into the reducer.
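                  That empty-array footgun is easy to demonstrate: reduce without a seed throws on an empty array, while a seed of 0 makes sum behave safely:

                  ```javascript
                  const sum = (xs) => xs.reduce((a, b) => a + b, 0); // the seed makes [] safe

                  console.log(sum([1, 2, 3])); // 6
                  console.log(sum([]));        // 0

                  // Without a seed there is no value to start from, so reduce throws.
                  let threw = false;
                  try {
                    [].reduce((a, b) => a + b);
                  } catch (e) {
                    threw = e instanceof TypeError;
                  }
                  console.log(threw); // true
                  ```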

                  With that mea culpa out of the way, I posit the following counterargument: One can spend an eternity correcting the iteration style of one’s colleagues only to find that the actual performance bottlenecks are in array destructuring (more expensive than you think and used in the examples here), un-memoized functions, over-eager effects, and, of course, dependency bloat. Then, when someone adds some function calls inside a really big for loop (either because it’s getting too big to read easily or because they have to use code from an external module), the performance difference evaporates.

                  There are merits to the arguments in this article, especially the readability ones. I would just caution developers not to impose an iteration style dogma on a JavaScript codebase on performance grounds. In most cases, there aren’t nearly enough iterations for this to be the performance cost you’re looking for. If you’re going to optimize anything up front, it should be holistic, like where you do which computations, when, in what runtime, and how it all is supposed to hang together.

                  1. 2

                    I completely agree that this shouldn’t become some silly best practice, which is why I wrote:

                    In other contexts using reduce makes complex code a lot easier to implement and understand. And objects (when not used as a hash table) are really easy to combine, as long as you don’t mutate them. Even if a for loop version were a lot faster, that very well may not be worth it in your context.

                    My goal is to help develop people’s intuitions about when for is handy (when performance and readability both point in its direction). Constructing different-sized collections from other collections in particular is when for loops should be considered. It’s not to tell people to always use for loops - I very much disagree with that position!

                  2. 5

                    I think everyone goes through a phase where they try to use reduce for everything.

                    1. 4

                      No matter the language. I’m not ashamed to use filter and map where it suits me and for-loops where it doesn’t.

                      I suppose my only code that is “pure” is in languages where either map/filter is very annoying to do (e.g. PHP), or for-loops are nonexistent or suck, so I don’t use them (e.g. Clojure).

                      Kotlin is especially bad here (for me) as I’m just learning it and still write a lot of for loops like in Java by default, but most things can be neatly done with filter/map. Eh, whatever. I guess I should improve.