1. 26

  2. 7

    I have been bitten by this before. My hot take is that the issue is with the sloppiness of JS: map and friends should take functions of arity 1; and calling a function with fewer arguments than it wants (with the exception of explicitly optional args) should be an error.

    1. 7

      Or the other way around: calling a function with more arguments than it expects should be an error. If map calls with (item, index, arr), then that’s fine; you’d have to write someNumbers.map((item, _index, _arr) => toReadableNumber(item)) and all would be good.
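
      The classic demonstration of that footgun (and the arity-pinning fix) is mapping parseInt, since map hands the index along as parseInt’s radix:

      ```javascript
      // Array.prototype.map calls its callback with (item, index, array),
      // so parseInt silently receives the index as its radix argument.
      const broken = ['1', '7', '11'].map(parseInt);
      // parseInt('1', 0) → 1, parseInt('7', 1) → NaN, parseInt('11', 2) → 3

      // Pinning the arity with an explicit arrow avoids the surprise:
      const fixed = ['1', '7', '11'].map(s => parseInt(s, 10));
      // [1, 7, 11]
      ```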

      1. 6

        Sadly it’s not possible to know how many arguments a function expects, because of the magic arguments object, which lets a function read whatever arguments it was passed regardless of what parameters it declared. It’s how classical JS did varargs, as I recall.
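
        A quick sketch of why declared arity is unreliable: a function’s .length reports its declared parameters, while arguments sees everything actually passed.

        ```javascript
        // Declared arity and actual arguments are two different things.
        function takesOne(a) {
          // The arguments object sees everything passed at the call site.
          return arguments.length;
        }

        takesOne.length;   // 1 — declared arity only
        takesOne(1, 2, 3); // 3 — the extras are still visible to the body
        ```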

        1. 1

          ES6 and strict mode are already backwards incompatible; they could have removed arguments too (arrow functions already don’t get their own arguments binding).

        2. 3

          I think the common case is that you want to map over the values, so that should be the most ergonomic thing to do.

          If you need the index you can do something like enumerate(arr).map(([idx, val]) => ...), and if you want a reference to the array itself you can use a closure or something.
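
          enumerate isn’t a built-in; a one-line version of the hypothetical helper described above might look like:

          ```javascript
          // Hypothetical helper: pair each value with its index, so the
          // mapping function you actually write stays at arity 1.
          const enumerate = arr => arr.map((val, idx) => [idx, val]);

          const labeled = enumerate(['a', 'b']).map(([idx, val]) => `${idx}:${val}`);
          // ['0:a', '1:b']
          ```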

        3. 4

          My hot take is that the issue is with the sloppiness of JS

          Isn’t that always the issue?

          JS was always supposed to be a sloppy language. It’s great that you can slap together 50 lines or so of sloppy whatever to make your website have an interactive widget. Probably better to do something than to not have the website render at all in the case of a mistake (I guess).

          The problem here is that JavaScript managed to escape the asylum (browser).

          1. 3

            I think the problem is that the JS committee wasn’t bolder when introducing new versions of the language (strict mode and ES6 are new backwards-incompatible versions of the language; why not be a bit bolder and fix the bigger issues?).

            And I’ve never really bought the sloppy-is-good argument. I can’t think of any problem that a permissive language solves that couldn’t be solved better with slightly better tooling or something.

            1. 3

              why not be a bit bolder and fix the bigger issues?

              At the risk of sounding like a personified meme, at what point do you just throw out JavaScript and start from scratch?

              You guys were discussing errors for passing the wrong number of parameters, but that breaks EVERYTHING. Do you just eliminate the whole concept of undefined in JavaScript? Or do you just force me to explicitly pass undefined for most of the map arguments every time? (Or whatever other flexible methods exist)

              Do you fix the super-late binding of this? Or do you at least error on accessing the global this? Or at least make the class keyword do a little more magic to prevent this-related mistakes? It’s unbelievable to me that they added the class sugar, but you still can’t pass methods as callbacks without this getting rebound. Seriously?
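
              A minimal sketch of that rebinding complaint (the class here is made up, but the behavior is standard):

              ```javascript
              class Counter {
                constructor() { this.count = 0; }
                increment() { this.count += 1; }
              }

              const c = new Counter();

              // Detaching the method loses the instance: calling cb() throws
              // a TypeError, because `this` is undefined inside class methods.
              const cb = c.increment;
              // cb(); // TypeError: Cannot read properties of undefined

              // The usual workarounds: bind, or wrap in an arrow.
              const bound = c.increment.bind(c);
              bound();
              [1, 2].forEach(() => c.increment());
              c.count; // 3
              ```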

              Do you completely throw away the standard library Date class (yes, you do)?

              Do you add real integers so we can actually go up to 64 bit ints?
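
              For what it’s worth, the precision cliff behind that complaint is easy to demonstrate, and BigInt (ES2020) has since added arbitrary-precision integers, though as a separate type rather than a fix to Number:

              ```javascript
              // Number is an IEEE-754 double: integers above 2^53 lose precision.
              const big = 2 ** 53;
              big + 1 === big; // true — 2^53 + 1 is not representable

              // BigInt arithmetic is exact, but it's a distinct type.
              const exact = 2n ** 53n + 1n;
              exact === 2n ** 53n; // false
              ```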

              Can we just call ES10 a “breaking change” and trick everyone into using Scheme instead? “Don’t worry, guys, it’s still JavaScript! It just has all these really cool parentheses now!”

              And I’ve never really bought the sloppy-is-good argument. I can’t think of any problem that a permissive language solves that couldn’t be solved better with slightly better tooling or something.

              I can’t really convince myself to buy that argument either, but it’s fairly common. I’m not a big frontend guy, so I just shrug. “Maybe it’s okay for frontend stuff?”

              But if you look at a lot of languages/systems that came into existence in the 90s or earlier, the prevailing philosophy seemed to be that crashes or errors were the worst thing you could possibly do. Look at NaN and null in places like SQL, JavaScript, ObjC, PHP, etc. I feel like programmers were the Black Knight from Monty Python and the Holy Grail and just wanted the system to keep running no matter what.

              1. 2

                I do think that the JS committee should have thrown out much more of the JS semantics, standard library and Web APIs. Having a JavaScript 2 that’s much closer to a more carefully designed language, like Python, Julia or even Scheme, would have been worth the retraining costs, I think, but that’s not a very informed opinion.

                That historical perspective about crash-aversion is interesting. I don’t remember that, probably before my time.

        4. 3

          I think the onus is on the library writer, not the user, to mitigate and communicate such changes. They’re the ones making a breaking change in a language that pervasively and idiomatically uses higher-order functions, both in everyday code and in libraries.

          The only real problem here is how do authors communicate changes to library users, especially ones that just wholesale update their NPM packages for a project. The most obvious way is to create new functions, or communicate it appropriately by bumping the major version number. (Clojure’s approach to adding/updating functionality without introducing breaking changes is a good case study.)

          1. 2

            Well the author’s point still stands, even if you ignore the JS parts and talk about other languages.

            The first thing I had to think of is Clojure, where I find myself writing

            (map #(format-whatever % a b c) x)
            

            all the time, because it’s a little verbose but not ambiguous.

            1. 2

              Yes, but implied positional calling of functions by arity is extremely common in languages that rely on higher-order functions. We have tools to handle these situations, from partial application to “fully” curried functions, to placeholders for currying like Ramda’s curryN().

              If one is versed in Functional patterns and idioms, this is just par for the course, and changing arity is a known breaking change to be communicated to users. Yes, I knooooow the average JS author isn’t writing Functionally, so this may be a tall ask, but JavaScript is also a language that relies on first class functions so it should be common knowledge for library authors that changing arity is a breaking change.

              Also, just to throw in there, the only two languages I know of that allow more than one arity for the mapping function passed to map() are JavaScript and C#, so the specific example only plays out there. (Although depending on the types, the C# compiler will cry when the second parameter is an index int and something else, like a hacky options object, is passed instead.)

              In practice, though, unit tests should capture such issues. Maybe the actual advice should be “wrap your libraries in unit tests to make sure they don’t radically change underneath you and introduce bugs.”

          2. 3

            A small nit: generally the term “callback” refers to a function that’s called back by the receiving function, often after some async operation. I would title this: “Don’t use functions as values unless…” or “Don’t pass functions to higher-order functions unless…”

            1. 3

              Interesting how being liberal in what you accept makes backwards compatibility more challenging. It makes sense because by being liberal you have increased the state space that you accept.

              1. 3

                I came to that conclusion when I realized that accepting extra parameters in an HTTP request means that every time you add a parameter to your API, it’s technically a breaking change. Someone used to be able to send that parameter and nothing would happen. Now something different happens.
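
                The hazard can be sketched without any web framework; all of the names here are hypothetical:

                ```javascript
                // v1 of an endpoint silently ignores unknown query parameters.
                function handleV1(params) {
                  return { items: ['a', 'b', 'c'] };
                }

                // v2 starts honoring a `limit` parameter that v1 ignored.
                function handleV2(params) {
                  const items = ['a', 'b', 'c'];
                  return { items: params.limit ? items.slice(0, params.limit) : items };
                }

                // A client that was already sending limit=1 got all items from
                // v1; the same request against v2 suddenly returns fewer.
                handleV1({ limit: 1 }).items.length; // 3
                handleV2({ limit: 1 }).items.length; // 1
                ```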

                1. 2

                  In the CPU world, this has caused backwards-compatibility issues before: someone puts garbage in reserved bits of a word, and when those bits later get a meaning, confusion ensues.

                  That’s why many CPUs now not only require that unused bits be zero, but also enforce it on the old CPUs, so that by the time new ones arrive, nothing is already relying on the unused bits.

                2. 1

                  Wow, is this ever an argument for strong typing!