1. 9

  2. 4

    I really want to love J. I wrote about it here.

    The long and short of my feelings is that all of the purported benefits of J could be achieved by a similar but profoundly more regular and comprehensible language, and that in practice even relative clunkers like Python and R are in fact faster for many of the tasks for which J is optimized.

    Sure, it gives one a thrill to write spectral clustering in 5 obscure lines of magic, but is that thrill worth all the hassle?

    1. 5

      Your complaints about J are pretty similar to mine, so I’ll point out that my new language BQN addresses most of them. For example here’s the documentation on lexical scoping and closures, and it has sane list literals as well. It definitely remains in the APL universe, with only 1 or 2 function arguments allowed and a second level of functions called modifiers (but functions and modifiers are first class). And it promotes tacit programming, which you sound kind of ambivalent about (isn’t everyone?). But the supported combinators and the glyphs they use are different, and several users coming from other array languages have said that BQN feels like a big improvement or even a completely different experience in this area. The implementation is still kind of immature, with some missing features and less than top-tier performance (although it will thrash other array languages at scalar computation due to bytecode compilation and primitive JIT), but it’s definitely suitable for scripting and medium-sized data mangling at this point.

      1. 1

        BQN is beautiful.

        1. 1

          Have you considered writing a BQN transpiler to R or Python so people can use the available libraries? Almost everything I do as a data scientist involves shuffling and otherwise preparing data for use with already implemented algorithms. The fact that J is missing a lot of library stuff (though it does have an OK R interface) is a big part of the reason I don’t use it.

          But I would also consider using J to be really anti-social.

          1. 1

            Yes, and in fact I did most of a NumPy embedding a while ago. I decided I should probably get the compiler more stable before making a bunch of embedded versions that I have to modify whenever the interface changes. Just yesterday I started seriously working on header support, which is the last big missing feature. After that I’ll look at updating the NumPy version and implementing in Julia. R is probably possible as well. But there are always a lot of things to do so it could be a while before I get to this.

        2. 3

          I find it interesting that your complaints are very different from mine! Some overlap, but I didn’t consider things like the block syntax.

          > Sure, it gives one a thrill to write spectral clustering in 5 obscure lines of magic, but is that thrill worth all the hassle?

          IMO the best “practical use” of J is as a desktop supercalculator. It’s for when I need to know the size of a symmetry reduction or the distances between roots of a polynomial. I am continually surprised at how often really weird use cases come up for me.
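
          For instance, something like this covers the polynomial case (a rough sketch; monadic p. takes ascending coefficients and returns the leading coefficient boxed with the roots):

          roots =: > 1 { p. _6 11 _6 1   NB. roots of x^3 - 6x^2 + 11x - 6
          | -/~ roots                    NB. table of pairwise distances between the roots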

          1. 2

            Out of curiosity, what do you do for a living? When I need to desktop calculate I typically use Emacs Lisp or R.

            1. 2

              I’m a formal methods consultant.

          2. 1

            I love J, and thought your article was a good and thoughtful critique.

            The only part that felt unfair to me was the discussion of double-bonded verbs. Calling a bonded verb dyadically is a special shortcut form of the power conjunction (^:):

            x m&v y ↔ m&v^:x y
            x u&n y ↔ u&n^:x y
            

            (from https://www.jsoftware.com/help/dictionary/d630n.htm)

            This explains the behavior of all the weird examples. As such, there’s no question of them seeming right or goofy, and no hope of deciphering them from first principles: either you’re aware of the shortcut or you’re not.
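
            A quick concrete pair (not from that page, just to illustrate) makes the equivalence easy to check:

            2 (1&+) 5        NB. double-bonded verb called dyadically: adds 1 twice, giving 7
            (1&+)^:2 ] 5     NB. the equivalent power form, also 7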