appliedscience.studio

I think the comment “It’s nice to travel: try the food, learn a few phrases, visit their monuments.” is more apt than it sounds: tourism shows you the surface but not always the depth underneath. His examples of translating J to Clojure show this by handling only a couple of special cases of J’s semantics. He replicated adding a scalar to a vector and a vector to a vector, but J’s `+` verb can also add multidimensional arrays:

``````
   ]x =: i. 2 3
0 1 2
3 4 5
   x + 5
5 6  7
8 9 10
   x + x
0 2  4
6 8 10
``````
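To make the gap concrete, here’s a minimal sketch in Python (used only for illustration; the article’s translations are in Clojure, and `add_arrays` is a hypothetical name) of what a shape-agnostic `+` has to do for same-shape arrays and scalars:

```python
def add_arrays(x, y):
    # Elementwise addition over nested lists of the same shape,
    # with a scalar extended to every element.
    if isinstance(x, list) and isinstance(y, list):
        return [add_arrays(a, b) for a, b in zip(x, y)]
    if isinstance(y, list):                  # scalar x + array y
        return [add_arrays(x, b) for b in y]
    if isinstance(x, list):                  # array x + scalar y
        return [add_arrays(a, y) for a in x]
    return x + y                             # two atoms

x = [[0, 1, 2], [3, 4, 5]]                   # like i. 2 3
add_arrays(x, 5)   # [[5, 6, 7], [8, 9, 10]]
add_arrays(x, x)   # [[0, 2, 4], [6, 8, 10]]
```

The point is that the recursion is uniform over any depth of nesting, not a pair of special cases for scalar+vector and vector+vector.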

But there’s a much more important thing missing here: rank. The `+` verb has rank “0 0”, meaning it operates on the individual atoms of an array. But J’s agreement rules also mean we can add a length-M vector to an `MxN` table:

``````
   0 10 + x
0  1  2
13 14 15
``````
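That default agreement can be sketched like so (Python for illustration; `add_vec_to_table` is a hypothetical helper): each item of the vector pairs with one row of the table and is extended across it:

```python
def add_vec_to_table(v, t):
    # Sketch of J's default agreement for v + t: the i-th item of the
    # length-M vector is added to every element of the i-th row of
    # the MxN table.
    return [[vi + e for e in row] for vi, row in zip(v, t)]

add_vec_to_table([0, 10], [[0, 1, 2], [3, 4, 5]])
# [[0, 1, 2], [13, 14, 15]]
```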

If we instead want to add the vector across each row (one element per column) rather than one item per row, we can modify the verb with `"1`:

``````
   0 10 20 +"1 x
0 11 22
3 14 25
``````
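For this particular case, what `+"1` does can be sketched as (Python again; `add_at_rank1` is a hypothetical name): apply the addition to each rank-1 cell, i.e. elementwise to every row:

```python
def add_at_rank1(v, t):
    # Sketch of v +"1 t for a vector and a table: the length-N vector
    # is added elementwise to each row of the MxN table.
    return [[vi + e for vi, e in zip(v, row)] for row in t]

add_at_rank1([0, 10, 20], [[0, 1, 2], [3, 4, 5]])
# [[0, 11, 22], [3, 14, 25]]
```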

This is consistent across all possible arrays. I can add an `NxM` table to an `NxMxP` tensor (a ‘brick’), or a length-N vector to the brick, or an atom to the brick, or whatever. It scales fluidly and elegantly.
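A rough Python sketch of that general rule, representing arrays as nested lists (`add_agree` is a hypothetical name, not J’s actual implementation): scalars extend, and otherwise leading-axis items pair up and the rule recurses, which covers atom+brick, vector+brick, and table+brick uniformly:

```python
def add_agree(x, y):
    # Sketch of J-style agreement for +: atoms add directly, a scalar
    # extends over the other argument, and otherwise leading-axis items
    # are paired and we recurse.
    if not isinstance(x, list) and not isinstance(y, list):
        return x + y
    if not isinstance(x, list):
        return [add_agree(x, b) for b in y]
    if not isinstance(y, list):
        return [add_agree(a, y) for a in x]
    return [add_agree(a, b) for a, b in zip(x, y)]

brick = [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]   # shape 2x2x2
table = [[0, 10], [20, 30]]                     # shape 2x2
add_agree(table, brick)
# [[[0, 1], [12, 13]], [[24, 25], [36, 37]]]
```

The same function also reproduces the vector+table and scalar+table results above without any per-shape special cases.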

The same is true for the rest of the language. His `shape` translation isn’t equivalent because it can’t handle `i. 2 3 4 5`, which should create a 4-dimensional array. The `/` adverb would reduce over the tables in a brick, unless we told it to reduce over the columns via `/"1` or the rows via `/"2`. And this isn’t getting into the other central ideas of J, things like monadic/dyadic verbs, conjunctions, gerunds, tacit programming, etc. There’s a lot more to learn from APLs than this article implies.
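For contrast, a sketch of an `i.`-like constructor that handles any number of dimensions (Python for illustration; `iota` is a hypothetical name, not J’s implementation):

```python
from itertools import count

def iota(*shape):
    # Sketch of J's i.: fill an array of the given shape with
    # 0, 1, 2, ... in row-major order, for any number of dimensions.
    it = count()
    def build(dims):
        if not dims:
            return next(it)
        return [build(dims[1:]) for _ in range(dims[0])]
    return build(list(shape))

iota(2, 3)        # [[0, 1, 2], [3, 4, 5]]
iota(2, 3, 4, 5)  # a 4-dimensional nested structure, 0 through 119
```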

One last nitpick: I checked out IDL. While it has vectorized operations, it’s not an APL, so its performance says nothing about whether “APLs have slow vector operations”. J and K both go to incredible lengths to optimize array operations in memory; see, for example, J’s special combinations.