My feeling (though I have no substantive experience) is that with the explosion of linear algebra code driven by machine learning, the APL family of languages was best suited to express what's written in books with LaTeX.

But a number of historical, commercial, and usability reasons likely made them far less accessible and attractive.

Also, the current trends toward compile-time expressions, monads that hide data-type intricacies and their interaction with external functions, and lambda expressions are, combined, giving us the 'mathiness' that's already available in J.

I sometimes use APL as a modeling language for what I will ultimately implement in Python with numpy or pandas. It helps me to frame the problem in terms of data, representation, and notation rather than functions and classes.

Many of the APL primitives have equivalents in numpy routines, so once I have a satisfactory APL solution, porting it to Python is pretty simple. There are a few gaps though, like the power operator used in k-means. The numpy API is so large it can be hard to even know whether there’s an equivalent to some APL primitive.
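To illustrate the kind of correspondence involved, here is a small, hedged sketch of a few common APL primitives next to their numpy counterparts (the pairings are my own mapping, not an official table; note that APL's index generator ⍳ is 1-origin by default, while numpy is 0-origin):

```python
import numpy as np

x = np.array([3, 1, 4, 2, 5])

# APL  +/x  (sum reduction)        ->  x.sum()
assert x.sum() == 15

# APL  ⍳5   (index generator; 1..5 with default ⎕IO)  ->  np.arange(5), which is 0..4
assert list(np.arange(5)) == [0, 1, 2, 3, 4]

# APL  ⌽x   (reverse)              ->  np.flip(x)
assert list(np.flip(x)) == [5, 2, 4, 1, 3]

# APL  ⍋x   (grade up: indices that would sort x)  ->  np.argsort(x)
assert list(np.argsort(x)) == [1, 3, 0, 2, 4]
```

Each line pairs the APL expression (in the comment) with the numpy call that computes the same result on this array.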

This isn’t an exorbitant cost though. The missing APL function can usually be written in just a couple lines of Python code.
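As a concrete example, APL's power operator ⍣ applies a function repeatedly, and in its convergence form it iterates until a stopping condition holds, which is the pattern k-means uses to repeat the assign/update step until the centroids stop moving. numpy has no direct counterpart, but a rough analogue (my own sketch, not a standard library function) really is only a few lines of Python:

```python
def power(f, stop):
    """Rough analogue of APL's power operator with a convergence test:
    apply f repeatedly until stop(new, old) is true, then return new."""
    def iterate(x):
        while True:
            y = f(x)
            if stop(y, x):
                return y
            x = y
    return iterate

# Example: Newton's method for sqrt(2), iterated to a fixed point.
step = lambda x: (x + 2.0 / x) / 2.0
close = lambda a, b: abs(a - b) < 1e-12
sqrt2 = power(step, close)(1.0)
```

In a k-means port, `f` would be the assign-points-then-recompute-centroids step and `stop` would compare successive centroid arrays.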

Honestly, I think I like APL-style J better than APL itself. It looks much cleaner.

J has a very 'mathy' coding style in the main body of its algorithms.

By comparison, this is a Python/pandas implementation of k-means (albeit a slightly more complex variant): https://github.com/jackmaney/k-means-plus-plus-pandas/blob/master/k_means_plus_plus/cluster.py