1. 41

  2. 12

    Dijkstra may have seen this too. Look at the tail end of this quote.

    “APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums.” - Edsger W. Dijkstra

      1. 11

        Dijkstra is the battle rapper of computer scientists

        1. 2

          Imagining him battling Alan Perlis.

    1. 6

      Smalltalk has a tool where methods can be found using example values. And Smalltalk has many other tools for discovery. Other languages could provide similar capabilities, but rarely do.

      1. 6

        Smalltalk proper doesn’t, though Squeak does (and I assume Pharo, also). But that’s largely a toy: MethodFinder only goes through a list of pre-selected methods to avoid having it accidentally run e.g. Object>>halt or Smalltalk class>>saveAndQuit:, so it’s generally only something I suggest to people who are very new to Squeak. And even there, I hesitate. It’s a pre-screened list, so you may not discover everything and end up reimplementing a method that already exists. And some things are just not discoverable through that interface: there is no conceivable way of describing “I want six random numbers between 17 and 34,” even though Squeak has methods for that.
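
        For anyone who hasn’t used it, the core mechanism of a MethodFinder-style tool is simple enough to sketch. Here it is in Python, purely as an illustration; the candidate list below is hypothetical, whereas Squeak curates its real list precisely to keep things like Object>>halt from ever running:

```python
# Minimal sketch of MethodFinder-style search-by-example (illustrative only).
# Given example arguments and the result you want, try every callable on a
# pre-screened safe list and report the ones that produce that result.
import operator

SAFE_CANDIDATES = {
    "abs": abs,
    "max": max,
    "min": min,
    "operator.add": operator.add,
    "operator.mul": operator.mul,
}

def find_methods(args, expected):
    """Names of pre-screened callables f such that f(*args) == expected."""
    matches = []
    for name, func in SAFE_CANDIDATES.items():
        try:
            if func(*args) == expected:
                matches.append(name)
        except Exception:
            continue  # wrong arity or type for these arguments: not a match
    return matches

print(find_methods((3, 4), 7))   # → ['operator.add']
print(find_methods((-5,), 5))    # → ['abs']
```

        The pre-screened list is the whole story: anything not on it can never be discovered this way, which is exactly the limitation described below.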

        The real strength of Smalltalk is the same as the one hwayne identified in the context of Python: the standard library is well-organized, even grouping methods within a given class into useful categories. Combined with its excellent IDE, you have a real likelihood of discovering what functionality exists and finding an example of practical usage within the image.

        MethodFinder can be a part of that discoverability, in a very narrow way, but I really feel as if it’s more an antipattern and a crutch than a genuinely useful tool.

        1. -7

          Tedious Lobster debate straight ahead. I’d rather quit Lobsters than go through such a thing one more time. Bye.

          1. 11

            I honestly wasn’t looking for a debate; I was just assuming that most people here wouldn’t have used MethodFinder or known how it worked, and I didn’t want to paint it as more than it was. I’m sorry that you’ve decided to leave the community, and I hope you return.

            1. 7

              Don’t blame yourself too much; people who leave like that have likely been on the edge of leaving for quite some time. This was likely the straw that broke the camel’s back.

              1. 7

                His positions were outliers in a lot of the discussions which people either disagreed with or lacked historical information to fully understand. That he brought those here was a good thing. That they’d get argued with a lot was inevitable since he was going against the flow. Watching it, I was concerned he might think of leaving some time ago since the comments rarely got much positive feedback. It definitely built up over time.

              2. 2

                Don’t mind him. That was a good comment. I haven’t touched Squeak in years, but it was one of the very first apps I made an RPM package for, back around Red Hat 6.0. In fact, there’s even still a dead link to it on their wiki.

                I didn’t play around with it much, but I do remember it had that API browser. Comments like yours are important because you break down the thing discussed, talk about your experiences with it, its strengths and weaknesses, etc. It’s not just semantics either, as Squeak and Smalltalk are two different things.

              3. 5

                Well, I’ll miss having you here. Thanks for the things you taught me about Smalltalk.

          2. 6

            k/q don’t have (f g h) and have fewer “words” so it’s easy to take a look and see if it’s more discoverable.

            k)mode:*>#:'=:
            q)mode:{first idesc count each group x}
            

            but having 200 primitives shouldn’t be a problem, it’s just a lot to (re)learn. 眾數 (“mode”)!

            Anyway.

            If I need the mode, why wouldn’t I just write:

            y{~{.\:+/y=/y
            

            That seems perfectly clear to me, and doesn’t require fiddling with “expert level” at all.

            I think stuff like this:

            (i. >./) & (#/.~) { ~.
            

            is one of those things where if you needed to compute a trillion modes a day, you might try to think about how you reduce the time and space of the operation and so it’s worth spending forty minutes on it.

            btw, this is faster: ~. {~ [: (i. >./) #/.~
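
            For comparison, here’s the same group / count-each / pick-the-largest dataflow as those one-liners, spelled out in Python with the standard library (an illustrative translation, not code from the thread):

```python
# Mode via the same dataflow as the q/J one-liners:
# group the values, count each group, take the most frequent.
from collections import Counter

def mode(xs):
    """First most-frequent element of xs."""
    counts = Counter(xs)                # group and count-each in one step
    return max(counts, key=counts.get)  # "index of the max" over the counts

print(mode([17, 34, 17, 21, 17, 34]))  # → 17
```

            Counter collapses the group and count-each steps into one call; the verbosity difference with the one-liners is mostly in how much of that pipeline the language names for you.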

            1. 6

              As a data point: While reading the blog post, I tried writing it in k, and came up with exactly the same answer you did (albeit with slight syntactic changes for kona). It only took a few minutes.

              1. 2

                It doesn’t really make sense to compare APLs to logographic languages. Natural languages are optimized for single-pass parsing.

                A more apt comparison (and the most popular one, I believe) might be between the notation of APL and mathematical notation. But even then most mathematics can be read without the insane backtracking and potential for error that APL presents.

                1. 2

                  I don’t agree that “natural” languages are optimised for anything: “L’invité qu’il a dit des folies”, or “The horse raced past the barn fell“ or even “为了环保运动必须深化” all demonstrate how sometimes a lack of clarity early in the sentence, accidentally or not, can force the reader to backtrack in a “natural” language.

                  I understand APL this way: from left to right, top to bottom. If you understand it a different way it may be helpful to explain to people how you are (or came to be) successful in APL instead of trying to mix up my metaphors.

                  1. 2

                    Certainly you can construct natural language sentences that require backtracking. But my statements about the features of natural languages are uncontroversial (ask any linguist) and backed up by significant evidence. In fact, you can easily observe this yourself through the following experiment: take a program in an APL and a sentence in Chinese or even English and see how much someone can understand when you black out a few characters/words.

                    So I’m a bit surprised as to why you seem to be arguing whether natural languages are like that instead of whether APL is like that.

                    Comparing APLs to Chinese or other logographic languages is disingenuous because it suggests that Chinese, a language used by billions for thousands of years, shares the flaws of the APLs, a niche language used by at most hundreds for a few decades; it attempts to link a dislike of the design of the APLs with a dislike of logographic languages in general. As I’ve said, this comparison is superficial, and it is much easier to understand the flaws and benefits of APLs through a less synthetic comparison, e.g. with mathematical notation or other programming languages. It’s folly to “mix up the metaphors” of natural language and computer language.

                  2. 1

                    Based on this, would it make sense to make an APL variant where you still type the syntax like in the ASCII variants, but you apply a rich formatting environment to it? Do you think there could possibly be better layout techniques for reading the code?

                2. 4

                  This is a problem with any language or library. You need to know what is available in the Python library and what it does to use it effectively. You need to know the binaries in /bin to use the shell effectively. And so on.

                  It’s just like learning a human language: Until you use the vocabulary enough to get comfortable, you are going to feel lost, and spend a lot of time getting friendly with a dictionary.

                  1. 8

                    This is a problem with any language or library. You need to know what is available in the Python library and what it does to use it effectively. You need to know the binaries in /bin to use the shell effectively. And so on.

                    I think this probably misses the point. The Python solution was able to compose a couple of very general, elementary problem-solving mechanisms (iteration, comparison), of which Python has a very limited vocabulary (there are maybe a half dozen control constructs, total?), to quickly arrive at a solution (albeit a limited, non-parallel one that’s intuitive and perhaps 8 times out of 10 does the job). The standard library might offer an implementation already, but you could get a working solution without crawling through the docs (and you could probably guess the name anyway).
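
                    Concretely, the sort of naive iterate-and-compare solution being described might look like this (an illustrative reconstruction, not the post’s actual code):

```python
# Mode built only from iteration and comparison: no library lookup needed.
def mode(xs):
    best, best_count = None, 0
    for candidate in xs:
        count = 0
        for x in xs:          # O(n^2), but intuitive and quick to write
            if x == candidate:
                count += 1
        if count > best_count:
            best, best_count = candidate, count
    return best

print(mode([3, 1, 3, 2, 3, 1]))  # → 3
```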

                    J required, owing to its overwhelming emphasis on efficient whole-array transformation, selection from a much much much larger and far more specialized set of often esoteric constructs/transformations, all of which have unguessable symbolic representations. The documentation offers little to aid this search, complicating a task that was already quite a bit less intuitive than Python’s naive iteration approach.

                    1. 5

                      For a long time now, I’ve felt that the APL languages were going to have a renaissance. Our problems aren’t getting any simpler, so raising the level of abstraction is the way forward.

                      The emphasis on whole-array transformation seems like a hindrance, but imagine for a second that RAM becomes so cheap that you simply load all of your enterprise’s data into memory on a single machine. How many terabytes is that? Whole-array looks very practical then.

                      For what it’s worth, there is a scheme to the symbols in J. You can read meaning into their shapes. Look at grade-up and grade-down. They are like little pictures.

                      J annoys me with its fork and hook forms. That goes past the realm of readability for me. Q is better; it uses words.

                      What I’d like to see is the entire operator set of, say, J brought into mainstream languages as a library. Rich operations raising the level of abstraction are likely more important than syntax.
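
                      As a rough sketch of that idea, here are a few of J’s verbs rendered as ordinary functions (Python here purely for illustration; the function names are my own inventions, though the J glyphs in the comments are the real ones):

```python
# A few J verbs as ordinary library functions (names are my own;
# the glyphs in the comments are J's actual spellings).
def grade_up(xs):        # /:   indices that would sort xs ascending
    return sorted(range(len(xs)), key=lambda i: xs[i])

def grade_down(xs):      # \:   indices that would sort xs descending
    return sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)

def nub(xs):             # ~.   unique items, first-occurrence order
    seen = []
    for x in xs:
        if x not in seen:
            seen.append(x)
    return seen

def tally_by_self(xs):   # #/.~ count of each distinct item, in nub order
    return [xs.count(x) for x in nub(xs)]

print(grade_up([30, 10, 20]))       # → [1, 2, 0]
print(tally_by_self([3, 1, 3, 2]))  # → [2, 1, 1]
```

                      Whether a host language’s syntax lets these compose as tersely as J’s trains do is a separate question, which is roughly where the fork/hook complaint comes in.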

                      1. 5

                        J annoys me with its fork and hook forms. That goes past the realm of readability for me.

                        In textual form I agree. The IDE has a nice graphical visualization of the dataflow that I find useful in ‘reading’ that kind of composition in J code though. I’ve been tempted to experiment with writing J in a full-on visual dataflow style (a la Max/MSP) instead of only visualizing it that way.

                        1. 3

                          I find it a lot easier to write J code by hand before copying it to a computer. It’s easier to map out the data flow when you can lay it out in 2D.

                          1. 1

                            Have you looked at Q yet?

                          2. 1

                            That would be a very useful comparison of the usability of a compact text syntax vs visual language. I imagine that discoverability is better with a visual language as by definition it is interactive.

                          3. 2

                            I started implementing the operators in Scala once - the syntax is flexible enough that you can actually get pretty close, and a lot of them are the kind of high-level operations that either already exist in Scala or are pretty easy to implement. But having them be just a library makes the problem described in the article much, much worse - you virtually have to memorize all the operators to get anything done, which is bad enough when they’re part of the language, but much worse when they’re a library that not all code uses and that you don’t use often enough to really get them into your head.

                            1. 1

                              It could just be a beginning stage problem.

                              1. 1

                                It could, but the Scala community has actually drawn back from incremental steps in that direction, I think rightly - e.g. scalaz 7 removed many symbolic operators in favour of functions with textual names. Maybe there’s some magic threshold that would make the symbols OK once we passed it, but I can only spend so much time exploring the possibility without seeing an improvement.

                                1. 1

                                  Oh. I’d definitely go with word names. Q over J. To me, the operations are the important bit.

                        2. 5

                          The difference is that Python already is organized by the standard library, and has cookbooks, and doesn’t involve holistically thinking of the entire transformation at once. So it intrinsically has the problem to a lesser degree than APLs do and also has taken steps to fix it, too.

                          1. 2

                            How easy is it to refactor APL or J code? The reason I ask is that I have the same problem with the ramdajs library in JavaScript, which my team uses as a foundational library. It has around 150 functions, I don’t remember what they all do and I certainly don’t remember what they’re all called, so I often write things in a combination of imperative JS and ramda then look for parts to refactor. I’m interested to hear whether that’s possible with APL, or whether you have to know APL before you can write APL.

                        3. 3

                          “The :; dyad, which partitions y based on a specified Mealy machine.” - you got to give it props.

                          1. 2

                            It sounds like the problem is that the documentation doesn’t properly organize the functions. You should never have to search through EVERY tool if the documentation is written correctly.

                            1. 2

                              This captures pretty well what my problem with J has been too. I wonder to what extent it’s largely just the size of community though. Judging by the revision history, the NuVoc reference pages were almost entirely written by 4 people. Given a few more, you could imagine a more extensive set of resources on the wiki or elsewhere. It also lacks as a result the most common way people pick up useful code snippets and partial solutions in other languages, an extensive set of StackOverflow answers.

                              1. 3

                                I wonder to what extent it’s largely just the size of community though. Judging by the revision history, the NuVoc reference pages were almost entirely written by 4 people.

                                I think that summarizes the problem quite well: the language itself is great, but the support network isn’t yet there. It’s compounded by the fact you really need to know some idioms to build complex sentences with, a problem most other languages don’t have. I might try to whip up some verb classifications.

                                1. 4

                                  This honestly is reminding me of people coming to FORTH. We’ve tried to improve the situation in Factor by having namespaces and examples, but I’m still used to those new to the language writing amazingly unidiomatic code because they didn’t know that a word existed in some vocabulary somewhere. I wonder if there’s a generic solution to this type of problem with “weird” languages.

                                  1. 4

                                    Cookbooks are a great help for this.

                                    Lots of problems end up needing to be solved. Sure, there’s stuff like finding the mean. But I’m thinking maybe one level up. Parsing out tuples from a string in a file efficiently. Having a basic state machine simulating… Bank accounts maybe? Making some circles on a screen to visualize some data

                                    Some tools are for different domains than others but most of us are aiming to ship things for “real people” so having proper end to end examples can show new people good ways to organize their code.

                                    PureScript by Example does this pretty well, with “real” motivating examples the entire time, and shows how to take advantage of PureScript’s goodness even with JavaScript being used for heavy lifting.

                                    I learned so much watching other people code jQuery stuff too. Real examples. They show larger architecture patterns that type signatures can rarely capture.

                              2. 1

                                This kind of argument against the APL family always feels odd. You can say APL lacks a modern FFI. You can say APL lacks variable scopes. You can say APL’s workspace feels arcane. But the symbols/functions/operators?

                                If your native tongue is English, how long do you expect to spend writing a sentence in Farsi using only a dictionary?

                                1. 5

                                  I think this argument is limited to J: k/q and other APLs like Dyalog don’t have this problem and the author admits they don’t know those languages. At the end of the day the author is calling for more/better documentation, and I don’t think either of those things would make J worse.

                                  See, most people learn to write by reading lots of things.

                                  Programmers however, learn to program by programming lots of things.

                                  The idea is that after 5-10 years of programming a certain way, your brain changes enough so you start thinking that way.

                                  To that end, it makes sense to try to find ways to make it easier for newbies to learn how to program lots of things.

                                  Now.

                                  It’s difficult to look up words in a Chinese dictionary.

                                  Most Chinese “words” are made up of a few radicals.

                                  If you can identify them, you can look up the number of strokes in the radical on a table to get an index. You combine all of the indexes together to find what page to look up.

                                  If you can’t identify them, you can try looking it up by stroke count. There are lots of “words” with twelve strokes, so this can take a lot of time.

                                  It might not be possible to improve on either of these (there’s also the four-corner method, but it’s frustrating to learn), which puts Chinese at a disadvantage in recruitment.

                                  Showing people things they can do and say in J that are so much shorter and faster, and with fewer bugs, than in other languages works only so far as other programmers believe those things are important, and I don’t think most programmers believe they are important enough to be worth an extra 5-10 years of learning J or Chinese.

                                  1. 1

                                    Saw this in another comment, but it doesn’t really make sense to compare J to Chinese. Natural languages are optimized for single-pass parsing and have lots of redundancy for error correction. Consider how much worse your understanding becomes when you don’t know / misread one logogram vs. even one character in a J program.

                                    1. 1

                                      It’s about the same as when I read Chinese. Some of the reasons are in my other comment.

                                      I found this thread of people talking in music absolutely fascinating.

                                  2. 4

                                    I can’t afford to completely stop being productive at work, and I don’t want to spend personal time learning a new language. The kind of code I write today (in Scala) would look completely alien to the me who first picked up Scala 8 years ago, but, crucially, I’ve been able to reach that point incrementally, step by step, spending work time only and remaining productive all the time - even 8 years ago I was able to be as productive in Scala as I had been in Java, and close to as productive as I had been in Python. (And this is why I learnt Scala rather than Haskell - even if the kind of code I’m now writing would be easier in Haskell, I had no way to get there)

                                    Maybe it’s not possible to make a language that has the advantages of APL and the gradual on-ramp of Scala, but if so I can’t see any way most programmers are ever going to adopt such a language, however good the end state may be.

                                    1. 4

                                      Shops that use less mainstream languages tend to have language courses as part of the onboarding. Advanced places using mainstream languages also tend to have this, the extreme being Google. I think if using some language is truly as valuable as the company claims, they are likely to support training during work hours.

                                      1. 1

                                        Hmm. The place I currently work actually has (or at least had at some point) a significant APL contingent, but I’ve never seen a course offered. Will keep my eyes open.

                                      2. 1

                                        I think the Jane St position on learning a new language is a nice counterpoint, with OCaml being the language. One point they made (IIRC) is that people who could pick it up in onboarding were likely better programmers than those only knowing, say, Java. Supporting your side are all the languages that are successful because they built on the syntax, basic capabilities, or runtimes of other ones. Scala is a great example of the latter for Java. Go might be another, since it’s very approachable for developers of imperative languages.

                                        So, if it’s APL, one might either make the onboarding easy by going all in, make it a framework/library for an existing language that’s very popular, or make it a DSL for a language supporting macros. I find the last option most interesting, with Julia being the leading candidate since it already targets the numerical sub-field with seamless integration of the C and Python code they already use. I’d default to doubting APL would have much productivity over an APL DSL in Julia, since developers will spend more time thinking about and operating on data than writing the code itself. Julia’s code would also be short, being dynamic.

                                        That’s just my speculation on this. I’d only learn APL to learn the mindset and some useful operators for a DSL or library. We shouldn’t need to go all in with it given the current state of programming tooling.