1. 1

    On a tangent note: declarative UI in Red language. Docs, 7GUIs benchmark.

    1. 1

      Pity it doesn’t mention Rebol or Red, which, by virtue of homoiconicity, are their own data format, with literal forms for things like e-mails, IP addresses, URLs, dates, hashtags and @references. Red also features the Redbin format, to which its values can be serialized.

      1. 5

        Nice intro.

        I may not use J in my everyday programming, but I’m glad I learned it and think it made me a better programmer.

        This was my experience too, though I continue to use J almost daily even if not professionally.

        Far more than any other language, including Haskell, learning J felt like dropping acid and blasting away everything I thought I knew about programming.

        It’s the third paradigm, and unfortunately the least well-known:

        1. Procedural
        2. Functional
        3. Array

        It really is a whole new mindset.

        1. 7

          How did you learn it? What kind of projects would you recommend to learn such a language to discover its benefits?

          1. 4

            Personally, I stuck to number-crunching a-la Project Euler and some Advent of Code challenges.

            Also tried to use J for bio-signal processing (a set of student labs built around calculating various metrics of heart-rate variability), but after looking at the solution, the professor went “Nuh-uh” and I had to switch to R, which IMO wasn’t that bad of a compromise in terms of paradigm shift.


            The most interesting point that I don’t see being addressed in most of the J write-ups is how one can use it to create ad-hoc calculi of various sorts, even problem-specific DSLs. Function composition, monadic/dyadic distinction and tacit programming surely allow for that, yet the “how” of it isn’t widely discussed.

            1. 1

              I had to switch to R, which IMO wasn’t that bad of a compromise in terms of paradigm shift.

              That’s a shame: I think R is basically the same as numpy - no real paradigm shift, just more/different functions, so I hope you’ll give Iverson-ish languages another try, because by focusing on the array-ness you might’ve missed something incredible.

              in most of the J write-ups is how one can use it to create ad-hoc calculi of various sorts, even problem-specific DSLs

              I’m very curious to understand better what you’re looking for here. What’s a kind of problem you’d like to see?

              1. 2

                I think R is basically the same as numpy

                That’s true, and NumPy is basically an ugly cousin of APL ;) What I rather meant is that sacrificing J’s expressiveness was a bearable trade-off in the context of my task, and that in the end I didn’t really need most of it: just reusing R’s libraries and mashing matrices together sufficed. Productivity and acceptance by peers over cleverness and personal satisfaction.

                I’m very curious to understand better what you’re looking for here

                “Notation as a tool of thought”, I think. Is J a means to an end as it is given, or does it permit/encourage building linguistic abstractions on top?

                APL is often praised for its lyricality, and J specifically describes its parts of speech as nouns, verbs, adverbs, and conjunctions; but does it end there? In J, there are ways to assign obverses and adverses to a verb, to define its monadic and dyadic versions, to create adverbs with conjunctions; and then some tree-manipulating primitives, even ;: for tokenizing, and boxes that permit manipulating J code as data. Is any of that intended for meta-programming, AST building, and ultimately DSL creation?

                Can an expert in their domain take J and carefully design a set of composable primitives for solving specific problems? If so, what is the process behind that, and what does the end result look like? Is it a notation that reads and writes like poetry, or is it a prosaic library? Is it even normal practice in the APL family?

                So, I guess what I am looking for is a walkthrough of the points I raised, with some interesting problem as its subject; both the mindset and its execution. I’m too much of a puny human to grasp Aaron Hsu’s thesis on the compiler he built with Dyalog APL, but its basic premise is roughly the same as what I have in mind. His live stream on the compiler’s design and architecture is also worth watching.

                1. 3

                  Is J a means to an end as it is given, or does it permit/encourage building linguistic abstractions on top?

                  Permit? Probably.

                  Encourage? I’ve not seen this personally. In fact, the ethos of J/APL seems the opposite of this. The big promise of the language is that with a single set of ~50 primitives you can elegantly solve the vast majority of practical problems. So there’s no need for bespoke DSLs for your different applications.

                  As a side note, my experience with Ruby has convinced me that such DSLs are usually a mistake.

                  1. 3

                    A case in point: in the above-mentioned talk from Aaron Hsu, he shows how he defined PEG parser combinators and used them to write parsing expressions in a seemingly declarative manner.
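                    For flavor, here is a minimal PEG-style combinator sketch in plain Python (hypothetical names, nothing to do with Hsu’s actual APL code): a parser is a function from (text, position) to a new position or None, and seq/alt compose them into grammar-like expressions.

```python
def lit(s):
    """Match the literal string s at position i; return the new position or None."""
    return lambda text, i: i + len(s) if text.startswith(s, i) else None

def seq(*ps):
    """Match the parsers ps one after another (PEG sequence)."""
    def run(text, i):
        for p in ps:
            i = p(text, i)
            if i is None:
                return None
        return i
    return run

def alt(*ps):
    """Try the parsers ps in order, keep the first that matches (PEG ordered choice)."""
    def run(text, i):
        for p in ps:
            j = p(text, i)
            if j is not None:
                return j
        return None
    return run

# a declarative-looking rule: greeting <- ("hello" / "hi") " world"
greeting = seq(alt(lit("hello"), lit("hi")), lit(" world"))
```

                    greeting("hello world", 0) returns the index just past the match; a failed parse returns None.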

                    Surely that doesn’t qualify as a DSL, but the ability to re-define a problem in terms of these 50+ vector-manipulating primitives is what has always fascinated me in APL. There’s an inherent, almost visceral “spatiality” to this act. At times it feels that the more terse your code is, the closer it is to some sort of underlying ur-computation that the world is built of.

                    Perhaps my original question is not about how to extend the language towards the problem, but how to ground the problem in the notation. Or perhaps how to design a notation with APL-ish qualities that will influence how I think with it and create in it?


                    The APL family posits a certain view of programming, but how do you see everything thru its lens? Is it just linear algebra with mathematical know-how? Practice?

                    An example from my experience: in the first semester of DSP I really struggled with the concept of convolution; I simply couldn’t grasp it on an intuitive level. At least in my native language “convolution” sounds more like “folding” or even “coagulating”, but what folds with what? What is the physical meaning? Implementing it in a C-level language only obscured the question behind semantic noise, and by-the-book definitions didn’t help either.

                    And then one day I just typed [:+//.*/ into the J console and it clicked. In a sense, if two signals are threads, then their convolution is a helical braid, the one you create by coiling one thread over another: a sum +/ of oblique /. diagonals formed by multiplicative * intersections.
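                    The same helical picture can be transcribed into plain Python (helix_conv is a hypothetical name; J’s */ builds the multiplication table and +//. sums its oblique diagonals):

```python
def helix_conv(a, b):
    """Discrete convolution as sums over the oblique (anti-)diagonals
    of the multiplication table: a transcription of J's [: +//. */ ."""
    table = [[x * y for y in b] for x in a]      # */  : multiplication table
    n = len(a) + len(b) - 1
    # +//. : sum every oblique diagonal, i.e. all table[i][j] with i + j == k
    return [sum(table[i][k - i] for i in range(len(a)) if 0 <= k - i < len(b))
            for k in range(n)]
```

                    So helix_conv([1, 2, 3], [4, 5, 6]) gives [4, 13, 28, 27, 18], matching the textbook definition.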

                    There’s a scene in “Matrix: Revolutions” where blinded Neo says to Smith in Bane’s body “I can see you” — that’s the kind of feeling I got from this experience.

                    1. 2

                      The APL family posits a certain view of programming, but how do you see everything thru its lens? Is it just linear algebra with mathematical know-how? Practice?

                      Practical answer: doing many problems, getting it “wrong”, and then seeing a more natural way to do it. It seems like you already know this though, based on your very nice example.

                      Larger answer: I’ve been meaning to write about this, but tl;dr… two words come to mind: geometrical and holistic. Think of the Game of Life in APL. In J/APL thinking, you don’t view the problem from the point of view of individual cells. Instead, you look at the whole plane at once, and imagine eight copies of that plane stacked on top of one another – one plane for each directional shift. Then the neighbor count is simply the +/ of those planes.

                      This pattern appears a lot. You take the view from above, look at “everything at once,” and there’s a natural and simple way to express that with array computations. It doesn’t always work, though. Some problems aren’t a good fit for array thinking, but a surprising number are.
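                      The stacked-planes view translates almost word for word into plain Python (life_step is a hypothetical name; a toroidal board is assumed): build the eight shifted copies, sum them, and apply the rule to the whole grid at once.

```python
def life_step(grid):
    """One Game of Life step, whole-board-at-once: neighbor counts are
    the sum of eight cyclically shifted copies of the plane."""
    h, w = len(grid), len(grid[0])

    def shift(dy, dx):
        # the plane shifted by (dy, dx), wrapping around the edges
        return [[grid[(y - dy) % h][(x - dx) % w] for x in range(w)]
                for y in range(h)]

    planes = [shift(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dy, dx) != (0, 0)]
    counts = [[sum(p[y][x] for p in planes) for x in range(w)]
              for y in range(h)]
    # birth on 3 neighbors, survival on 2 or 3
    return [[1 if counts[y][x] == 3 or (counts[y][x] == 2 and grid[y][x])
             else 0 for x in range(w)] for y in range(h)]
```

                      A horizontal blinker flips to a vertical one after a single step, as expected.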

                      1. 3

                        Yes, the local action (counting neighbors) is applied on the board as a whole, not just to a specific cell; I even have a J one-liner stashed for that somewhere.† Nice example.

                        Interesting that APL solutions turn out to be fractal in their nature (speaking of ur-computation). Reminds me of Konrad Zuse’s Rechnender Raum and Plankalkül.


                        † Lo and behold:

                        L =: [:+.`*./ ],~ 3 4=/ [:+/^:2 (>{;~i:1)|.]
                        g =: 5 5 $ 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 1 1 0
                        
                           <"_1 ' o' {~ L^:(i.5) g
                        ┌─────┬─────┬─────┬─────┬─────┐
                        │     │     │     │     │     │
                        │  o  │     │     │     │     │
                        │   o │ o o │   o │  o  │   o │
                        │ ooo │  oo │ o o │   oo│    o│
                        │     │  o  │  oo │  oo │  ooo│
                        └─────┴─────┴─────┴─────┴─────┘
                        
                        1. 1

                          Nice. You might also enjoy this 26 byte version:

                          (]=3+4=*)[:+/(>,{;~i:1)&|.
                          

                          By the way, what did you mean by “APL solutions turn out to be fractal in their nature”?

                          1. 2

                            Well, the Moore neighborhood is applied to the playing field as a matrix rotation, but at the same time to each cell on the field (each element of the matrix). So, in a way, the Game of Life field is a cell by itself (with a number of states dependent on the number of cells in it), and, in turn, each cell on it is a playing board. [1]

                               [ c =: 4=i.3 3
                            0 0 0 NB. A cell with 8 neighbors.
                            0 1 0
                            0 0 0
                               <"_2 (>{;~i:_1)|. c
                            ┌─────┬─────┬─────┐ NB. A cell with 8 neighbors?
                            │1 0 0│0 1 0│0 0 1│
                            │0 0 0│0 0 0│0 0 0│
                            │0 0 0│0 0 0│0 0 0│
                            ├─────┼─────┼─────┤
                            │0 0 0│0 0 0│0 0 0│
                            │1 0 0│0 1 0│0 0 1│
                            │0 0 0│0 0 0│0 0 0│
                            ├─────┼─────┼─────┤
                            │0 0 0│0 0 0│0 0 0│
                            │0 0 0│0 0 0│0 0 0│
                            │1 0 0│0 1 0│0 0 1│
                            └─────┴─────┴─────┘
                            

                            That was a nod towards your idea about geometry and holistic approach.

                            1. 1

                              Gotcha. Thanks.

                    2. 2

                      Yeah. One subtlety most people don’t appreciate about “notation as a tool of thought” is that the paper doesn’t mention “abstraction” once. Easy to miss in these modern times where “notation” has implicitly come to mean “for creating new abstractions”.

                      1. 4

                        That’s fascinating. I never noticed that before.

                        Since I have it open now, relevant paragraph on my previous point:

                        The utility of a language as a tool of thought increases with the range of topics it can treat, but decreases with the amount of vocabulary and the complexity of grammatical rules which the user must keep in mind. Economy of notation is therefore important.

                        Notation as a tool of thought

                        1. 1

                          True, it doesn’t. But then the whole paper is a process of abstraction (deriving general patterns from concrete examples, addressing the main thesis case-by-case) that uses APL as a conceptual vehicle.

                          1. 2

                            The design goal is to create a notation that minimizes the need for new names.

                            1. 3

                              APL is like a beautiful diamond – flawless, beautifully symmetrical. But you can’t add anything to it. If you try to glue on another diamond, you don’t get a bigger diamond. Lisp is like a ball of mud.

                              Extension of the base language is at odds with its core design principle, but that surely does not preclude us from defining problems in terms of it. My original questions (not well-formed TBH) were more concerned with the ways this notation can be adapted to a particular context (or vice versa).

                              E.g. the above-mentioned Co-dfns compiler, as far as I understand, implements a Scheme-like subset of Dyalog APL and represents trees as vectors, but not the other way around (extending the language with a tree-like datatype and sprucing it all up with s-expressions or something).

                              Like I said, J has a set of primitives and mechanics that seem language-oriented to me, so I wondered what compiler/interpreter creation looks like from this point of view, on a small, exemplified scale (hence the DSLs), and what the problem-solving mindset (heuristics?) behind it is.

                              FSM state transitions can be modeled with tables, yes, and an AST, as a graph, can use adjacency matrices with eigen-shticks, but this whole approach feels like black magic and a well-guarded trade secret. Maybe because it’s a fork from the mainstream road and off into the academic weeds. Need to check out more of Aaron Hsu’s talks for sure.

                              1. 1

                                As a historical aside, the APL presented in Iverson’s A Programming Language book (pdf) included trees as a basic data structure (page 45 in the book). They weren’t included in APL as first implemented, then APL2 added nested arrays which serve a similar purpose. Some of the book’s tree operations are in J.

                                1. 2

                                  Yes, thanks for mentioning that! The chapter on microprogramming with APL was mind-blowing at the time I read it.

                2. 2

                  I learned it mainly by doing hundreds of code golf problems, asking questions on the J IRC channel, reading (good chunks of) J for C programmers and Learning J, as well as the J dictionary and the more accessible NuVoc documentation.

                  I’d say it took 1.5-2 years to acquire a feeling of fluency. Much longer than other languages. One crucial practice was solving a problem and then having it edited by experts. Because unavoidably you’ll apply your current paradigms to J, or simply be unaware of J idioms, rather than solving the problem in a “J way.”

                  The best place to get this kind of help these days is The APL Orchard. There’s a handful of very talented J and APL programmers willing to help.

                  By the way, you could get the same mind-expansion from learning APL, if you were more inclined that way. The free book on the Dyalog APL site is also good.

                3. 1

                  I’d classify it as functional. It just uses a multi-dimensional array where others use a linked list. Nearly all operators are side-effect free and data is immutable. There are higher order patterns like map-reduce.

                  1. 3

                    Array programming languages aren’t “functional” or “immutable” in the way that Erlang and ML are. I observe that beginners notice this only after a few months, probably because these features show up in real Array-language applications, not in the puzzles you practice on while learning to think in Arrays. Real applications batch and use lots of global variables with delimited names, like Tcl or Perl; J programmers call them locales, K programmers have the K-tree, and in APL it’s isolates. And so on.

                    I know other languages have “arrays” but it’s just an unfortunate name collision. Many of us use the term “Iverson languages” instead of “Array languages”, because we’re not trying to confuse things unnecessarily.

                    1. 2

                      To add to the sibling comments: the linked article and code golf expose you to only a part of J. For me it brought a very different mindset, with things like fully tacit programming verging on function-level programming [1] (even point-free Haskell feels clumsy in comparison with APL and J), and constructs like hooks and forks [2] that permit composing and chaining functions with ease.
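                      For those unfamiliar with forks: a monadic fork (f g h) applies f and h to the argument and combines the results with g, so J’s mean, +/ % #, reads as “sum divided by count”. A rough Python rendering (fork is a hypothetical helper, not a real J API):

```python
from operator import truediv

def fork(f, g, h):
    """J-style monadic fork: (f g h) y  <=>  g(f(y), h(y))."""
    return lambda y: g(f(y), h(y))

# J: mean =: +/ % #   -- sum divided by tally
mean = fork(sum, truediv, len)
```

                      mean([1, 2, 3, 4]) evaluates as truediv(sum(...), len(...)), i.e. 2.5.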

                      1. 1

                        It’s true that tacit programming in J is functional, but that misses the larger difference. The way you need to think about and frame a problem to solve it idiomatically is usually quite different in J versus, say, Haskell.

                    1. 4

                      Ah, brings back the good memories of grokking classical indirect-threaded Forth interpreter, meditating on its Taoist-like interplay with the compiler, and casting the black magic of venerable create with does> for the first time. Thanks for sharing!

                      1. 6

                        A rediscovery of Red’s and Rebol’s VID.

                        view [
                        	text "Enter numbers:"
                        	a: field text "+" b: field
                        	button "Calculate" [result/data: a/data + b/data]
                        	return
                        	text "Result -->" result: text
                        	
                        	button "Quit" [unview]
                        ]
                        
                        1. 1

                          TL;DR Writing code does not look like prose writing

                          Please read it along with this story which says reading code looks just like speech comprehension.

                          1. 1

                            Researchers took an extremely small and uniform sample (17 students of roughly the same age and programming expertise) and asked them to find syntax errors. No wonder regions responsible for speech comprehension lit up — the task was inherently concerned with the textual medium and, well, reading, not with code semantics.

                            This new paper sounds more interesting and reliable (larger sampling group, albeit still picked from the same cohort of students, a variety of stimuli), and correlates with my personal problem-solving experience.

                            Also reminds me of BASE initiative.

                          1. 2

                            Are fMRI studies for these kinds of things reliable now? I was under the impression they aren’t very, so I haven’t really paid attention. (Not trying to be contentious, just my impression after a few rebuttals throughout the years, not least the dead salmon episode).

                            1. 1

                              Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction.

                              This one quickly became a timeless classic during my MS in neurofeedback and functional rehabilitation.

                            1. 3

                              Interesting pitch for Janet. Here is a quick rewrite in Red language, which lives up to the author’s requirements: 10 LOC, native support for all 3 major platforms, can be shipped as a single binary, and all that’s required is a 1MB toolchain.

                              1. 3

                                Red doesn’t look quite like anything I’ve ever seen before, and the links I’ve clicked on the official site so far leave me with more questions than answers.

                                Has anyone used it to make something medium-to-large-sized and released the source code? Red looks really interesting and I’d like to see how it holds up beyond “hello, red” before I build something load-bearing in it myself.

                                1. 3

                                  For some more background & docs, you can try researching Rebol, of which Red is a “reboot”. However, given that Rebol was closed-source for most of its history (IIUC), I would suspect most codebases are probably closed-source too.

                                  1. 2

                                    In 2000, I used REBOL to create a CMS we used to run on NetBSD.

                                    REBOL has always been among the most impressive languages I’ve ever used. Just the number of platforms it used to run on is unrivalled. Red has the potential to be all of that and more for the current age.

                                    1. 2

                                      Thank you. That was a rainy morning’s worth of very interesting reading.

                                      1. 1

                                        Happy to hear I could give you some fun time :) personally, after experimenting with Red for a short while, I discovered Nim, and I feel for me it hits the “sweet spot” as of now. The main benefit of Nim over Rebol-likes for me is static typing, while its templates & macros still allow for a wide range of DSLs, from lightweight dialects to full-blown standalone sublanguages.

                                        1. 1

                                          FWIW, Red’s approach to typing and its benefits are described here; static typing in Rebol derivatives makes no sense, as there are no “variables” to declare, and it would put unnecessary restrictions on the language’s flexibility. OTOH, Red/System (a low-level C-like dialect of Red) has static typing, and the Red compiler applies a form of static type checking where possible.

                                          As for DSLs and metaprogramming, the approach taken does not require macros or templates, since all code transformations can be achieved at run-time with the basic language primitives. To give a few examples of its influence: Nim’s collect macro is a conceptual copy of Red’s collect; Janet’s PEG module was heavily inspired by Parse dialect (a meta-DSL for creating other DSLs).

                                    2. 1

                                      Red is nearing 0.6.5 release, so it’s not production-ready and might not match your expectations. The best thing to do IMO is to dip your toes by grabbing the latest nightly build and to ask around in community chat, to see if your specific needs can be addressed.

                                      1. 1

                                        It looks really cool, but probably too early in its cycle for me. I grabbed the nightly and started kicking the tires. I think I need to come back when the GUI bits work on Linux.

                                        I installed wine and tried it out that way. I really like that your puppy GUI looks exactly how I imagined it would from looking at the source. Even in a language that is completely unfamiliar to me. And that it’s a 74k standalone executable once built.

                                        Thanks for pointing this out.

                                        1. 1

                                          FYI, GTK backend development is carried out in a separate branch for now. You can either compile from sources or use an automated build to try it out; also see here for installation instructions on 64-bit distros.

                                    3. 3

                                      Red is amazing. As a former REBOLer, I’m keeping it running here as well. I just wish it had an arm64 version for windows.

                                    1. 7

                                      Unfortunately, the article doesn’t mention the Red language with its native cross-platform GUI engine (Win32, Cocoa, GTK and almost-finished Android) and a declarative DSL for interface specification. Granted, it’s still in the alpha stage, but IMO is mature enough for small and mid-sized personal projects (+ prototyping and aforementioned RAD), and the design goals somewhat resonate with the author’s sentiment.

                                      For an apples-to-apples comparison of GUI frameworks in general, I recommend the 7GUIs benchmark; here is the Red version, clocking in at under 300 LOC for all 7 tasks.

                                      1. 2

                                        Hasn’t Red been in alpha for ages?

                                        1. 2

                                          It has been in active development for 8 years now — what of it? The amount of features packed into the toolchain and standard library easily puts it in the ballpark of other 1.0 languages, and right now it’s on par with Rebol2 (its main reference point).

                                          Tallying version numbers without ever evaluating the project on its technical merits doesn’t make for fair judgment or constructive criticism.

                                          Consider also sustainability and user experience: when developers say “ready”, they want newcomers to have a truly 1.0, smooth-sailing experience (better docs, a larger ecosystem, a more diverse community, opportunities to contribute at any level of experience, etc.) and to stick around for the long haul. If they cannot provide that yet, they simply avoid setting the wrong expectations, at the cost of downplaying themselves by saying “alpha”. In that case the tech doesn’t matter; people do.

                                        2. 1

                                          Red is barely alpha: there is almost no documentation, GTK is not supported, and the toolchain is still 32-bit only (it seems that the 64-bit toolchain will be a commercial product, part of “Red/Pro”…).

                                          I have included the ones that I thought were “mature” and “popular” enough to be deemed usable.

                                          Seems like Red does not fit the bill.

                                          1. 0

                                            Red is barely alpha

                                            Please tell me how much time you spent using it, and what in the experience gained during that time made you think it’s “barely alpha” (also, compared to what?).

                                            There is almost no documentation

                                            The presence of official reference documentation, a formal specification, a GitHub wiki and community-provided resources doesn’t qualify as “almost no” in my book.

                                            GTK is not supported.

                                            The GTK branch is pretty much operable; one can either compile from sources or take an automated build to try it out.

                                            Toolchain is still 32 bit only

                                            True; that’s partially a limitation stemming from the use of Rebol2 (itself 32-bit only) during the bootstrapping phase. Many options for tackling the planned 64-bit transition are on the table, each with its pros and cons.

                                            It seems that 64 bit toolchain will be commercial product, part of “Red/Pro”…

                                            Red/Pro will be a commercial product and will support 64-bit, as will the community version; one simply takes priority over the other.

                                            Seems like Red does not fit the bill.

                                            That’s a non sequitur, since most of your claims were refuted by the actual facts (including the 7GUIs benchmark I provided). Now, do you have something constructive to add to the topic at hand?

                                        1. 11

                                          I’m hitting this hard right now with GUI programming. The only good cross platform options still seem to be “Use C++ and QT” or “Use C and GTK+”.

                                          Edit: Suggestions welcome…

                                          1. 7

                                            Why not use a binding that does the grunt work of interfacing with C or C++? There are two good Qt bindings for Python (PyQt and PySide). There are good bindings for Gtk+ for many languages. (Though Gtk+ is pretty horrible outside X11/Wayland.)

                                            1. 6

                                              Red language has a native cross-platform (Win32, Cocoa and GTK backends) GUI system with declarative DSL on top of it.

                                            1. 6

                                              I get that the article was posted mostly for its tongue-in-cheek title, but TreeSheets (for which Lobster serves as a scripting language) is a fine piece of note-taking software, basically a hierarchical spreadsheet / tree editor. Worth checking out.

                                              1. 2

                                                I don’t think that was the point of posting; at least I hope not! Lobster is an interesting new language, and as a language geek I found this article fascinating.

                                                1. 1

                                                  Yeah I’m interested in lobster but there doesn’t seem to be much information about it out there! The algorithm sounds interesting but honestly I didn’t really get how you would apply it to another language. I was looking at it a few months ago when wrestling with memory management for the Oil interpreter. (I think I can get away with arenas, no RC, but I haven’t gotten there yet.)

                                                2. 1

                                                  Huh - I’d never heard of this before. I was noodling/daydreaming about a better way to store associative data - think somewhere between a Mind Map and a Wiki. This looks like an interesting alternative to that sorta thing.

                                                  Thanks!

                                                1. 1

                                                  I really enjoyed the diagrammatic explanation, esp. with Float Toy and paper at hand.