1. 14

  2. 3

    I know next to nothing about Joxa and LFE but here’s a comparison from 2012. (Nor do I know how either language has evolved since then.)

    http://www.ericbmerritt.com/2012/02/21/differences-between-joxa-and-lfe.html

    1. 2

      The latest Joxa docs still link to that article: Joxa documentation – FAQ – “What is the difference between Joxa and LFE (both Lisps for the Erlang VM)”. So it’s likely that Joxa’s advantages over Lisp-Flavored Erlang, at least, haven’t changed since then.

      1. 2

        It’s really too bad that Joxa has basically stalled out; it’s a much more thoughtful design than LFE, which when I last checked was basically syntactic sugar over Erlang’s semantics and inherited a lot of cruft (like Lisp-2-ness) that Joxa wiped clean. I believe LFE originally inherited a lot of Erlang’s awkwardness around interactive development (you cannot redefine functions in the shell, you can only bind lambdas to locals), but that may have been fixed since I last checked.

    2. 1

      I’m sorry to criticize, but why reinvent a new Lisp? Why not build a Common Lisp, Scheme, or Clojure compiler that targets the Erlang VM and get the benefits of an established language along with the benefits of the Erlang VM?

      I realize it’s more work that way, but if I need to target the Erlang VM for some reason and I need to learn a new language anyway, then I might as well just use Erlang.

      I also realize the same thing could have been said about Clojure when it first appeared, but it at least had the benefit of tons and tons of Java libraries, along with tons of JVM deployments everywhere, and tons of Java developers frustrated with Java but tied to the VM. Erlang doesn’t have the library reach of Java, doesn’t have the massive deployment, and AFAICT most Erlang developers still really like Erlang.

      1. 9

        None of those languages have semantics close to what the Erlang VM provides. Really, if there is going to be a Lisp for Erlang, by definition it cannot be like those other ones.

        For what it’s worth, I think LFE is pretty cool.

        1. 3

          “Closeness” may be in the eye of the beholder. And LFE or Joxa are probably suitable for most developers.

          That said, there’s nothing preventing Clojure (minus the JVM specifics) or ClojureScript (minus the Javascript specifics) or Common Lisp or Scheme from running on the Erlang VM.

          Scheme may be the most suitable of CL, CLJ[S], and Scheme just because it is the simplest. I’ve got a Scheme implementation for Go that runs a lightweight Scheme interpreter (but with a “compiled” intermediate form, first-class continuations, and tail call optimization) per goroutine. I’ve got every reason to believe that could be ported in short order to the Erlang VM and run one interpreter per process.

          Scheme’s deemphasis of mutability affords the implementation some leeway, i.e. the implementation of mutable variables and data structures does not have to be so efficient assuming application developers would be adopting it primarily for its applicative features and integration with OTP.

          1. 2

            How would special variables map to the Erlang VM, where nothing is shared? AFAIK LFE’s author is a CL fan, but he mentioned a couple of things that make it hard to implement CL on the Erlang VM. I don’t know much Erlang, so I can’t comment further.

            1. 3

              I’m going to begin by stating all kinds of cautions: this has not been thought through to any useful degree. Restating: I don’t know anything about LFE or the history of implementing Lisps on the Erlang VM. Moreover, I’ve not implemented a typical Lisp in a “nearly purely functional” base language. So take this with a grain of salt, but I think it’s a starting point for further evaluation.

              I’m going to base my thoughts here on Scheme + simple dynamic binding and leave full Common Lisp “as an exercise for the reader”. Another significant difference, I suspect, is that a lot of existing CL code is going to be more “mutation-heavy” than existing Scheme code, so penalizing mutation in an Erlang VM implementation of CL is likely to have a much greater cost than it would for Scheme.

              I’m also going to base this on a “fast interpreter”, e.g. compiling to an intermediate form (closures, objects, whatever) that’s then directly executed by the base language, where environment frames / continuations are heap-allocated. (Moving to a stack-allocated and/or natively compiled implementation is also “an exercise for the reader”.)

              The way I typically implement a tail-recursive Lisp in a non-tail-recursive base language is with a trampoline. The arguments passed up and down the trampoline are effectively the “registers” of the Lisp environment. I would begin thinking about an Erlang VM implementation by associating one shared-nothing Lisp runtime with an Erlang VM process. Because that VM is already tail-call aware, the trampolining of registers simply becomes tail-calling the Lisp runtime’s “loop” with those registers.
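
              Roughly, the driver loop in a non-tail-recursive base language has the shape of the Go sketch below. The names and the toy “registers” (a counter and an accumulator) are purely illustrative, not lifted from any real implementation; they stand in for the expression, environment, and continuation registers. On the Erlang VM the outer loop disappears because the runtime loop can simply tail-call itself with the new registers.

              package main

              import "fmt"

              // Toy "registers" for the trampolined runtime loop: a counter and an
              // accumulator stand in for the expression, lexical environment, dynamic
              // environment, and continuation registers of a real Lisp runtime.
              type regs struct {
                  n   int
                  acc int
              }

              // step performs one bounce. Instead of recursing in tail position, it
              // reports whether evaluation is finished and, if not, hands back the
              // registers for the next bounce.
              func step(r regs) (done bool, result int, next regs) {
                  if r.n == 0 {
                      return true, r.acc, regs{}
                  }
                  return false, 0, regs{n: r.n - 1, acc: r.acc + r.n}
              }

              // trampoline drives step in a flat loop, so arbitrarily deep tail
              // recursion in the interpreted program never grows the host stack.
              func trampoline(r regs) int {
                  for {
                      done, result, next := step(r)
                      if done {
                          return result
                      }
                      r = next
                  }
              }

              func main() {
                  fmt.Println(trampoline(regs{n: 1000000})) // 500000500000
              }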

              One of the registers is the lexical environment. A lambda application extends the lexical environment with its frame and loops / trampolines to evaluate the body of the lambda with the extended lexical environment in that “register” (the formal parameter of the loop / the result returned up the trampoline). The continuation or control-stack register for the body remains the same as for the lambda application.
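
              In the same illustrative Go terms, the lexical-environment register can be a chain of frames that is treated as immutable once built; a lambda application just allocates a new frame over the closure’s enclosing frame. Again, a hypothetical sketch with made-up names, not anyone’s actual implementation.

              package main

              import "fmt"

              // A lexical environment as a chain of frames, treated as immutable once
              // built: applying a lambda allocates a new frame that points at the
              // closure's enclosing frame, leaving existing environments untouched.
              type frame struct {
                  vars   map[string]interface{}
                  parent *frame
              }

              // extend builds the frame for a lambda application from its formal
              // parameters and the already-evaluated arguments (lengths assumed equal).
              func extend(parent *frame, params []string, args []interface{}) *frame {
                  vars := make(map[string]interface{}, len(params))
                  for i, p := range params {
                      vars[p] = args[i]
                  }
                  return &frame{vars: vars, parent: parent}
              }

              // lookup walks outward through enclosing frames.
              func lookup(env *frame, name string) (interface{}, bool) {
                  for e := env; e != nil; e = e.parent {
                      if v, ok := e.vars[name]; ok {
                          return v, true
                      }
                  }
                  return nil, false
              }

              func main() {
                  global := extend(nil, []string{"x"}, []interface{}{1})
                  inner := extend(global, []string{"y"}, []interface{}{2}) // ((lambda (y) ...) 2) where x = 1
                  v, _ := lookup(inner, "x")
                  fmt.Println(v) // 1
              }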

              Assignments to variables in the lexical environment can be converted to lambda applications or, in the worst case, “assignment-converted” to “boxes”.

              (let ((a 1))
                  ...
                  (set! a 2)
                  ...
                  a
                  ...)
              

              becomes in the simple case:

              (let ((a 1))
                  ...
                  (let ((a 2))
                      ...
                      a
                      ...))
              

              in the worst case:

              (let ((a (box 1)))
                  ...
                  (set-box! a 2)
                  ...
                  (get-box a)
                  ...)
              

              Lexical environments become immutable, and the price is paid only for worst-case mutation. In the Erlang VM (which is not purely functional) there are choices to be made for mutable boxes, e.g. the process dictionary or an Erlang VM process per mutable box. Since we’ve already accepted that mutation may be punished, choose one and go with it.
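
              To make the “one Erlang VM process per mutable box” option concrete, here is a hypothetical sketch in which a goroutine and two channels stand in for an Erlang process and its mailbox (the process-dictionary option would instead be a per-process key/value store).

              package main

              import "fmt"

              // A mutable "box" served by its own lightweight process, analogous to the
              // process-per-box option above. Illustrative only; a real version would
              // also need a way to shut the serving process down.
              type box struct {
                  get chan interface{}
                  set chan interface{}
              }

              func newBox(initial interface{}) *box {
                  b := &box{get: make(chan interface{}), set: make(chan interface{})}
                  go func() {
                      value := initial
                      for {
                          select {
                          case b.get <- value: // serve (get-box a)
                          case value = <-b.set: // serve (set-box! a v)
                          }
                      }
                  }()
                  return b
              }

              func main() {
                  a := newBox(1)       // (let ((a (box 1))) ...)
                  a.set <- 2           // (set-box! a 2)
                  fmt.Println(<-a.get) // (get-box a) => 2
              }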

              Now for “special” variables. Following this “register-passing” mechanism for an Erlang VM tail-recursive loop (a trampoline in non-tail-recursive base languages), the dynamic environment becomes just another “register”. There are two primary ways (plus variations) of implementing dynamic variables: deep binding and shallow binding. Shallow binding would follow much along the lines of the “assignment-conversion” of mutable lexical variables; each dynamic variable could be an entry in the process dictionary. I’d probably begin by making each special variable an “actor” in its own Erlang VM process. Pushing a new binding, popping the current binding, and assigning a new value to the current binding are each messages to the actor in its own process loop.
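
              A hypothetical sketch of that per-special-variable actor, again with a goroutine and channels standing in for the Erlang VM process and its mailbox; all names are made up for illustration.

              package main

              import "fmt"

              // One special variable as an "actor": a goroutine owns its stack of
              // dynamic bindings and serializes push (entering a dynamic extent), pop
              // (leaving it), set!, and dereference as messages.
              type specialVar struct {
                  push  chan interface{}
                  pop   chan struct{}
                  set   chan interface{}
                  deref chan interface{}
              }

              func newSpecialVar(global interface{}) *specialVar {
                  s := &specialVar{
                      push:  make(chan interface{}),
                      pop:   make(chan struct{}),
                      set:   make(chan interface{}),
                      deref: make(chan interface{}),
                  }
                  go func() {
                      stack := []interface{}{global} // innermost binding last
                      for {
                          select {
                          case v := <-s.push:
                              stack = append(stack, v)
                          case <-s.pop:
                              stack = stack[:len(stack)-1]
                          case v := <-s.set:
                              stack[len(stack)-1] = v
                          case s.deref <- stack[len(stack)-1]:
                          }
                      }
                  }()
                  return s
              }

              func main() {
                  v := newSpecialVar("top-level")
                  v.push <- "rebound"    // e.g. entering (let ((*v* "rebound")) ...)
                  v.set <- "mutated"     // (set! *v* "mutated") inside that extent
                  fmt.Println(<-v.deref) // mutated
                  v.pop <- struct{}{}    // leaving the extent restores the outer binding
                  fmt.Println(<-v.deref) // top-level
              }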

              It might be reasonable to begin with each special variable as an immutable binding directly in the dynamic-stack register and, when an assignment to that special variable is first encountered, promote it to an actor.

              This is probably a long-winded way of saying I have not thought about this deeply, but assuming one Lisp runtime per Erlang VM process, and assuming various forms of mutation are allowed to be somewhat punished in favor of functional programs, there seem to be reasonable options via the process dictionary and/or “mutable variables as actors” in their own Erlang VM processes.

              To round this out, I would assume CL or Scheme programs on the Erlang VM would want to use mostly functional data structures (e.g. FSet), but mutable cons cells, arrays / vectors, etc. present their own problems. Again, the reasons for adopting this runtime should make it acceptable to somewhat punish uses of mutable data structures. Mutable cons cells would be the most concerning. Mutable hashes and arrays could take the obvious routes toward the process dictionary, “actor” processes, etc.

        2. 3

          There is/was a Scheme implementation for the Erlang VM fwiw, although it appears to no longer be maintained.

          1. 3

            Common LISP is really heavy, hard to change due to the standard, lacks a standard library like the JVM’s, and has basically no adoption remaining from its past. Scheme is similar except it’s light. Both have a few distributions with a standard library & VM. Neither retained adoption, due to language & ecosystem effects. Clojure dealt with the ecosystem effect by targeting the JVM. People using it have told me its syntax for things sits better with Java programmers than the older LISPs’.

            So, the reasons to create a new LISP are to design the language better for adoption or to get an ecosystem benefit. The former allows for many more LISPs in the process of experimentation. The latter offers fewer opportunities, with a .NET LISP being the main one.

            1. 6

              It’s all relative. CL has a good number of libraries. And if you want the JVM + Java libraries there’s Armed Bear Common Lisp.

              Scheme probably has fewer cross-platform libraries. Even so, there’s SISC Scheme for the JVM. And there are well over 100 SRFIs with implementations at https://srfi.schemers.org/final-srfis.html

              That said, I don’t begrudge anyone for defining new Lisp dialects.

              1. [Comment removed by author]

                1. 1

                  Yeah, I know of Kawa but have not used it nor do I know anything about the implementation. SISC has a fairly efficient but tractable implementation. I’d be surprised if it could not be moved to Java 8 with relatively low effort.

              2. 3

                Your first sentence is contradictory: Common Lisp is heavy because it has such a large standard library. IME it’s heavy because it’s thorough. It also has a large, easy-to-install library ecosystem.

                The standard is difficult to change officially, but in practice it’s not a big problem - macros and libraries are a very flexible way to bypass the standards process. I’m not sure why a compiler targeting a new VM would need to change the standard in any case.

                Adoption isn’t on the Java, Python, or Javascript level, but it’s comparable to Erlang, perhaps even larger. Certainly larger than Joxa.

                The way I see it, creating a new Lisp is the worst option. People who know an existing Lisp have to learn a new language and will be annoyed when it doesn’t work like the Lisp they know. People who don’t know Lisp won’t care anyway, because it’s the first Lisp they’re learning. People who dislike Lisp will still dislike it, and they’ll have another example of how fragmented the Lisp world is.

                Honestly, I don’t care strongly either way. I write a lot of Common Lisp, but I don’t need to target the Erlang VM.

                1. 3

                  The heavy part is the syntax/semantics + libraries with differing styles (esp. due to macros). It’s really weird to new developers. The reason appears to be that it was a superset of many competing LISPs during the minicomputer era. A clean-slate LISP can do a lot better than that, with a focused design applying current thinking & targeting current architectures.

                2. 2

                  For a dotnet lisp, check out IronScheme.