1. 23

  2. 7

    Call me an optimist, but I’m hopeful that this can do to progressive C++ shops what Clojure did to progressive Java shops. Lots of opportunities for a Lisp to be useful there, and C++ programmers won’t be philosophically opposed to using a huge/complex language.

    1. 2

      A high-performing Lisp that isn’t on the JVM and isn’t old as hell? /me drools

      1. 2

        A faster Clasp compiler is coming soon – the current Clasp compiler generates native code that is about 100x slower than that of highly tuned Common Lisp compilers like Steel Bank Common Lisp.

        Let’s hope that a higher performance version will indeed be coming soon :-)

        1. 1

          The project direction looks like it will be high-performing eventually, as opposed to something like Racket, which doesn’t have any plans to be natively compiled and isn’t terribly concerned with being blazing fast. Building on LLVM is a big hint that Clasp is going the opposite direction. That being said, I love Racket! <3

          Also, SBCL was what I was thinking of when I said old as hell. ;)

          1. 2

            What about ABCL and RoboVM?

            1. 1

              I don’t know CL or even ABCL but I made a little progress. I followed this blog post on standalone ABCL jars.

              But commented out the load of the Lisp source file:

              // Load.loadSystemFile("greet.lisp", false, false, false);

              Compiling with

              robovm -forcelinkclasses org.armedbear.lisp.* -verbose -jar greet.jar

              gives me a stack trace I don’t understand. The jar does run for me, but it still references greet.lisp from the local filesystem instead of pulling it in as a resource.


              It looks like ABCL calls

              186:        return defineClass(name, b, off, len);

              which RoboVM doesn’t like
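
              For context, that call is the heart of ABCL’s runtime code loading: compiled Lisp becomes JVM bytecode held in a byte array, which is turned into a live class via ClassLoader.defineClass – exactly the kind of runtime class definition an ahead-of-time compiler like RoboVM can’t support. A minimal sketch of the pattern in plain Java (the class and method names here are illustrative, not ABCL’s actual ones):

              ```java
              import java.io.ByteArrayOutputStream;
              import java.io.InputStream;

              public class MemoryClassLoader extends ClassLoader {
                  // Mirrors the defineClass(name, b, off, len) call inside ABCL's loader:
                  // turn raw bytecode sitting in memory into a live Class object.
                  public Class<?> define(String name, byte[] bytecode) {
                      return defineClass(name, bytecode, 0, bytecode.length);
                  }

                  public static void main(String[] args) throws Exception {
                      // Demo: read this class's own .class bytes from the classpath and
                      // re-define them under a fresh loader, analogous to how ABCL
                      // defines freshly compiled Lisp code at runtime.
                      ByteArrayOutputStream out = new ByteArrayOutputStream();
                      try (InputStream in =
                              MemoryClassLoader.class.getResourceAsStream("MemoryClassLoader.class")) {
                          byte[] buf = new byte[4096];
                          for (int n; (n = in.read(buf)) != -1; ) {
                              out.write(buf, 0, n);
                          }
                      }
                      Class<?> defined =
                          new MemoryClassLoader().define("MemoryClassLoader", out.toByteArray());
                      System.out.println(defined.getName());
                  }
              }
              ```

              On HotSpot this works fine; RoboVM, which compiles every class ahead of time, has no general way to honor a defineClass call made at runtime, which would be consistent with the stack trace above.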


              Rather than load a .lisp file, one could precompile to .cls

              (zt.env)volcano:greet animatronic$ abcl
              Armed Bear Common Lisp 1.3.1
              Java 1.7.0_67 Oracle Corporation
              Java HotSpot(TM) 64-Bit Server VM
              Low-level initialization completed in 0.402 seconds.
              Startup completed in 1.866 seconds.
              Type ":help" for a list of available commands.
              CL-USER(1): :cf greet.lisp
              ; Compiling /Users/animatronic/w/robovm/tests/greet/greet.lisp ...
              ; (DEFUN MAIN ...)
              ; Wrote /Users/animatronic/w/robovm/tests/greet/greet.abcl (0.212 seconds)
              CL-USER(2): ^D
              (zt.env)volcano:greet animatronic$ jar tvf greet.abcl 
                 315 Fri Sep 26 16:06:38 PDT 2014 __loader__._
                1744 Fri Sep 26 16:06:38 PDT 2014 greet_1.cls
                3464 Fri Sep 26 16:06:38 PDT 2014 greet_2.cls
              (zt.env)volcano:greet animatronic$ 
      2. [Comment removed by author]

        1. 14

          Hopefully comments like yours don’t become the norm here.

          He’s aware that he needs to do more analysis at the higher levels in his compiler before generating the LLVM IR. Do you have something new to add or that points to a specific issue that he’s going to have that is insurmountable?

          LLVM is improving constantly. Look at all of the work by Azul to improve the GC support for example, or the work by Apple to add patch points for doing things like PICs.

          We have a prototype LLVM-based backend for Open Dylan and it is doing pretty well so far. (We perform a lot of Dylan-level optimizations prior to generating LLVM IR, much like SBCL does for Common Lisp.)

          Edit: And apart from the factual issues with your comment, it was pretty rude and disrespectful of his months and years of work on this project and the actual results that he’s getting.

          1. 6

            I’d further note that without projects like this, LLVM’s JIT support is unlikely to improve.

          2. 7

            LLVM has also become a dramatically better foundation on which to build a JIT since that comment was written 3 years ago, and it’s only getting better all the time.