1. 25
  1. 5

    Spoiler alert: the obligatory Futurama joke doesn’t occur until the very last slide. The tension was unbearable.

    I was able to follow part of this, but when it got past the first projection, the slides alone didn’t have enough detail to be comprehensible. I wonder if there’s video or a full transcript someplace [but I am too lazy to go hunting]

    1. 3

      Interesting slides, thanks @amirouche! I did my master’s thesis on an Erlang supercompiler. It can theoretically do the Futamura projections, so I’m a bit familiar with them, but I had no idea they were used in practice in GraalVM. Very cool.

      I want to mention a related cool thing, which is checking properties of programs. If you have the procedure f and you want to find under which conditions it returns an even number, then you can partially evaluate (lambda (n) (even? (f n))). When doing this, the program essentially runs with an abstract n argument rather than a concrete number, and the code after partial evaluation will be a new program that tells you under which conditions an even number is returned. Basically f will disappear, and only those parts of it that were relevant to answering even? should remain. You’d need something like a supercompiler to get the full effect.
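
      For a concrete flavor, here’s a toy sketch of that (made up for this comment, not from the slides; the f is a stand-in f(n) = 3*n + 4 and the tuple encoding is invented). With n left abstract, f gets inlined away and only the arithmetic feeding even? survives in the residual program:

      ```python
      def peval(expr, env):
          """Partially evaluate expr. env maps names to ints (known values)
          or residual expressions (unknown). Returns an int or a residual tuple."""
          kind = expr[0]
          if kind == "const":
              return expr[1]
          if kind == "var":
              return env.get(expr[1], expr)  # unknown variables stay residual
          if kind in ("add", "mul", "mod", "eq"):
              a = peval(expr[1], env)
              b = peval(expr[2], env)
              if isinstance(a, int) and isinstance(b, int):  # constant-fold
                  if kind == "add": return a + b
                  if kind == "mul": return a * b
                  if kind == "mod": return a % b
                  return int(a == b)
              wrap = lambda v: v if isinstance(v, tuple) else ("const", v)
              return (kind, wrap(a), wrap(b))  # residualize
          if kind == "call":  # inline the stand-in f(n) = 3*n + 4
              f_body = ("add", ("mul", ("const", 3), ("var", "n")), ("const", 4))
              return peval(f_body, {"n": peval(expr[1], env)})
          raise ValueError(kind)

      # even?(f(n)) with n abstract: f vanishes from the residual program
      prog = ("eq", ("mod", ("call", ("var", "n")), ("const", 2)), ("const", 0))
      print(peval(prog, {}))  # ('eq', ('mod', <3*n + 4 as a tree>, ('const', 2)), ('const', 0))
      ```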

      1. 2

        Thanks for the pointers.

        I am going through pebook [0] and plan to apply what I learn to Kernel.

        [0] http://www.itu.dk/people/sestoft/pebook/

      2. 2

        I’ve always found this concept very interesting, but I am not sure about the practicality of it.

        1. 3

          While I haven’t dug that far into it, I believe it’s the basis of the very successful (though unfortunately proprietary) GraalVM, which provides significant speedups for most JVM languages.

          1. 3

            Yup, also PyPy, which people use in production! I think PyPy was the first use of Futamura projections “in production”; I’d be interested in any arguments against that. [1]

            And Babashka, the Clojure scripting tool, uses Graal / Truffle; lots of others here: https://www.graalvm.org/use-cases/

            I think the main downside is that it means you have compilers in your binary, which makes it like 50+ MB.


            For Oil I found these ideas very appealing / interesting, but instead we’re just doing a straightforward C++ translation, with static typing (e.g. JVM bytecode is dynamically typed, even though Java is statically typed!)

            So we use the C++ compiler, and the result ends up more like 30x faster, not 3x faster. The binary is 1.2 MB, since it contains no compiler, and it’s much more portable to different machines and (way) faster to compile. (Also, we couldn’t have used that route without a metalanguage change, i.e. it would be RPython and not Python.)

            Although I’m sure there are differences between how, say, Babashka is built and how PyPy is built that I probably don’t understand.

            [1] edit: I’ll have to read this again, I’m probably mixing a few things up - https://morepypy.blogspot.com/2018/09/the-first-15-years-of-pypy.html

            Partial evaluation is an old idea in computer science. It’s supposed to be a way to automatically turn interpreters for a language into a compiler for that same language. Since PyPy was trying to generate a JIT compiler, which is in any case necessary to get good performance for a dynamic language like Python, the partial evaluation was going to happen at runtime.
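
            To make that concrete, here’s a hand-rolled miniature of the idea (my toy, not from the post): “specializing the interpreter to a program” by doing all of the instruction dispatch once, up front, so the residual closure chain is the “compiled” program:

            ```python
            def eval_prog(prog, x):
                # plain interpreter: dispatch happens on every single run
                for op, arg in prog:
                    if op == "add":
                        x += arg
                    elif op == "mul":
                        x *= arg
                return x

            def compile_prog(prog):
                # specialize the interpreter with prog known and x unknown:
                # the loop and the dispatch are paid for once, at "compile" time
                steps = []
                for op, arg in prog:
                    if op == "add":
                        steps.append(lambda x, a=arg: x + a)
                    elif op == "mul":
                        steps.append(lambda x, a=arg: x * a)
                def compiled(x):
                    for step in steps:
                        x = step(x)
                    return x
                return compiled

            prog = [("mul", 3), ("add", 4)]
            f = compile_prog(prog)  # the residual, "compiled" program
            assert f(5) == eval_prog(prog, 5) == 19
            ```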

            1. 3

              And Babashka, the Clojure scripting tool, uses Graal / Truffle; lots of others here:

              I think Babashka explicitly doesn’t use the Truffle framework, but it does use GraalVM.

              Combining AOT and Interpretation

              It could be interesting to explore Clojure on Truffle or a Clojure run inside Java on Truffle, but the combination of pre-compiled libraries and code running inside a Truffle context while having good startup time poses its own challenges. A benefit of how SCI is currently set up is that it’s easy to combine pre-compiled and interpreted code. As SCI is implemented in Clojure it was also easy to support ClojureScript, which is the dialect of Clojure that compiles to JavaScript. SCI on JavaScript enabled writing a Node.js version of babashka, called nbb and a browser version called scittle.

              https://medium.com/graalvm/babashka-how-graalvm-helped-create-a-fast-starting-scripting-environment-for-clojure-b0fcc38b0746

              1. 1

                So, with PyPy, can I feed it an interpreter and get a compiler back? Where do I read more about how to do that in PyPy?

                1. 3

                  I remember I tried this many years ago. I’m not sure it will run now, but the idea is: write a BF interpreter in RPython, and get a faster interpreter:

                  https://morepypy.blogspot.com/2011/04/tutorial-writing-interpreter-with-pypy.html

                  Then add some kind of main() harness to get a JIT compiler

                  https://morepypy.blogspot.com/2011/04/tutorial-part-2-adding-jit.html
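
                  Heavily abridged, the two posts amount to something like this (the JitDriver greens/reds split and the target() hook follow the tutorials; loops, I/O, and bracket matching are all omitted here):

                  ```python
                  # BF interpreter in RPython; translating with --opt=jit
                  # turns this interpreter into a tracing JIT compiler.
                  from rpython.rlib.jit import JitDriver

                  # greens: identify a position in the user's program
                  # reds: everything else that changes as the loop runs
                  jitdriver = JitDriver(greens=['pc', 'program'], reds=['ptr', 'tape'])

                  def run(program):
                      tape = [0] * 30000
                      ptr = 0
                      pc = 0
                      while pc < len(program):
                          jitdriver.jit_merge_point(pc=pc, program=program,
                                                    ptr=ptr, tape=tape)
                          op = program[pc]
                          if op == '+':
                              tape[ptr] += 1
                          elif op == '-':
                              tape[ptr] -= 1
                          elif op == '>':
                              ptr += 1
                          elif op == '<':
                              ptr -= 1
                          # '[' ']' '.' ',' are handled in the full tutorial
                          pc += 1

                  def entry_point(argv):
                      run("+++++")  # the real version reads argv[1]
                      return 0

                  def target(driver, args):  # hook the RPython translator looks for
                      return entry_point, None
                  ```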

                  It takes a long time to translate your interpreter, and you get a big Mandelbrot fractal while you wait!

                  I guess Graal could give you an AOT compiler from an interpreter (?), but I’m not sure if RPython has that

                  I think both of them are very difficult if you’re not familiar with the codebase; it’s not really an “end user” thing.


                  They have done (a very complete) Python, Prolog, C extensions to some degree I think, a bunch of others, and even bridging between languages like Graal does:

                  https://doc.pypy.org/en/latest/extradoc.html

                  FWIW I found that a nice benefit of generating interpreters is that you can “parameterize” them by the GC algorithm. GC and ref counting policies are both littered ALL OVER most interpreters and VMs. If you have a layer of indirection (a code generator), then you can test out multiple GC algorithms easily; otherwise you are kind of stuck rewriting huge amounts of code. Oil is taking advantage of that.
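
                  Roughly what I mean, as a made-up sketch (incref / decref / write_barrier are stand-in names, not any real runtime): the generator owns the pointer-store sequence, so swapping the GC policy is one emitter change instead of edits scattered across the whole VM:

                  ```python
                  # Emit the C++ for "store a pointer into a field", parameterized
                  # by memory-management policy. The runtime calls are invented.
                  POLICIES = {
                      "refcount": lambda dst, src: [
                          "incref(%s);" % src,
                          "decref(%s);" % dst,
                          "%s = %s;" % (dst, src),
                      ],
                      "tracing": lambda dst, src: [
                          "%s = %s;" % (dst, src),
                          "write_barrier(&%s);" % dst,  # e.g. generational GC
                      ],
                  }

                  def emit_store(dst, src, policy):
                      return "\n".join(POLICIES[policy](dst, src))

                  print(emit_store("self->field", "value", "refcount"))
                  print(emit_store("self->field", "value", "tracing"))
                  ```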

                  1. 1

                    Also I just Googled and there are similar things for Graal:

                    https://github.com/chumer/bf/tree/master/src/main/java/com/oracle/truffle/bf

                    https://www.graalvm.org/22.3/graalvm-as-a-platform/language-implementation-framework/Languages/

                    I don’t know if this code works/runs and actually generates, say, a JIT or AOT BF compiler to x86, but I think it’s supposed to. (And I’d be interested if anyone knows or makes it work.)


                    Somewhat related line of work I looked at a few years ago: https://www.cs.purdue.edu/homes/rompf/papers/amin-popl18.pdf

                    Again, I think there are significant engineering downsides to using these extremely mathematical techniques, but it’s very fun and mind-expanding.

                  2. 2

                    The thing you’re looking for is RPython. There used to be example RPython implementations of Scheme, Ruby, and a few other languages. I’m not sure that they still work, nor do I know whether RPython still exists as its own thing or has been more fully folded into a “PyPy is only Python” type of effort.

                    1. 3

                      The Ruby implementation, Topaz, is still alive. I don’t personally use Ruby often enough to know whether Topaz is still compatible with typical libraries, though.

                      I used RPython for Monte and Cammy. My Monte-in-RPython code is available, but probably not enlightening. My Cammy-in-RPython core looks a lot like the toy interpreters displayed in the RPython tutorials linked by @andyc, and is almost certainly more readable.

                      In general, RPython has to be obtained from PyPy tarballs. In that sense, yes, RPython is tied deeply to PyPy. However, the rpython/ and py/ source code can be used without pypy/. For example, this Nix fragment pins a specific version of a PyPy tarball, and later in the same file we see that rpython/ is copied out of the tarball to be used on its own.

                  3. 1

                    Replying to myself for education: it does seem like partial evaluation / Futamura projections have problems in practice that lots of people ran into the hard way, and that I didn’t quite understand. They are very elegant in theory, but they say nothing about “quantities” or performance. That is, you could get your JIT or AOT compiler for free, but then have a huge amount of optimization left to do (like years or decades of work).

                    Some good comments on HN: https://news.ycombinator.com/item?id=17946026

                    I also read this last night and understood it better than the first time:

                    https://www.scattered-thoughts.net/writing/staged-interpreters-in-rust/

                    In the end the whole thing was nixed by the fact that the staged interpreter had already become way more complex than the compiler I had written previously and that the improvements in compile time were more than lost by the slower run time.