1. 19

Hi Lobsters,

As context for my question: DSLs, “little languages,” and the like get considerable attention these days. Usually these are languages for a purpose you expect to recur within a single domain. You create a parser, turn the AST into actual semantics, and so on. The resulting interpreter usually doesn’t have to be fast; what matters is that you have a language you can reuse.
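
(To make the terms concrete, here is a toy sketch of that pipeline in Python; the grammar is invented and trivially small.)

```python
# Toy DSL pipeline: source text -> AST -> tree-walking interpreter.
# Deliberately minimal; a real DSL would use a proper grammar/parser.
def parse(src):
    """Parse '1 + 2 + 3' into a right-nested tuple AST."""
    head, _, rest = src.partition("+")
    return ("+", int(head), parse(rest)) if rest else int(head)

def interp(ast):
    """Walk the AST and compute its value."""
    if isinstance(ast, int):
        return ast
    _, lhs, rhs = ast
    return interp(lhs) + interp(rhs)

print(interp(parse("1 + 2 + 3")))  # 6
```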

What about a different use case? You have a small language that’s so limited you know exactly one program will ever be written in it, and you’ve written it (the program would be at least medium-sized, but you’d know everything about it). However, you want that particular program to run quickly. Obviously, you could use all the tricks a full compiler writer could use to optimize the process, but having complete knowledge of what’s going to be compiled seems like it could help you further.

So I’m curious: does anyone have tricks specific to this case, or links about writing a compiler for a single program with the goal of running that program as fast as possible?

  1. 18

    I maintained the Wasabi compiler for Fog Creek Software, which was only used to compile FogBugz (and FogBugz-adjacent projects). The purpose was not the same as yours, though.

    1. 12

      Similarly, Facebook developed HipHop just to compile their “one” PHP application.

      1. 4

        What did you guys inside Fog Creek think of all the vitriol Wasabi got online? I recall reading a lot of threads about it on HN and elsewhere that decried the entire exercise as misguided at best, for example.

        1. 10

          Like many things, I think most of the outrage came from people who don’t read good, and the rest from people who think that because they are reading about a decision today, it was decided today. Contrary to popular belief, Fog Creek didn’t decide one day to write a bug tracker and then put “write custom compiler” at the top of the todo list.

          I think the takeaway was never talk about solving a problem other people may not have.

          1. 4

            I like Joel’s response best:

            What’s more interesting about the reaction is that so many average programmers seem to think that writing a compiler is intrinsically “hard,” or, as Jeff wrote in 2006, “Writing your own language is absolutely beyond the pale.” To me this sounds like the typical cook at Olive Garden who cannot possibly believe that any chef in a restaurant could ever do something other than opening a bag from the freezer and dumping the contents into the deep fryer. Cooking? With ingredients? From scratch! Absolutely beyond the pale! Compilers aren’t some strange magic black art that only godlike programmers are allowed to practice. They’re something you learn how to create in one semester in college and you can knock out a simple one in a few weeks. Whatever you think about whether it’s the right thing or not, if you think that a compiler is too hard to do as a part of your day job, you may want to invest some time in professional development and learning more about your craft.

            As someone who took one semester of compilers in college, and ended up maintaining this compiler for several years, I agree. People create new web frameworks all the time. People create their own DSLs and ORMs. There’s nothing harder or weirder about compilers than making tools at these other layers of the stack, but for some reason “compiler” shuts off the part of some people’s brains that lets them think “oh, this is just a program, it takes input and creates output and is 100% understandable.”

            (I have this same belief-bug, but mine’s around encryption and security.)

            1. 2

              I think a lot of people assume you have to have all the optimizations, too. The work that goes into compilers and the work that goes into their optimizations are often mentioned together, but in many applications one doesn’t need all those optimizations. They’re worried about a problem they wouldn’t have.

              1. 2

                Yep! Wasabi targeted .NET, where even for C#, most of the optimization’s actually in the CLR JITter, rather than in the ahead-of-time compilation phase. We chose to write a transpiler for Wasabi 3 rather than generating bytecode directly, but even if we had done the latter, we would still certainly have done almost no optimizations ourselves. (It also helped that our previous target runtime, ASP VBScript, is notoriously slow, so switching to .NET and doing zero other optimizations was still an enormous performance win.)

          2. 3

            Googling gives a dead link to a blog post about this. Do these blog entries live anywhere now?

            Is it this? http://jacob.jkrall.net/wasabi-the-parts/index.html

            1. 1

              I have mirrored my two Wasabi posts onto my personal site:

              https://jacob.jkrall.net/killing-off-wasabi-part-1

              https://jacob.jkrall.net/killing-off-wasabi-part-2

              Also, my Hanselminutes episode is at https://hanselminutes.com/493/killing-off-wasabi-a-20yr-old-vbscript-problem-solved-with-2015-roslyn-tech, if you want to listen to me blather about it for 30 minutes.

              1. 1

                blog.fogcreek.com is currently undergoing some sort of maintenance due to the recent acquisition of Manuscript/FogBugz/Kiln. I’ll see if I can repost the Wasabi articles on jacob.jkrall.net.

                “Wasabi: The ??? Parts” is, basically, the documentation for Wasabi. It was not written by me, but I re-hosted it publicly for the people on HN who asked for it.

            2. 13
              1. 11

                Oil is compiled with a bytecode compiler (and front end) written in Python. It’s the only program that uses this specific compiler. (I didn’t write it, but I cobbled it together and heavily refactored it, with plans to add many more features.)

                I’ve written at length about it [1] – in short, the plan is to evolve Oil into a more efficient program without a big-bang rewrite in C or C++. I have to implement at least two programming languages for the Oil project – OSH and Oil – so using a high-level language helps.

                I think of Oil as being “metaprogrammed”, not programmed. There are custom compilers for DSLs like re2c and ASDL, and also some small Python code generators for things like enums, online documentation, etc.
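
                (For the flavor of those small generators, here is a made-up sketch that emits a C enum from a Python list; it is not Oil’s actual ASDL or enum tooling.)

                ```python
                # Hypothetical sketch of a tiny code generator of the kind
                # described above. Names and output format are invented.
                def gen_c_enum(name, variants):
                    lines = ["typedef enum %s {" % name]
                    for i, v in enumerate(variants):
                        lines.append("  %s_%s = %d," % (name, v, i))
                    lines.append("} %s;" % name)
                    return "\n".join(lines)

                print(gen_c_enum("value_e", ["Undef", "Str", "Int", "List"]))
                ```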

                The latest release has only 9K significant lines of code in the core! [2] That’s compared to 110K significant lines of C code for bash (as measured by cloc, bash 4.4).


                The way I’ve been explaining this to people is with an analogy to TeX. TeX is actually written in an abstract subset of Pascal! The versions on your machine are not compiled using a Pascal compiler. They are translated to C and then compiled with a C compiler!

                https://news.ycombinator.com/item?id=16526151

                [1] Building Oil with the OPy Bytecode Compiler

                [2] http://www.oilshell.org/release/0.6.pre3/metrics.wwz/line-counts/oil-osh-cloc.txt

                1. 3

                  TeX is actually written in an abstract subset of Pascal! The versions on your machine are not compiled using a Pascal compiler. They are translated to C and then compiled with a C compiler!

                  Oh, wow, thanks for writing this! I knew about WEB, but never looked at it, or where it came from. Definitely learned something today! Thanks!

                2. 10

                  I know the Excel team maintains a C (or C++?) compiler that they use internally for Excel; I don’t really know if it’s used elsewhere.

                  1. 7

                    I have a very vague memory of reading a blog post from someone at Microsoft (I think Raymond Chen) years ago which mentioned that they had a compiler whose entire purpose was to build exactly one DLL that shipped with Windows. I think its purpose was something along the lines of bridging 16- and 32-bit ABIs for backwards compatibility with Windows 3 programs, so it needed to have a mixture of both 32-bit and 16-bit code in it, and apparently that’s not something that any sensible compiler’s code generator will ever do for you, so someone made a one-off for it?

                  2. 6

                    This is partly what a recent paper and lobste.rs submission, “Collapsing Towers of Interpreters” by Amin and Rompf, is about. Of course, this is recent research, so applying it will probably be an exercise in itself.

                    This is overkill for your use case, though, as you’re not talking about many languages, just your one. Can you compile your program to C/C++/Go, or whatever you’re most comfortable with, rather than interpreting it?
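
                    (A minimal, hypothetical sketch of that direction: if you already have the program’s AST, emitting C is a small recursive walk, and the C compiler then does the optimization work. The AST shape here is invented.)

                    ```python
                    # Hypothetical sketch: compile a tiny expression AST
                    # straight to C, then let the C compiler optimize.
                    def emit(node):
                        """Return a C expression for an int or (op, lhs, rhs) node."""
                        if isinstance(node, int):
                            return str(node)
                        op, lhs, rhs = node
                        return "(%s %s %s)" % (emit(lhs), op, emit(rhs))

                    ast = ("+", 1, ("*", 2, 3))
                    print("int run(void) { return %s; }" % emit(ast))
                    # -> int run(void) { return (1 + (2 * 3)); }
                    ```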

                    1. 2

                      That paper looks like it would be very useful whether I implement it directly or not.

                      My ultimate target now is simple bytecode for a unique platform. Compiling to C would mean creating a C-to-bytecode compiler too.

                      1. 3

                        You could perhaps implement a backend for LLVM or some other C compiler emitting to your custom platform. Then you’ll get the LLVM optimizations. I don’t know what your time/money budget is :)

                        1. 4

                          Yeah, no.

                          The paper you originally linked to actually is what I need, and it’s not overkill. It happens that I am dealing with exactly the “interpreter tower” the paper references.

                      2. 1

                        That feel when people don’t have a ton of background on the project, but want to tell you that things don’t fit your use case anyway 🙄

                        It’s like “rtfm”, but surprisingly even less valuable.

                      3. 5

                        Yes, I created a custom compiler to assemble programs that can call malicious Lua bytecode. That talk is old; I’ve got a lot more to present in the next version. :-)

                        Though, to your question: it wasn’t about running the program as fast as possible; rather, it was about bypassing the usual guarantees the Lua compiler provides, in order to implement dlopen in pure Lua for a side project.

                        1. 4

                          I just completed the SICP course with David Beazley in Chicago, as described here:

                          http://amontalenti.com/2018/08/26/sicp-expanding

                          In that course, you write two Scheme interpreters: one in Python, and one in Scheme itself. Throughout the class you use the Racket programming environment for working with Scheme.
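
                          (For a taste of the Python-hosted interpreter, here is my own bare-bones reduction, nowhere near the course version, with no define/lambda or error handling.)

                          ```python
                          # Toy Scheme-ish evaluator in Python: tuples as
                          # s-expressions, strings as symbols. Illustrative only.
                          import operator

                          ENV = {"+": operator.add, "*": operator.mul, "<": operator.lt}

                          def seval(expr, env=ENV):
                              if isinstance(expr, str):          # symbol lookup
                                  return env[expr]
                              if not isinstance(expr, tuple):    # self-evaluating literal
                                  return expr
                              if expr[0] == "if":                # special form
                                  _, test, then, alt = expr
                                  return seval(then if seval(test, env) else alt, env)
                              proc, *args = [seval(e, env) for e in expr]  # application
                              return proc(*args)

                          print(seval(("if", ("<", 1, 2), ("+", 1, ("*", 2, 3)), 0)))  # 7
                          ```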

                          Racket seems to have become, over time, a kind of DSL building toolkit. And the book SICP takes the attitude that programmers should be language designers, making themselves DSLs as they go.

                          A prior course I took with Beazley was about writing compilers using the more typical lexer/parser/generator framework. That course was called “Write a compiler (in Python)”. For it, you use Beazley’s PLY framework for writing parsers in the lex/yacc model atop Python. There are also some modern takes on this like pyparsing and rply.
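
                          (For flavor, a minimal toy calculator in PLY’s lex/yacc style; my own sketch, not course material.)

                          ```python
                          # Minimal PLY example: a toy adding calculator.
                          import ply.lex as lex
                          import ply.yacc as yacc

                          tokens = ("NUMBER", "PLUS")
                          t_PLUS = r"\+"
                          t_ignore = " \t"

                          def t_NUMBER(t):
                              r"\d+"
                              t.value = int(t.value)
                              return t

                          def t_error(t):
                              t.lexer.skip(1)

                          def p_expr_plus(p):
                              "expr : expr PLUS NUMBER"
                              p[0] = p[1] + p[3]

                          def p_expr_num(p):
                              "expr : NUMBER"
                              p[0] = p[1]

                          def p_error(p):
                              pass

                          lexer = lex.lex()
                          parser = yacc.yacc()
                          print(parser.parse("1 + 2 + 3"))  # 6
                          ```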

                          In general, writing DSLs is hard. Languages like Scheme and Clojure make it easier, thanks to Lisp syntax and macros. Python is a great language for writing mini-languages too, as long as your focus is on “external” DSLs (that is, you don’t aim to use Python itself as an escape hatch). For a crazy unification of all of these ideas, you can take a peek at the HyLang project as well – it’s a Clojure-like Lisp that runs directly on CPython with two-way import support.

                          1. 4

                            LPEG, the Lua pattern-matching tool based on PEGs, translates patterns into programs that are interpreted by a parsing machine. Here’s the paper where they go into details about it (section 4): http://www.inf.puc-rio.br/~roberto/docs/peg.pdf
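
                            (Here is a toy Python sketch of that parsing-machine idea, with just four instructions and a backtrack stack; the real LPEG machine has many more instructions, plus captures.)

                            ```python
                            # Toy PEG parsing machine in the spirit of section 4
                            # of the paper: a pattern is compiled to instructions;
                            # 'choice' pushes a backtrack entry, 'commit' drops it.
                            def run(prog, subject):
                                pc, pos, stack = 0, 0, []
                                while True:
                                    op = prog[pc]
                                    if op[0] == "char":
                                        if pos < len(subject) and subject[pos] == op[1]:
                                            pc, pos = pc + 1, pos + 1
                                        elif stack:              # fail: backtrack
                                            pc, pos = stack.pop()
                                        else:
                                            return None          # overall failure
                                    elif op[0] == "choice":      # push backtrack point
                                        stack.append((pc + op[1], pos))
                                        pc += 1
                                    elif op[0] == "commit":      # alternative succeeded
                                        stack.pop()
                                        pc += op[1]
                                    elif op[0] == "end":
                                        return pos               # chars consumed

                            # Compiled form of the pattern ('a' / 'b')* 'c':
                            prog = [("choice", 6), ("choice", 3), ("char", "a"),
                                    ("commit", 2), ("char", "b"), ("commit", -5),
                                    ("char", "c"), ("end",)]
                            print(run(prog, "abc"))  # 3
                            ```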

                            1. 3

                              I use LPeg a lot. As one example, I created an LPeg expression that generates another LPeg expression to parse dates based upon the format string used by strftime(). (Code).

                            2. 4

                              I’ve done this for a smart contract because the existing compilers didn’t generate good code and that would directly lead to higher user costs forever.

                              1. 3

                                I did not create the language, but INRAC was used for, as far as I can determine, two programs, only one of which I’ve been able to obtain (both programs do similar things). The better known of the two is Racter, which supposedly wrote The Policeman’s Beard Is Half-Constructed. The language is … um … rather mind-blowing [1][2].

                                The only “trick” I can think of is write the program in a language that makes sense for the problem, then write the compiler.

                                [1]

                                [2] The above is the most in-depth investigation into INRAC on the Internet, which is sad because I wrote the above and I’d still like to learn more, but without the actual compiler (or manual) it’s pretty difficult.

                                1. 3

                                  I have many times, mainly for purposes of program synthesis.

                                  I think it’s quite useful, especially if you’re iterating on a problem and looking for some way to describe that problem naturally, without having to think about implementation details.

                                  1. 2

                                    Sounds interesting, can you share any more details about this work?

                                    1. 3

                                      absolutely! I work as “red team” (previously in adversary simulation, currently in more technical correctness types of situations), so very often I’m presented with:

                                      1. some set of “things” I need to “do” (API calls, native or web; some format I need to construct; some code I need to generate many copies of with minor variance; what have you)
                                      2. a system that I’m not supposed to be on with limited tooling (“living off the land”)
                                      3. with a large amount of repetition

                                      so often the easiest way is to simply write something in a simpler format that generates the steps above so that attack chains can be more easily constructed.

                                      A simple example: I had Remote Code Execution on a host via two languages (one was the platform’s scripting language, the other was the native Unix/Windows shell), but only 100 characters at a time (as they were packed in images, with no bindings to Python). So, rather than attempt to write a Python binding or fight with making a generic system using the libraries provided, I:

                                      1. wrote a simple description format (creds, commands to be run, host, &c)
                                      2. wrote a compiler to the long horrible chain of things I just described that produced “ok” C
                                      3. delivered that to team + client for proof of concept

                                      it’s a weird example of basically partial evaluation, but it works for me, and is usually easier for me to digest than attempting to get all the moving pieces in one go.
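
                                      (As a hypothetical sketch of that kind of generator, with the staging file and names invented and shell quoting ignored, just to show the shape:)

                                      ```python
                                      # Hypothetical sketch: rebuild a long command on the
                                      # target in <=100-character requests, then run it.
                                      LIMIT = 100

                                      def deliver(cmd, path="/tmp/s"):
                                          """Yield shell one-liners, each under LIMIT chars,
                                          that append `cmd` to a file and then execute it."""
                                          wrapper = "echo -n '%s'>>" + path
                                          room = LIMIT - len(wrapper % "")
                                          for i in range(0, len(cmd), room):
                                              yield wrapper % cmd[i:i + room]
                                          yield "sh " + path

                                      for line in deliver("tar czf - /etc | base64"):
                                          print(line)
                                      ```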

                                  2. 2

                                    I’m sorry but I don’t quite follow. Are you looking for adding effectively another level of compilation just to make it go faster? Why not just write in the host/faster language?

                                    1. 3

                                      Yes, I am talking about adding another level of compilation to make it run faster.

                                      Part of the reason to have a custom language is that the language expresses certain things that can’t be expressed easily in the host language, and part of it is that the program will be targeting different hardware.

                                      Edit: yeah, it may be kind of weird but I’m mostly wondering if anyone has written about doing something similar.

                                      1. 5

                                        Ah, gotcha.

                                        One example that’s kinda/sorta there is QuakeC.

                                        1. 3

                                          Check out profile-guided optimization, whole-program optimizing compilers, and superoptimization, separately or together.

                                      2. 2

                                        If there’s only one program that will ever be written in it, then hand-translate it to C and compile with -O2, or design an ASIC for it. I’m not clear on what OP is getting at, exactly.

                                        1. 2

                                          Not really a compiler, but I’ve made my own ‘linker’ that’s optimized for small file sizes. It has too many assumptions (e.g. no dynamic linking, no .data section) to be usable for more than one thing, though.

                                          1. 2

                                            There are some tricks in Elixir that might qualify. A few DSLs do a lot of optimizations at compile time. One example is Plug.

                                            1. 1

                                              There’s the 99 language…