  2. 6

    I’m not really convinced by his article. He brings up a lot of good points, but fails to explain why JITs are so popular if they’re so terrible. He also seems to have conveniently forgotten to mention Lua, whose LuaJIT is, if I’m not mistaken, one of the fastest (if not the fastest) JITs around.

    1. 8

      Some of the JS JITs are faster (as in the executed program will eventually run faster). One of the great things about LuaJIT, however, is that it has extremely low startup overhead, and a lot of work went into making the interpreter fast. It’s probably a great example of what a JIT is “supposed to be”. I’ve never liked runtimes that take a long time to “warm up”. LuaJIT is fast from the start, and then uses the JIT to get to “very fast” as necessary.

      One can try to be “as fast as C”, but it’s an asymptotic curve. Pushing farther and farther up the curve requires more and more “booster” rockets, and the next thing you know you’re the space shuttle with a 24-hour launch procedure. All I wanted to do was fly a kite!

      Footnote: I think the first example of a JIT may be Thompson’s regex implementation, which compiled regexes to machine code implementing an NFA.
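
      Not Thompson’s actual machine-code generator, of course, but here’s a toy sketch of the idea: turn a pattern into executable code at runtime, then run that code directly instead of re-interpreting the pattern on every match. The tiny pattern language (literals plus “.” wildcards) and all the names are made up for illustration.

      ```python
      # Toy sketch of runtime code generation, the core move behind a JIT:
      # compile a pattern of literals and '.' wildcards into a Python
      # function once, then call that function directly on every match.
      def compile_pattern(pattern: str):
          checks = []
          for i, ch in enumerate(pattern):
              if ch == '.':
                  continue  # wildcard: nothing to check at this position
              checks.append(f"s[{i}] == {ch!r}")
          cond = " and ".join(checks) or "True"
          src = (
              f"def match(s):\n"
              f"    return len(s) == {len(pattern)} and {cond}\n"
          )
          namespace = {}
          exec(compile(src, "<generated>", "exec"), namespace)  # compile at runtime
          return namespace["match"]

      match_cat = compile_pattern("c.t")
      print(match_cat("cat"), match_cat("cot"), match_cat("dog"))  # True True False
      ```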

      1. 2

        I don’t think JIT is terrible, but I think it over-promises and under-delivers in a lot of cases.

        Sure, in theory a JIT can beat optimized C: it knows exactly which machine it’s running on, exactly what code is getting run, where the hot spots are, and so on. But few JITs actually do a good job of using that information effectively.
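
        To picture the “knows what code is getting run” part, here’s a rough sketch (threshold and names invented) of tiered execution: count invocations, and once a function turns out to be hot, swap in an “optimized” version. A real JIT would emit machine code and pick the specialization from an actual profile; here the optimized version is just another hand-written Python function standing in for that.

        ```python
        # Hedged sketch of hot-spot detection and tier-up; the threshold and
        # names are made up for illustration.
        HOT_THRESHOLD = 10_000

        class TieredFunction:
            def __init__(self, baseline, optimize):
                self.baseline = baseline   # the generic, interpreter-style path
                self.optimize = optimize   # produces the "compiled" version on demand
                self.compiled = None
                self.calls = 0

            def __call__(self, *args):
                if self.compiled is not None:
                    return self.compiled(*args)      # fast path once hot
                self.calls += 1
                if self.calls >= HOT_THRESHOLD:
                    self.compiled = self.optimize()  # tier up exactly once
                return self.baseline(*args)

        def distance_generic(p, q):
            # handles points of any dimension
            return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

        def make_distance_2d():
            # stands in for the machine code a JIT might emit after observing
            # that every call so far passed 2-D points
            def distance_2d(p, q):
                if len(p) != 2 or len(q) != 2:       # guard: bail out if the assumption breaks
                    return distance_generic(p, q)
                dx, dy = p[0] - q[0], p[1] - q[1]
                return (dx * dx + dy * dy) ** 0.5
            return distance_2d

        distance = TieredFunction(distance_generic, make_distance_2d)
        for _ in range(20_000):
            distance((0.0, 0.0), (3.0, 4.0))         # 5.0 either way; later calls take the tiered-up path
        ```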

        1. 2

          Case in point: http://jvns.ca/blog/2015/09/10/a-millisecond-isnt-fast-and-how-we-fixed-it/

          Specializing the call to math.pow is exactly the kind of optimization that I’ve been told allows Java to be faster than C. I’ve been hearing this for 15 years now. But… it didn’t happen.
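
          For what it’s worth, here’s a hedged sketch of the kind of call-site specialization being talked about (not the actual fix from the linked post; the names and guard logic are invented): after observing that the exponent at a site is always 2, replace the general pow with a multiply, with a guard that deoptimizes if the assumption ever breaks.

          ```python
          # Hedged sketch of speculative call-site specialization with a guard
          # and deoptimization; all names are made up for illustration.
          import math

          class PowCallSite:
              def __init__(self):
                  self.observed_exponent = None
                  self.specialized = False

              def call(self, x, n):
                  if self.specialized:
                      if n == 2:                 # guard: the speculation still holds
                          return x * x
                      self.specialized = False   # deoptimize: back to the general path
                  if self.observed_exponent is None:
                      self.observed_exponent = n # record what actually flew by
                  elif self.observed_exponent == n == 2:
                      self.specialized = True    # speculate: this site always squares
                  return math.pow(x, n)

          site = PowCallSite()
          print(site.call(3.0, 2), site.call(4.0, 2), site.call(2.0, 10))  # 9.0 16.0 1024.0
          ```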

        2. 1

          Yeah, LuaJIT is amazing. It’s only a few times slower than AOT-compiled code, and in some restricted cases it can even be pretty much on par with it.

          He basically covers that here:

          > However, it turns out that one of the biggest benefits of JIT compilers in dynamic languages is figuring out the actual types of variables. This is a problem that is theoretically intractable (equivalent to the halting problem) and practically fiendishly difficult to do at compile time for a dynamic language. It is trivial to do at runtime, all you need to do is record the actual types as they fly by. If you are doing Polymorphic Inline Caching, just look at the contents of the caches after a while. It is also largely trivial to do for a statically typed language at compile time, because the types are right there in the source code!

          JITs ARE a win for languages like Lua and JS, because if you’re compiling statically you have to be able to handle any type whenever you dispatch a function. If you’re compiling a static language, they’re a loss, for the most part.
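
          A minimal sketch of the Polymorphic Inline Caching the quoted passage mentions, with invented names: each call site keeps a small map from receiver type to resolved method, so repeated calls skip the generic lookup, and the runtime can later read the observed types straight out of the cache.

          ```python
          # Minimal sketch of a polymorphic inline cache (PIC); all names are
          # made up for illustration.
          class InlineCache:
              def __init__(self, method_name, max_entries=4):
                  self.method_name = method_name
                  self.max_entries = max_entries
                  self.cache = {}                      # receiver type -> resolved method

              def call(self, receiver, *args):
                  klass = type(receiver)
                  target = self.cache.get(klass)
                  if target is None:                   # miss: do the slow generic lookup
                      target = getattr(klass, self.method_name)
                      if len(self.cache) < self.max_entries:
                          self.cache[klass] = target   # remember it for next time
                  return target(receiver, *args)

              def observed_types(self):
                  # "just look at the contents of the caches after a while"
                  return list(self.cache)

          class Circle:
              def area(self): return 3.14159
          class Square:
              def area(self): return 4.0

          site = InlineCache("area")
          for shape in [Circle(), Square(), Circle()]:
              site.call(shape)
          print(site.observed_types())   # the receiver classes this call site actually saw
          ```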