1. 7

  2. 9

    I’d be interested if they could back this up with code and measurements.

    They make a couple of strong statements that seem to be based on empirical experiments, but they do not share their raw data or how to reproduce this result. Without source code, it’s very hard to take these claims at face value.

    The author doesn’t even mention which language they were using, whether a large JavaScript runtime had to ship along with it, etc.

    1. 1

      I agree. The only reason I don’t dismiss it outright is because it’s Daniel Lemire.

    2. 7

      This is the wrong question to ask. WebAssembly can’t be slower or faster than JavaScript; different implementations of WebAssembly or JavaScript runtimes can be slower or faster than each other. JavaScript runtimes saw enormous improvements in performance over a period where the language itself changed relatively little. The relative performance of future runtimes could easily shift: WebAssembly is much newer than JavaScript and as far as I know doesn’t even have any JITs yet, though it’s also possible that the potential performance gains are smaller there.

      It’s true that there might be some fundamental challenges that make implementing a fast interpreter or compiler relatively more difficult. Garbage collection poses challenges for workloads that want to be “fast” or at least pause-free, which is one reason WebAssembly is attractive to many developers. I’m sure there exist at least some optimization opportunities for higher-level JavaScript that don’t exist in WebAssembly. But none of these points, which are vastly more nuanced than “X is faster than Y”, are addressed here.

      Further, there’s no mention of the exact toolchain used here. My understanding from dabbling in Rust targeting WebAssembly is that changes in compiler versions, settings, and optimization tools can make enormous differences in both speed and compiled size; I know some people report much better results from using the direct wasm32-unknown-unknown target instead of Emscripten, which this benchmark is likely to have used.
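
      For anyone who wants to try this themselves, here’s a minimal sketch of the Rust side of such a benchmark (the function and build commands are illustrative, not from the article), assuming the wasm32-unknown-unknown target is installed via rustup:

      ```rust
      // The same source can be run natively for testing or built for the
      // browser without Emscripten:
      //
      //   rustup target add wasm32-unknown-unknown
      //   cargo build --release --target wasm32-unknown-unknown
      //
      // Optimization settings in Cargo.toml (e.g. `opt-level`, `lto`) can
      // noticeably change both the speed and the size of the emitted .wasm.

      #[no_mangle]
      pub extern "C" fn sum_squares(n: u32) -> u64 {
          // Plain integer loop; no allocation, no GC involvement.
          let mut total: u64 = 0;
          let mut i: u64 = 1;
          while i <= n as u64 {
              total += i * i;
              i += 1;
          }
          total
      }

      fn main() {
          println!("{}", sum_squares(10)); // 385
      }
      ```

      Comparing builds of the same function across compiler versions and flags is exactly the kind of detail the article would need to report for its numbers to be reproducible.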

      1. 4

        There’s no data attached to this article; all we are given are vague descriptions like “we found Microsoft Edge to be quite terrible” and “JavaScript is superbly fast.” They didn’t discuss their method for coming up with these conclusions, their testing methods, what kinds of areas they were testing, or any info at all really other than vague statements.

        Until some of these things are provided, there’s no reason to take anything written in this article as anything more than baseless claims. I’m pretty surprised, seeing as this was written by a university professor, to see these claims made with absolutely no support. Even if it’s a “teaser” for some paper coming later, why post this now and just ask everyone to take it on faith?

        1. 3

          I wonder if they were sending wasm.js over every time as well.

          1. 2

            As usual the answer seems to be “no”.

            1. 1

              Not at all surprising. A JIT will beat AOT compilation on long-running performance, and higher-level representations will be smaller than lower-level ones. Start-up time is the only place AOT will win, but if code is delivered over the network those benefits are easily lost. And this is without considering the past decade of intense competition between some of the largest tech companies on their JS performance.

              1. 2

                I would think that in theory statically typed languages can achieve higher overall performance than dynamically typed languages, mainly because types give the compiler more actionable information and can move many decisions from runtime to compile time. This is why I believe that in the long term, WASM will definitely outperform JS. Another thing to consider is that profiling-based compilation (collecting runtime data and optimizing further based on usage patterns) can also be applied to WASM, so there is an opportunity for WASM to run even faster than native code thanks to dynamic optimization. It’s still relatively new tech, but I think it will take the Web to its next level as a platform.
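
                As a toy illustration of the point about types (a hypothetical example, not from the article): with static types the compiler already knows at compile time that this is integer arithmetic, so it can emit a direct machine add, whereas a dynamically typed engine must first observe the operand types at runtime before it can specialize.

                ```rust
                // The compiler knows `a` and `b` are i32 at compile time, so
                // this lowers to a single integer add with no runtime type
                // checks or deoptimization guards.
                fn add(a: i32, b: i32) -> i32 {
                    a + b
                }

                // A JS JIT only learns the operand types by running the code:
                // it inserts guards and must bail out to a slower path if a
                // string or object ever shows up at the same call site.
                fn main() {
                    println!("{}", add(2, 3)); // 5
                }
                ```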