1. 6
  1.  

  2. 2

    I could see this as wanting to keep the implementation as simple as possible, so the question becomes: would we actually want this safety built in, or is it enough to put the whole thing into a “secure box”?

    1. 4

      A core design principle of WebAssembly was that it be able to provide a compilation target for more or less any language. That meant the language’s object model would be opaque to the VM; it also means the lifetime of an allocation is opaque to the VM. The result is that the WASM VM’s basic memory model has to be a single blob of addressable memory. Some of this is also a legacy of Mozilla’s “JS subset” implementation (asm.js), which operated on typed arrays.
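
      To make that concrete, here’s a toy sketch (not any real engine’s code) of the view the VM ends up with: one flat byte array, with guest “pointers” as plain integer offsets into it.

      ```cpp
      #include <cstdint>
      #include <cstring>
      #include <vector>

      // Toy model of a wasm VM's memory: the whole guest address space
      // is one flat byte array.
      struct LinearMemory {
          std::vector<uint8_t> bytes;

          // A guest "pointer" is just a 32-bit offset; the VM knows
          // nothing about the objects or lifetimes living inside the blob.
          uint32_t load_u32(uint32_t addr) const {
              uint32_t v;
              // Host assumed little-endian, matching wasm's layout.
              std::memcpy(&v, bytes.data() + addr, sizeof v);
              return v;
          }
          void store_u32(uint32_t addr, uint32_t v) {
              std::memcpy(bytes.data() + addr, &v, sizeof v);
          }
      };
      ```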

      This brings other constraints with it - no builtin standard library, no standard types, no object introspection - and because it’s intended to be used in the browser, validation and launch must be fast, since caching generated code is much less feasible than in a native app; hence the control flow restrictions.
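
      As a hedged illustration of the “no builtin standard library” point: any toolchain targeting wasm has to ship its own runtime inside the module. Something like this toy bump allocator over the linear-memory blob (toy_malloc is a made-up name; real allocators also grow the blob with wasm’s memory.grow, which this one doesn’t):

      ```cpp
      #include <cstddef>
      #include <cstdint>

      static uint8_t heap[64 * 1024]; // stands in for a slice of linear memory
      static size_t  heap_top = 0;

      // Toy stand-in for the allocator a language runtime must bring along.
      void* toy_malloc(size_t n) {
          n = (n + 7) & ~static_cast<size_t>(7);          // 8-byte align
          if (n > sizeof heap - heap_top) return nullptr; // toy "OOM"
          void* p = heap + heap_top;
          heap_top += n;
          return p;
      }
      ```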

      The result of all of this is that you can compile Haskell to wasm without a problem, or .NET, or C++, and they all run on the same VM, none of them incurring unreasonable language-specific perf penalties (for example, you cannot compile Haskell to the CLR or JVM without a significant performance penalty, but compiling it to WASM works fine). C/C++ can treat pointers and integers as interchangeably and unsafely as they like without compromising the browser. And .NET and JVM code can apparently (based on other comments, so I could be totally wrong here) run in that WASM VM as well.
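
      The reason those pointer/integer games can’t hurt the host is that, conceptually, every guest access is checked against the edge of the blob. A rough sketch of that check, with a made-up checked_load_u32 (real engines often rely on guard pages rather than an explicit branch):

      ```cpp
      #include <cstdint>
      #include <cstring>
      #include <stdexcept>
      #include <vector>

      // Trap if any byte of the 4-byte access falls outside linear memory.
      // The guest can scribble over its own data, but never the host's.
      uint32_t checked_load_u32(const std::vector<uint8_t>& mem, uint32_t addr) {
          if (addr > mem.size() || mem.size() - addr < 4)
              throw std::runtime_error("wasm trap: out-of-bounds memory access");
          uint32_t v;
          std::memcpy(&v, mem.data() + addr, sizeof v);
          return v;
      }
      ```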

      1. 1

        We of course also want inside-box safety. The question is cost and tradeoff.
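
        To spell out what’s missing today: an overflow that stays inside linear memory traps nothing, it just silently corrupts the guest’s own data. A toy demonstration (offsets are illustrative):

        ```cpp
        #include <cstdint>
        #include <cstring>
        #include <vector>

        int main() {
            std::vector<uint8_t> mem(64 * 1024); // the module's linear memory
            uint32_t buf  = 1024;                // a 4-byte guest buffer
            uint32_t next = 1028;                // its innocent neighbour

            uint32_t a = 42, b = 7;
            std::memcpy(mem.data() + next, &a, 4);
            std::memcpy(mem.data() + buf + 4, &b, 4); // overflow past 'buf':
                                                      // in bounds, so no trap
            // 'next' now holds 7, not 42 - and no check ever fired.
        }
        ```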

        1. 1

          If Java and .NET do it just fine (and they do), there’s no perf cost excuse there.

          1. 1

            No, they don’t. C++ code compiled with /clr:safe does slow down. (It doesn’t slow down without the option, but it doesn’t provide inside-box safety either.)

            1. 1

              Compared to /clr:pure, yes, due to some optimisations missed in earlier .NET CLR versions (the move to Core alleviated most of that overhead… though it initially came with the removal of C++/CLI outright before it was added back). And of course, all C#/F# code runs with those checks enabled all the time.
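
              For anyone wondering where that cost actually shows up, a rough C++/CLI sketch (illustrative only): verifiable managed code - what /clr:safe required, and what C#/F# emit anyway - range-checks every array index, while native-style pointer arithmetic, fine under plain /clr, is exactly what verification rejects.

              ```cpp
              // Every index into a managed array is range-checked; the JIT
              // can often hoist the check out of a simple loop like this.
              int sum_managed(array<int>^ a) {
                  int s = 0;
                  for (int i = 0; i < a->Length; ++i)
                      s += a[i];
                  return s;
              }

              // Fine under plain /clr; rejected as unverifiable under /clr:safe.
              int sum_native(const int* p, int n) {
                  int s = 0;
                  for (int i = 0; i < n; ++i)
                      s += p[i]; // raw pointer math, no checks anywhere
                  return s;
              }
              ```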

              Having the option is always better than not having it for things like this though.

            2. 1

              There’s a significant penalty for languages with substantially different type systems when running under .NET and the JVM. That’s why you tend to get similar, but slightly different, versions of languages - Scala, F#, etc. instead of Haskell/*ML - the slight differences being precisely the changes needed to avoid the expensive impedance mismatch between incompatible type systems. The real Haskell type system cannot be translated to either .NET or the JVM - even .NET’s VM-level awareness of generic types stops short of features like higher-kinded types - and as such it takes a bunch of performance hits. Trust me on this.

              Similarly, compiling C and C++ to .NET requires sacrificing some pointer shenanigans that wasm allows (for better or for worse).
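
              One concrete shenanigan of that kind (made-up helper names, illustrative only): tagging the low bit of an aligned pointer. Under wasm a pointer is literally an i32 offset, so this just works; verifiable .NET has no equivalent, because managed references can’t be masked and cast like integers.

              ```cpp
              #include <cstdint>

              struct Node { int value; };

              // Aligned allocations leave the low bit free to smuggle a flag in.
              uintptr_t tag(Node* p)           { return reinterpret_cast<uintptr_t>(p) | 1u; }
              bool      is_tagged(uintptr_t v) { return (v & 1u) != 0; }
              Node*     untag(uintptr_t v)     { return reinterpret_cast<Node*>(v & ~static_cast<uintptr_t>(1)); }
              ```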