1. 42
    1. 20

      The next step is to remove the web browser from the equation so that we can deliver experiences without worrying about the DOM. Maybe we’ll build something called “Just a Runtime Environment” (JRE), or maybe a “Collaborative Language Runtime” (CLR), and then developers could have “write once, run anywhere” convenience.

      Everything old is new again, though I will grant, the browser has gotten the farthest in the “everybody can run it” sweepstakes. Pity the DOM and JS are such crap, though.

      1. [Comment removed by author]

    2. 9

      This is a win, but not for Mozilla in terms of Mozilla gaining something. It’s a win for openness and interoperability :-)

    3. 28
      Mozilla may have won, but all of us have lost.

      WebAssembly (for brevity, WASM) is a terrible idea being carried through with excellent vision and engineering effort. It has the following fundamental issues that nobody seems to care about:

      It further erodes the transparency to end-users. A process that began with minification and concatenation will be complete once WebAssembly is in place. Without a full debugger and decompiler, end-users will lose the ability to easily look at what their browsers are loading and executing. The browser moves further and further from being a document viewer toward being a beachhead of control on the user’s machine.

      It further erodes the freedom of end-users. It will be no surprise in the next few years when DRM (enabled via code signing and other supporting technologies) is rolled out even further into the browsers. This will prevent users from running their own modified scripts on pages, from tweaking via proxy other scripts, and from being able to selectively replace things in their software stack that don’t work. It’s hard now, but it’s only going to get worse, because of “security concerns” (read: copyright enforcement).

      It massively fragments the development ecosystem. At a time when anybody with half a brain realizes that the churn and balkanization of the Javascript world is a pox, we are embracing the idea that somehow everything will be better once browser software can be written in Python, Clojure, Ruby, C, C++, D, Rust, Common Lisp, Elixir, Erlang, Forth, FORTRAN, COBOL, and who knows what else. Man-centuries of effort will be wasted bringing the tooling in each of those communities up to something resembling the current state of the frontend, all so people can avoid spending a couple of weeks learning the vulgar lingua franca of Javascript. This is not progress.

      It reduces the transferability of front-end skill sets. For better or worse, most front-end skills transfer between gigs. A blob of jQuery and React and Angular looks roughly similar across jobs: arrays and strings work the same, functions and closures work the same, and so forth. WASM will remove this convenience. Ever had to relearn a codebase written in CoffeeScript instead of boring ES5? Or even had to learn the quirks of ES6 on top of ES5? Now, imagine you have to learn an entire new programming language just to do the same goddamn thing you were doing ten years ago. This makes switching jobs and even hiring devs harder if the stack doesn’t use boring JS.

      It decreases the pressure to fix JS and DOM APIs. Every problem in CS is solvable with another layer of abstraction, hence the “benefit” of WASM: the “if you don’t like this API, use this other language to hack around it!” escape hatch. Except that usually leads to stagnation in the base layer. JS’s own standard library is a poverty-ridden ball of shit precisely because TC39 and others have focused on interesting language features and support for compiler wankery at the expense of the needs of the working developer.

      It solves problems that we don’t have. The majority of web traffic is boring CRUD operations, despite what the snake-oil salesmen at conferences and vendors want to tell you. The stuff that WASM would truly benefit, “low-level” coding for games and whatnot? All of that is handled better in native code already. If performance is the important thing, you pull on your big kid pants and write C/C++/D/Rust and, if you’re unlucky, some assembly, or you bust out some CUDA or OpenCL. If performance isn’t important, you just write in JS. This magical market segment of “Man, I really want to run heavy compilers and numerical codes in the browser” doesn’t really exist.

      It creates bad vendor incentives. Remember up above when I complained about transparency for users? So, imagine a future job where a contractor delivers a pile of WebAssembly to make a site work. Said vendor dies, or stops working on the project, and now the client needs to do maintenance on it. How’s that going to work again? What about when the vendor bakes in a killswitch to disable the software if they aren’t kept on a “maintenance” contract? Again, opaque binary, what do?

      It is fundamentally incompatible with the licensing and learning norms of the Web. At some level in our gut, we all knew that people could see how our code worked, and that we could benefit from learning how their code worked. This sort of informality allowed us all to speak more freely and frankly about our code and our practices, and for two decades the web flourished. So, all the webshits have played fast and loose (BSD/Apache/etc.) with open source licensing because at the end of the day, it was trivial to look at the client code and see how it worked. It was also easy to spot when somebody was using code we’d written, or using a particular library–the “moral rights” of authorship were not so threatened. WASM kills this freedom.

      ~

      Of course, none of this matters, since FB and GOOG are going to use WASM to stream ads to people faster and watch them more closely, and every webshit will lap it up as progress and folks giving conference talks will sell more tickets on their new framework for language XYZ for doing web shit.

      Seriously, fuck this myopic industry.

      1. 28

        Can’t agree with most of your points.

        Have you tried decompiling Java classes? The code produced by the decompilers is very clear, and easy to read. Sometimes more clear than trying to beautify minified code. Unless the bytecode is obfuscated, but JS code can be obfuscated as well.

        DRM is only dangerous if you want to participate in it. You can always reject DRM content.

        It might fragment the ecosystem, but sticking to one solution is hardly progress as well. Also there are lots of people for which JavaScript is the biggest reason they stay away from web development.

        About reducing transferability – the same can be said about the normal, ‘desktop’ world, and people manage to cope with it. Some classes of applications require one set of languages (Java, Scala), others require different languages (C, C++). I don’t really see many problems with this; languages are different tools for different jobs, and one language for everything doesn’t exist (can’t exist).

        WASM fixes JS by replacing it. That’s the best way of fixing JS from my point of view! ;)

        And what do you mean we don’t have problems – there are lots of them. Downloading minified JavaScript that was transpiled from TypeScript and running it seems like a big hack. The speed of more complicated web applications is slow as hell on a new i7 CPU (Google Docs, Google Maps). We have so many problems it’s a disaster.

        Have you tried reading the source code of Google Docs? I bet you know that there are lots of scripts nowadays that start with “var fpa=function(){var a=_.ft.T” and finish with the same sort of trash. Compilation to bytecode alone doesn’t deny access to the source, because bytecode can still be decompiled.

        1. 2

          The code produced by the decompilers is very clear, and easy to read.

          How many man-decades of work were put into that tooling, one wonders? Also, “easy to read” != “easy to maintain and modify”.

          DRM is only dangerous if you want to participate in it. You can always reject DRM content.

          Yeah, like the W3C did with–oh wait, no, they rolled over on their fucking belly. Well, at least Intel and Apple and Nvidia–wait, shit, they did too. And Elvis didn’t do no drugs!

          sticking to one solution is hardly progress as well

          Vanilla JS has worked decently for 20 years. SQL for over 40. Sometimes a working solution is enough.

          This constant neophilia is killing us, and observations like yours are predicated on the idea that somehow we must keep re-solving solved problems or we aren’t making “progress”. Shovels haven’t changed significantly in two thousand years–does that mean progress in civil engineering stopped?

          the same can be said about the normal, ‘desktop’ world, and people manage to cope with it.

          They coped with it by killing native apps and moving onto the Web. And are trying to move the Web back to the desktop/mobile/server with “cross-platform” JS debacles like Electron and NW.JS.

          The speed of more complicated web applications is slow like hell on a new i7 CPU (google docs, google maps).

          Works on my machine, sorry I guess? Most users don’t even know what slow is, compared to ten years ago (or God forbid 20!).

          And those big companies are going to write even more slow and bloated shit once they have WASM, because they can bring over their crufty codebases wholesale.

          1. 11

            This constant neophilia is killing us, and observations like yours are predicated on the idea that somehow we must keep re-solving solved problems or we aren’t making “progress”.

            If you disregard the actual improvements a new technology makes, you can easily dismiss it as “change for the sake of change”, or as you say, “neophilia”. I see this comment every day on the Internet, like this fellow who claimed that someone used Rust for a project just to be “buzzword compliant”.

            People don’t agree that these problems are “solved” and I welcome their attempts to fix the status quo.

            1. 0

              Well, first, that fellow wasn’t necessarily wrong in pointing out that wrapping a perfectly functioning C/C++ blob in Rust didn’t immediately make sense.

              People don’t agree that these problems are “solved” and I welcome their attempts to fix the status quo.

              Do you do web shit? Do you do frontend web shit? Do you do application programming in an ecosystem that changes rapidly?

              If you answered “no” to any of the above questions then it is no surprise that you would have that (misguided and incorrect) opinion. In the abstract, sure, we can all talk about the magic pixie dust of progress and finding better tools, but in the concrete here and now it’s yakshaving on a grand scale.

              1. [Comment removed by author]

                1. 1

                  The more I read @angersock’s replies, the more I feel that his opinion resonates with mine.

                  I have seen few arguments that improvements in what angersock calls the “webshit” platform, or in its “standard” libraries, are actually improvements. This is what I think is the “buzz”.

                  I applaud that you are a committer to Node.js, but tell me, as a PL researcher, what does Node have to offer more than a wrapper around libev using the V8 macro system (called Javascript) on top of some clever tricks, in a language that inherits more cruft than craft?

                  My opinion is that WebAssembly, if it is a real assembly language, is only great in the sense that we, as a community, provide more work for ourselves than there are people available to do. Who is actually going to clean all this shit up, instead of believing in pseudo-improvements and magification?

                  1. 8

                    I applaud that you are a committer to Node.js, but tell me, as a PL researcher, what does Node have to offer more than a wrapper around libev using the V8 macro system (called Javascript) on top of some clever tricks, in a language that inherits more cruft than craft?

                    I’m not a fan of Node or of languages without strong static typing. I am interested in WASM for its potential to exorcise them from the ecosystem (or at least from the parts I have to personally use).

                    My opinion is that WebAssembly, if it is a real assembly language, is only great in the sense that we, as a community, provide more work for ourselves than there are people available to do. Who is actually going to clean all this shit up, instead of believing in pseudo-improvements and magification?

                    Sorry, I am not 100% sure what you’re trying to say here. Restate please?

                    1. 1

                      Basically, what I mean to say is that a lot of code will be written, people will find tricks and clever hacks, and eventually WebAssembly will also lag behind, just to maintain some sort of compatibility.

                      From my own experience, I work with clients that run 16-bit Windows applications and even DOS stuff, reverse engineering them and applying bytecode patches just to work around latency or ISA bugs.

                      That stuff will also happen with WebAssembly, giving us huge amounts of cruft. It already happened with JS: it’s a language that cannot evolve because of compatibility constraints, hence a lot of “standard library” solutions are offered to “fix” these problems. They all just build up.

                      I can’t even open some websites on my 7-year-old laptop, because of the shit and crap thrashing the CPU. But running (possibly more advanced in terms of functionality) old DOS apps still works fine.

                      Are we going to maintain every script or binary to “keep up” with what you argued against, the neophiles? No. We just start new platforms all the time, so we do not have to deal with the “old shit” anymore. That is my point about our industry/community: we keep adding until we lose control, and then we start all over again! It’s what keeps us in busyness.

                      If that’s a recipe for disaster, I would rather believe in a “community” that keeps “improving” to hide this fact. Way more comfortable. As if our actual problems magically disappear with each new layer of crap.

                  2. 0

                    provide more work for ourselves than there are people available to do.

                    I think you hit this right on the mark.

                    The webdevs are setting themselves up for life.

                2. 0

                  Thanks for the background information, that makes your stated opinion have better context.

                  Argument-by-neophilia is about as abstract as it gets.

                  If that was your takeaway from my argument, there’s been a miscommunication. The context (without rehashing all the same tired shit everybody’s been saying about churn in JS) is that in the web front-end (meaning Javascript and the DOM), there are too many choices and they are changing too rapidly. If you care to examine any concrete cases, like ES6 vs. ES5 (and the incomplete browser support without toolchains), or the evolution of Angular 1.x vs. 2.x, or React vs. Vue vs. React+redux vs. whatever, or Webpack vs. Browserify vs. Grunt vs. Gulp vs. Make, it’s pretty fucking obvious there’s a problem.

                  If you’re going to argue that we don’t need a particular new tool, then your argument must take into consideration the actual concrete improvements that tool brings to the table.

                  Who gives a shit about the improvements when seemingly half the tools exist just to clean up the deficiencies of other tools, while creating new deficiencies of their own?

                  Like, there’s some brain cancer among the web dev community: the idea that getting one little improvement is somehow worth the cost in retooling, retraining, and rewriting. It’s absurd.

                  Arthur C. Clarke wrote about this already.

                  1. 4

                    The context is that in the web front-end (meaning Javascript and the DOM)

                    The person you were replying to was replying to you saying that WASM “massively fragments the development ecosystem” and that you don’t like browser software being written in other languages. As someone interested in advancing the state-of-the-art in PL, I am naturally against saying JavaScript (or SQL, which you also mentioned) is good enough.

                    I am happy to re-focus on web front-end though. I love the constant buzz, personally. If I have a problem with my tool to the point where I am willing to rewrite code, or am writing new software, there is the chance that I can find a better solution. If I’m happy with what I have, then I ignore it. Where’s the downside?

                    For example, I used to use jQuery. After many years I was fed up enough with its limitations that I evaluated all of the new front-end libraries and settled on React for a new project. I still use React years later. I still use jQuery years later too! Vue appears to be popular now, but I have no particular reason to look into it, and don’t consider its existence a problem.

                    The inclusion of ES6 on your list is particularly baffling to me because it is backwards compatible with ES5 and comes with some sorely needed improvements like arrow functions. And ES7 asynchronous functions are useful enough that I’ve been using them with Babel since before they were standardized (node callbacks suck). It’s one thing to be angry about breaking backwards compatibility, but this?
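                    To make the contrast concrete, here is a hedged sketch of what that convenience buys: the same two sequential reads in node-style callback form versus an ES2017 async function. fakeRead is an invented stand-in for a callback API, not a real library call.

                    ```javascript
                    // Invented node-style callback API (a stand-in, not a real module).
                    function fakeRead(key, cb) {
                      setImmediate(() => cb(null, key.toUpperCase()));
                    }

                    // Callback style: sequencing and error handling are manual at every step.
                    function readTwiceCallbacks(done) {
                      fakeRead("a", (err, first) => {
                        if (err) return done(err);
                        fakeRead("b", (err2, second) => {
                          if (err2) return done(err2);
                          done(null, first + second);
                        });
                      });
                    }

                    // Async/await over a promisified wrapper: the flow reads top to bottom.
                    const readP = (key) =>
                      new Promise((res, rej) => fakeRead(key, (err, v) => (err ? rej(err) : res(v))));

                    async function readTwiceAsync() {
                      const first = await readP("a");
                      const second = await readP("b");
                      return first + second;
                    }
                    ```

                    Both paths produce the same result; the difference is purely in how the control flow reads.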

                    Who gives a shit about the improvements when seemingly half the tools exist just to clean up the deficiencies of other tools, while creating new deficiencies of their own?

                    It’s not a given that a tool creates new deficiencies. Tech criticism is important so we as a community can decide which tools are worth taking the plunge for and which aren’t. I don’t want to look into every new GitHub project either! But an over general position that we shouldn’t try to improve on the status quo for fear that we might make a mistake isn’t the solution either.

                    Since you mentioned Make, I’ll throw out that I learned about tup 5 years ago and haven’t looked back. Builds faster, automatically discovers dependencies, detects build inconsistencies, built-in watch capabilities, simpler configuration, scriptable in Lua if you need the power. Well worth the time I’ve invested, and I’m glad the author didn’t decide that Make was the final form of build systems.

                    1. 1

                      The inclusion of ES6 on your list is particularly baffling to me because it is backwards compatible with ES5 and comes with some sorely needed improvements like arrow functions.

                      C++ is “backwards compatible” (mostly kinda sorta) with previous versions of itself, and even with C if you squint a bit–that doesn’t mean that there isn’t a whole bunch of legacy code that also has to be loaded into the mental model of the developer when debugging or expanding.

                      As for arrow functions, they look really spiffy on a slide at a conference, but most of the time their absence doesn’t hurt. In fact, their presence hurts, because it screws up function naming in the callstack. They have different binding rules for this. They don’t have an arguments object.

                      We’ve doubled the cognitive load of making functions in JS, just because it looked like a cute idea at the time. If we deprecated the function keyword, sure, but that isn’t what they did.
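                      A minimal sketch of the two rule sets now in play (the names are illustrative):

                      ```javascript
                      const widget = {
                        label: "widget",
                        // Classic function: this is bound to the receiver at call time.
                        classic: function () { return this && this.label; },
                        // Arrow: this is captured lexically from the enclosing scope, never the receiver.
                        arrow: () => (this ?? {}).label,
                      };

                      widget.classic(); // "widget"
                      widget.arrow();   // undefined: the arrow never sees the object it hangs off

                      // Arrows also lack their own arguments object:
                      function countClassic() { return arguments.length; }
                      const countArrow = (...args) => args.length; // rest parameters are the substitute
                      ```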

                      Javascript classes are a similar misfeature, fixing a problem that didn’t really exist.

                      But an over general position that we shouldn’t try to improve on the status quo for fear that we might make a mistake isn’t the solution either.

                      Again, that’s not the optics to view the problem under. We can and should try to improve the status quo–Promises, for example, were a good addition to Javascript that fit neatly in the existing language framework and solved a real problem (callback hell).

                      It’s not that I’m saying “we’re worried we’ll make a mistake”. I’m saying “we’re going to bury ourselves under a mountain of incompatible solutions and make extra work for ourselves”.

                      How would you propose to allay my concern?

                      And remember, my concern is not allayed by saying “Well, any individual tool…” because in the value system I’m representing I don’t care about how perfect any individual tool is. I care about having to know and maintain proficiency in lots of different tools for only negligible gain.

                      There is nothing in the recent history of web front-ends or PL circles to suggest conservatism of the sort I’m advocating. There is nothing to suggest a modicum of restraint or taste. There is nothing to suggest that there is anything other than bloated innovation fueled by burning VC money and hopes of getting a conference talk.

                      This…this must be how suckless.org and OpenBSD people feel. :(

                      1. 7

                        Having different rules for this is a big part of the reason arrow functions are useful. And you can’t deprecate the function keyword because the old this behavior is necessary for object methods. If you think that suggests a fundamental problem with the design of JS, then welcome to the club: it’s why I’m excited for WASM.
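                        A small sketch of both halves of that point (the class is illustrative): method syntax needs the dynamic this of function-style definitions, while an arrow’s lexical this is exactly what a callback inside a method wants.

                        ```javascript
                        class Ticker {
                          constructor() { this.ticks = 0; }
                          // Method syntax: this is the receiver at call time, so deprecating
                          // that behavior would break every object method.
                          tick() { this.ticks += 1; return this.ticks; }
                          tickMany(times) {
                            // The arrow's lexical this is the Ticker instance; a function () {...}
                            // callback here would get a different this and lose the instance.
                            [...Array(times)].forEach(() => { this.ticks += 1; });
                            return this.ticks;
                          }
                        }
                        ```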

                        Sorry you feel so pessimistic about the future of software. I don’t think there is anything I can say to allay your concerns.

                      2. 2

                        There is nothing in the recent history of web front-ends or PL circles to suggest conservatism of the sort I’m advocating. There is nothing to suggest a modicum of restraint or taste. There is nothing to suggest that there is anything other than bloated innovation fueled by burning VC money and hopes of getting a conference talk.

                        How about Go? Its design seems to be full of restraint and tasteful minimalism, and it’s quite successful. It’s not a great hit in the academic PL circles, possibly exactly for these reasons.

                      3. 1

                        There is nothing in the recent history of web front-ends or PL circles to suggest conservatism of the sort I’m advocating. There is nothing to suggest a modicum of restraint or taste.

                        I had great hopes that Scala was going into the right direction, but it has become exceedingly clear that the faction that cared about quality, minimalism, orthogonality and user experience lost completely against those who want to add more and more features.

          2. 4

            How many man-decades of work were put into that tooling, one wonders? Also, “easy to read” != “easy to maintain and modify”.

            I don’t think this issue should be thought of in such categories: “imagine how good JS would be if everyone would just focus on JS, instead of trying to invent different things”. We would still be in the stone age with very sharp rocks if that were the case! ;) This is the same argument against GNU/Linux and its world of many distributions. Well, imagine how awesome a GNU/Linux distro would be if everyone just focused on one distribution instead of forking new ones? It doesn’t work this way; everyone has different needs. One person requires the distribution to be fully automatic, another requires it to be mostly manual. Same with the development ecosystem.

            Vanilla JS has worked decently for 20 years. SQL for over 40. Sometimes a working solution is enough.

            Should we switch back to Fortran? It worked well. Also, I would argue about JS working decently in the first years of its existence. Before jQuery and similar, writing multibrowser JS code sometimes required simply writing multiple copies of the same code for the different JS dialects in different browsers.
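            As a concrete reminder of that era, here is a hedged sketch of the per-browser branching that attaching a single event handler used to require. The addEventListener / attachEvent split was real; the “elements” below are plain objects standing in for DOM nodes so the sketch runs anywhere.

            ```javascript
            function addHandler(el, type, fn) {
              if (el.addEventListener) {
                el.addEventListener(type, fn, false); // W3C standard path
              } else if (el.attachEvent) {
                el.attachEvent("on" + type, fn);      // old Internet Explorer path
              } else {
                el["on" + type] = fn;                 // DOM level 0 fallback
              }
            }

            // Fake elements that record which API branch was taken:
            const modern = { used: null, addEventListener(type) { this.used = "addEventListener:" + type; } };
            const oldIE = { used: null, attachEvent(type) { this.used = "attachEvent:" + type; } };

            addHandler(modern, "click", () => {});
            addHandler(oldIE, "click", () => {});
            ```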

            They coped with it by killing native apps and moving onto the Web. And are trying to move the Web back to the desktop/mobile/server with “cross-platform” JS debacles like Electron and NW.JS.

            What apps are you talking about? My file manager? My virtualization platform? My terminal emulator? My Office suite? My music player? My video editor? My games? The browser itself is a native app as well. There are web alternatives for some apps, but they feel like demo versions of the “real deal”.

            Also, “native guys” aren’t trying to bring web apps to the desktop. The “web guys” are trying to push it; people who got into web development now want to create standalone desktop apps, so they’re bringing their environment with them. I don’t think this is a bad thing. The more people there are in the development world, the better. Now Electron is a trend, but this trend will eventually run its course and evolve; we’ll see how. Maybe the trend will be to rewrite Electron apps into native versions, who knows. The thing is that the popularity of Electron isn’t something that ends some kind of era. It’s a tool that helps web guys reach into the desktop world.

            Works on my machine, sorry I guess? Most users don’t even know what slow is, compared to ten years ago (or God forbid 20!).

            Well it doesn’t work properly on my machines. I’m not satisfied and I think it should be better. Also I don’t really care if someone doesn’t realize web apps are slow. I realize this, and this is what matters to me.

            And those big companies are going to write even more slow and bloated shit once they have WASM, because they can bring over their crufty codebases wholesale.

            Agree, this will happen. But it’s already happening with JS, so I don’t really see any changes here.

            1. 1

              “imagine how good JS would be if everyone would just focus on JS, instead of trying to invent different things”

              See, that’s incorrect. You’ve overlooked the entire category of things you build with JS–the things we build with the tools. The tools should not be the primary, or even secondary, objective. A better statement would be “imagine how much more cool applications and stuff we could build if we didn’t keep retooling”.

              Should we switch back to Fortran? It worked well.

              For serious numerical work, that’s still the tool used very frequently. Because it works. BLAS/LAPACK are proof of the value of not fucking around greatly with tooling once you have something that works well enough.

              What apps are you talking about? My file manager? My virtualization platform? My terminal emulator? My Office suite? My music player? My video editor? My games?

              Google apps and Dropbox would be the obvious counterexample to you here.

      2. 12

        This magical market segment of “Man, I really want to run heavy compilers and numerical codes in the browser” doesn’t really exist.

        I want to make games and run them on the web, so that people don’t need to download them and they run on all devices.

        I also want to use the browser for genetic programming.

        For both of these endeavors I am bottlenecked by the JS memory model, so I would prefer to use something like Rust.

        Do I not exist?

        1. -1

          For both of these endeavors I am bottlenecked by the JS memory model, so I would prefer to use something like Rust.

          How? How exactly are you bottlenecked?

          1. 5

            If you want to make games that run at 60fps, uncontrollable stop the world garbage collection is far from ideal and introduces hiccups.

            Even if you avoid all allocation at runtime I would still like more control over how my memory is managed.

            As for genetic programming, the more cycles you can squeeze out of your code, the better the result you can produce. You can only optimize your JavaScript so far. I want more cycles.

            1. 0

              It is entirely possible to write arena allocators in JS, even going so far as to support malloc-style tricks using TypedArrays if you really want to go there.
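              A minimal sketch of what that might look like, assuming a bump-pointer arena over a single Float64Array (Arena, alloc, and reset are illustrative names, not any real library’s API):

              ```javascript
              class Arena {
                constructor(capacity) {
                  this.buf = new Float64Array(capacity); // one up-front allocation
                  this.top = 0;                          // bump pointer
                }
                alloc(n) {
                  if (this.top + n > this.buf.length) throw new RangeError("arena exhausted");
                  const view = this.buf.subarray(this.top, this.top + n);
                  this.top += n;
                  return view; // a view into the buffer (the wrapper itself is still a small GC'd object)
                }
                reset() { this.top = 0; } // e.g. once per frame: reclaims everything at once
              }

              const arena = new Arena(1024);
              const position = arena.alloc(3); // room for [x, y, z] this frame
              position[0] = 1.5;
              arena.reset(); // next frame reuses the same memory, no collection needed
              ```

              This avoids per-frame heap churn, though the subarray views are themselves small allocations; a stricter design would hand out raw offsets instead.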

              In the GP case, you probably should be using real iron once you’ve settled on a proof-of-concept.

              1. 9

                […] if you really want to go there.

                Who wants to actually go there? After some point, you should stop adding hacks to a system, and just create a new system. I see WebAssembly as a response to the overwhelming amount of things being forced onto/into JavaScript.

              2. 4

                In the GP case, you probably should be using real iron once you’ve settled on a proof-of-concept.

                I’d have said that a while back. These days I wouldn’t because of all the good demos of tech that are usable as a web application without installing anything. A GP demo that took no upfront investment might convince some people to dig deeper into it. It might need to be a web application for a lot of them these days. That means it also needs to be fast given more CPU = better results. Stuff like that might justify a faster web solution.

      3. 3

        Thanks for that. I have to admit I’d not seen this coming. I thought WASM would simply be a vehicle for other application types we otherwise wouldn’t have seen to come to the browser, not an excuse to throw out the entire web ecosystem and replace it with native compiled everything.

        Do you actually think people will throw out the baby with the bathwater like this? I’d bet there’d be some sincere resistance across the board if they tried.

        1. 2

          Do you actually think people will throw out the baby with the bathwater like this?

          Have you met the web developer community?

          It’s a constant struggle of wits–the half-wits and the nit-wits and the dim-wits.

          And I say this as somebody whose livelihood depends on that ever-increasing fuckery.

          1. 3

            Sympathies. Sincerely. I couldn’t do professional web dev nowadays, I think I’d go mad.

            I was thinking though - mobile. I don’t see WebAssembly taking over the mobile platform - the performance constraints should keep it away for a long time to come.

            1. 0

              Ah, that’s the insidiousness of it, though!

              The big plug will be this:

              “Use WASM and you’ll get better performance/battery-life/etc. since our phone/browser has a compiler/interpreter that magically knows how to optimize the bytecode for the platform it’s running on.”

              Or worse, you could expect that with code-signing and modular scripts, browser vendors (read: GOOG) would be enabled to offer specific fallbacks for modules matching the signature of jQuery or something. Magically, there is more vendor lock-in.

              We fell for AMP, why not this?

      4. 2

        If performance is the important thing, you pull on your big kid pants and write C/C++/D/Rust and, if you’re unlucky, some assembly, or you bust out some CUDA or OpenCL. If performance isn’t important, you just write in JS. This magical market segment of “Man, I really want to run heavy compilers and numerical codes in the browser” doesn’t really exist.

        While I agree with you in principle, I like to hold out the hope that in the future we might see AAA games released, not for Windows, or macOS, or Linux…but for “the Web”. And then anyone can play.

        1. 3

          And then anyone can play.

          Sure, and then if the server goes away, they never get to play again. Let’s put our playthings into the state of permanent transience!

          I have a copy of Half-Life that’s nearing 20 years old, and it still works. I can still bust it out at LAN parties.

          AAA games in the browser only serve to further indenture and impoverish users.

          1. 2

            This already happened to me with a game where I used my Xbox Live profile to play single-player. I temporarily lost Live access. Then I couldn’t use any single-player progress: I had to restart with an offline account, or only use offline accounts to begin with. Ridiculous.

          2. 2

            Well, that’s not necessarily true. It’s possible to make games that would be written for “the Web”, and would even work at LAN parties or merely locally. Of course, the question is then whether or not people/companies would make games in such a fashion.

        2. 2

          And then anyone can play.

          Anyone who has a browser which implements WASM correctly, completely, and with the needed efficiencies in graphics and processing, input devices, etc. Will that include TempleOS or Haiku? Probably not.

          1. 5

            True, but I’d say Haiku will get a correct WASM environment long before it can run native Windows binaries. Baby steps. :)