1. 3

    Rust itself runs tests in CI on every pull request that are required to pass. For example, https://travis-ci.org/rust-lang/rust/builds/410084149

    I can see why in a project as young as Rust maintaining different distributions’ forked build and test scripts for the compiler could be prioritized lower than e.g. language-level improvements.

    1. 6

      Yes, Rust CI runs tests on x86_64. Hence lots of breakages on other Debian architectures.

      I agree with you on the priority, but I am of the opinion that you can’t have it both ways. Either Rust should address test failures on other architectures, or Rust should stop claiming C level portability. Rust portability is “almost” there, but not there yet because other architectures are not maintained well.

      1. 12

        Can you please point out where Rust “claims C level portability”?

        1. -3

          This is one of those things where it doesn’t matter what the “official” stance is, but what the popular interpretation of that stance is. Certainly a number of people are writing Rust replacements for old C bastions, like grep or the coreutils. This, along with the RIIR meme, spreads the idea that Rust is a viable replacement for C in all situations, including portability.

          If the RESF wants people to understand that Rust isn’t ready to replace C yet, they need to say clearly that it’s not ready, instead of being silent on the point about portability claims and saying they never said that.

          1. 7

            > This is one of those things where it doesn’t matter what the “official” stance is, but what the popular interpretation of that stance is. Certainly a number of people are writing Rust replacements for old C bastions, like grep or the coreutils. This, along with the RIIR meme, spreads the idea that Rust is a viable replacement for C in all situations, including portability.
            >
            > If the RESF wants people to understand that Rust isn’t ready to replace C yet, they need to say clearly that it’s not ready, instead of being silent on the point about portability claims and saying they never said that.

            I realize you’re probably just going to go bitch about me on IRC again, but your comment smells like a load of bullshit to me. Rust’s supported platform list looks like a pretty good indicator to me of what the platform support looks like. In fact, it looks like the exact opposite of “silence.”

            > saying they never said that.

            I actually didn’t say that. I asked where we claimed C level portability. If we were doing such a thing, I’d want to know about it so we can correct it. Which is exactly the thing you’re blabbering on about. But no. Instead, I get RESF thrown in my face.

            Lose, lose. Thanks for playing.

            1. -1

              Aha, I also see this:

              > We do not demand that Rust run on “every possible platform”. It must eventually work without unnecessary compromises on widely-used hardware and software platforms.

              So I guess the current situation with failing tests is entirely intentional but not well-known. Well, it should be better known.

        2. 1

          I agree that Rust needs a better multi-architecture story - as someone who does embedded development, I’ll play with Rust but I’d be very wary of using it “for real” - but lack of serious support for non-x86 [EDIT: non-Windows/-Linux/-Mac] is pretty well-documented.

          [EDITed in response to sanxiyn’s clarification, thanks!]

          1. 1

            The Rust Platform Support page is pretty clear: x86 is Tier 1, anything else is not.

      1. 17

        WebUSB is a mistake.

        And with WASM, we will have even less chance of catching malware that can leverage it.

        1. 12

          Do not fear, people will implement WASM time-sharing systems, so you can not only execute random people’s code on your machine, you can also run a WASM anti-virus solution alongside!

          1. 6

            What’s the connection to WASM?

            1. 3

              The dangers of exposing APIs like web USB are compounded with performant and inscrutable blobs run in the browser. Thus, WASM exacerbates these issues.

              1. 13

                Is WASM more inscrutable than obfuscated JS?

                My experience is that we suffer far more from the fact that we have no idea when a payload is delivered, since a web server can serve distinct content to every viewer, than from the fact that some payloads are difficult to untangle.

                1. 3

                  I’ve seen arguments like this before but never fully understood them. It seems to me like asm.js is just as inscrutable as WASM, but it’s more annoying to work with for a couple reasons:

                  • It’s fast, but somewhat inconsistently so compared to WASM
                  • Large download size

                  Not to mention all of the minifiers and manglers that exist for conventional JS. Why the WASM hate? It seems more useful to programmers than the alternatives, and we’re already paying the security cost of running untrusted executable code from the internet in browsers today.

                  1. 2

                    asm.js is similarly gross, but people appear to be moving to its successor WASM.

                    Reversing minified and mangled JS is, I submit, a different level of inconvenient from reversing bytecode, especially bytecode that can suddenly leverage other language ecosystems’ obfuscation tools and techniques. Just because they’re different levels of inconvenient doesn’t make one more acceptable than the other.

                    As for the security cost: look, a lot of attacks and nastiness open up once you can leverage that improved performance. Spectre/Meltdown were directly enabled by better performance primitives for timing and by shared array buffers, and yet some people refuse to acknowledge the problems those pose by their very existence.

                    I’ve griped about this all before, and at this point I’m basically resigned to the idea that fanboys and nerds more excited about performance and shiny and their chance to leave their teeny mark on the web ecosystem than about user security and rights and conservative engineering are probably going to win on this in the end.

                    :(

                    1. 4

                      I get the woes of security on the web — it’s really, really hard to make running untrusted code secure, especially with the “dancing pigs” problem. My point with asm.js, though, was that WASM doesn’t add anything new: before WASM, people were compiling to a fast subset of JavaScript, and that was equally difficult to decompile. And that really puts the problem squarely back in the “running untrusted code securely is hard” camp: if you were a browser vendor, what would you do? Any language will have fast paths (and as a vendor you’re also incentivized to make those paths very fast), and if you enforce running only a single language, people can always compile to the set of operations that are fast in that language. WASM is an improvement over the ad-hoc version, at least.

                      But yeah, definitely get that security on the web is hard :(
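
                      For concreteness, here’s a hand-written sketch of what that “fast subset of JavaScript” looked like in the asm.js era (the module name is made up; an engine without asm.js validation just runs it as ordinary JavaScript):

                      ```javascript
                      function AsmAdd(stdlib, foreign, heap) {
                        "use asm"; // hints to the engine that this module sticks to the typed asm.js subset
                        function add(a, b) {
                          a = a | 0; // coerce to int32, the asm.js way
                          b = b | 0;
                          return (a + b) | 0;
                        }
                        return { add: add };
                      }

                      // Engines without asm.js support simply run this as plain JavaScript.
                      const { add } = AsmAdd({}, {}, new ArrayBuffer(0x10000));
                      console.log(add(2, 40)); // 42
                      ```

                      The `| 0` coercions are what made this subset both fast for engines and roughly as opaque to a human reader as compiled output tends to be.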

              2. 4

                I can see your fear, but it might be unfounded. WASM doesn’t have access to all the Web Platform APIs; that is not how it works. The WASM “ISA” is specified, and it doesn’t have access to stuff outside it; you might be curious to check the specs at https://webassembly.github.io/spec/

                Since the WASM file formats (both the bytecode one and the text one, which is based on S-expressions) are easy to parse, it is not too far-fetched to have static analysers checking the code.

                WASM doesn’t have access to the file system or sockets or even the DOM, among other limitations. It is basically a faster way to number-crunch and/or port existing code written in other languages. All those side-effecty things need to be proxied through JS and the Web Platform, which will ask for permissions and sandbox a ton of it.
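
                As a sketch of that proxying model (the module bytes below are a hand-assembled toy, not from any real project): a WASM module can only reach the outside world through functions the host explicitly hands it in the import object.

                ```javascript
                // A tiny hand-assembled WASM module: it imports one host function
                // ("env"."log") and exports "run", which calls log(42). It has no
                // other way to touch the outside world.
                const bytes = new Uint8Array([
                  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm" magic + version 1
                  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->() and ()->()
                  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00, // import env.log
                  0x03, 0x02, 0x01, 0x01, // one local function, of type ()->()
                  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export it as "run"
                  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b, // body: i32.const 42; call log
                ]);

                // The host decides exactly which capabilities the module gets:
                WebAssembly.instantiate(bytes, {
                  env: { log: (x) => console.log('module says:', x) },
                }).then(({ instance }) => {
                  instance.exports.run(); // prints "module says: 42"
                });
                ```

                Leave `log` out of the import object and instantiation fails with a link error — the module cannot conjure capabilities the host didn’t grant.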

                In my humble opinion, I am much more comfortable executing JS/WASM things on the client side than trusting arbitrary SaaS backends with my data. I know what the Web Platform has access to and what I allow it to access.

                I find WebUSB a really nice step forward, as it allows WebAuthn to provide stronger authentication schemes, which are always a good idea.

                1. 2

                  Thanks for the link. I was wanting to learn more about it. The intro is really good, too. Many desirable properties. I bet it was hard to design trying to balance all of that. Usually, that also means a formal specification might uncover some interesting issues.

              1. 3

                I agree with a lot of the points raised in the article: microservices/SOA is ultimately more about scaling your engineering team than scaling your backend. But I’d add one more useful reason for SOA that’s applicable even when you’re small: simple, robust fault tolerance. If one service is failing, you can circuit-break and isolate the failure to that particular set of functionality; when everything’s deployed as a monolith, if one piece of code starts acting up it can be very difficult to prevent the damage from spreading. For example, if someone ships a bug that chews through all the IOPS available on your EC2 machines, or exhausts all available file descriptors, or writes logs faster than you can rotate them, or… etc; it’ll hose even the working code if it’s deployed as one big bundle that services any kind of request.

                With small, independent services running on separate machines, you can isolate these kinds of issues much more cleanly and keep most of the backend up even when something’s gone wrong with a single service. It’s obviously not perfect — what if what’s broken is something centralized like your deployment tools? — but it minimizes a lot of otherwise-scary bugs in practice.
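
                The circuit-breaking idea can be sketched in a few lines (class name and thresholds here are illustrative, not from the article):

                ```javascript
                // Minimal circuit-breaker sketch: after `threshold` consecutive
                // failures, the breaker "opens" and calls fail fast for `resetMs`
                // instead of continuing to hammer the broken service.
                class CircuitBreaker {
                  constructor(fn, { threshold = 3, resetMs = 30000 } = {}) {
                    this.fn = fn;
                    this.threshold = threshold;
                    this.resetMs = resetMs;
                    this.failures = 0;
                    this.openedAt = null;
                  }

                  async call(...args) {
                    if (this.openedAt !== null) {
                      if (Date.now() - this.openedAt < this.resetMs) {
                        throw new Error('circuit open: failing fast'); // isolate the broken service
                      }
                      this.openedAt = null; // half-open: let one request probe the service again
                    }
                    try {
                      const result = await this.fn(...args);
                      this.failures = 0; // a success closes the circuit
                      return result;
                    } catch (err) {
                      if (++this.failures >= this.threshold) this.openedAt = Date.now();
                      throw err;
                    }
                  }
                }
                ```

                Wrapping each downstream call site this way is what lets the rest of the backend keep serving while one dependency is down.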

                1. 6

                  > it’ll hose even the working code if it’s deployed as one big bundle that services any kind of request.

                  You can still have separate machines that only handle certain types of requests, or are optimised for different workloads, with a monolithic codebase.

                  I quite like building things that way, and basically build each “microservice” as a separate library, to enforce modularity. Then, link them all together. These can also be tested separately. I admit I have not tried this at massive scale, however.
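
                  That “separate libraries, linked together” approach might look something like this (the service names are made up for illustration):

                  ```javascript
                  // Each "microservice" is a plain module with a narrow interface,
                  // testable in isolation, with dependencies passed in explicitly.
                  function makeUserService() {
                    const users = new Map();
                    return {
                      create(id, name) { users.set(id, { id, name }); return users.get(id); },
                      get(id) { return users.get(id); },
                    };
                  }

                  function makeGreetingService(userService) {
                    return {
                      greet(id) {
                        const user = userService.get(id);
                        return user ? `Hello, ${user.name}!` : 'Hello, stranger!';
                      },
                    };
                  }

                  // "Link" them together inside one monolithic process:
                  const users = makeUserService();
                  const greetings = makeGreetingService(users);
                  users.create(1, 'Ada');
                  console.log(greetings.greet(1)); // "Hello, Ada!"
                  ```

                  Because each module only sees the interfaces it’s handed, splitting one out into a real network service later is mostly a matter of swapping the wiring.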

                  1. 2

                    On top of alva’s comment, some setups have features to both restrict resource use and detect craziness that indicates a bug. They’ll take action ranging from notifying an admin to halting the application. Instances not using that buggy functionality will be unaffected. From there, the admins might put in a temporary filter that makes packets calling that function fail fast before they even reach the instances’ buggy code. This is removed after the application is patched.