1. 4

    Show of hands, who is still using Scala and what for?

    1. 15

      I’m still using it, have complicated feelings, and am not sure for how much longer. But I am quite impressed with the work they’re doing for Scala 3 and root for them to make it a better language and ecosystem.

That said, I find it pretty off-putting that you show up in every single post about Scala here to say something negative, or at least to fish for negativity. If you don’t like Scala, you’re welcome to abstain from these discussions and instead focus on positive new developments in other languages you do like.

      1. 5

        ✋ using Scala for Spark, some web services deployed to Kubernetes, and some CLI tools. It’s definitely my primary language for building my own more complex tools, but I reach for Ruby and Shell for smaller stuff and Rust when I feel like it. I maintain a ~20,000 SLOC Rails app for a side business and have on more than one occasion threatened to rewrite the damned thing in Scala because the problems I continually encounter are problems solved from inception in Scala (e.g. type safety). It won’t happen because my partner strongly prefers Ruby and Rails, though.

I’m excited about Scala 3 primarily because of the improvements to implicits. Breaking them out into clearer keywords is a dramatic improvement in explicitness, something I really value in languages. When my team at a former employer adopted Scala, we had rules limiting our use of implicits because they were such a sharp edge. That was 2012-2013, and the tooling just wasn’t helpful in tracking down compilation problems and runtime errors. I’ve taught Scala to dozens of people, mostly with a Java, Ruby, or C++ background, and the two hardest concepts to convey were implicits and co- and contravariance.
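For anyone who hasn’t followed Scala 3, here’s a small sketch of what that keyword split looks like (`Point` and its ordering are invented for the example):

```scala
// Scala 2 overloaded a single keyword for both sides of the mechanism:
//   implicit val ord: Ordering[Point] = ...                        // definition
//   def sortPoints(ps: List[Point])(implicit ord: Ordering[Point]) // parameter
// Scala 3 splits the intent into dedicated keywords.

case class Point(x: Int, y: Int)

// `given` declares an instance (previously `implicit val`)
given Ordering[Point] = Ordering.by(p => (p.x, p.y))

// `using` marks a contextual parameter (previously `implicit` in the list)
def sortPoints(ps: List[Point])(using Ordering[Point]): List[Point] =
  ps.sorted
```

The definition site and the consumption site now read differently, which is exactly the explicitness being praised above.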

        1. 3

Medium-sized software company in the telecom sector (~1k employees), with maybe 100-200 full-time Scala devs. It’s not exclusive; there’s a lot of Python, Java, Ruby, JS, etc. We maintain a tech radar to keep track of the languages we use. Since code runs inside containers on cloud VMs, it could be in Fortran for all I care, as long as there’s a team (or two) to support it.

I’ve always enjoyed the language, but its principal downside is that it is very easy to turn it into Haskell, at which point it becomes very hard to move people or teams around. So you have to keep watch that people aren’t creating free monad algebras just because they can. Advanced purely functional Scala code can be totally inaccessible even to an experienced developer coming from another language, and that is a detriment, because sometimes you just need to jump into a maintained codebase, find a bug, and fix it. Granted, Scala 3 will make this easier; there are some very welcome changes in it. Scala itself, as a language, isn’t difficult: it usually takes a week or so to get a Java developer productive with it (maybe more for other backgrounds).
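As a hedged illustration of that gap (all names here are invented for the example), compare a direct-style function with the same logic hidden behind a higher-kinded abstraction:

```scala
// Direct style: readable by anyone with a week of Scala
def parseAndDouble(s: String): Option[Int] =
  s.toIntOption.map(_ * 2)

// The same doubling logic abstracted over an arbitrary effect F[_],
// a miniature "tagless final" encoding. Flexible and principled, but
// opaque to a developer parachuting in from another language to fix a bug.
trait Functor[F[_]]:
  def map[A, B](fa: F[A])(f: A => B): F[B]

given Functor[Option] with
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)

def doubled[F[_]](fa: F[Int])(using F: Functor[F]): F[Int] =
  F.map(fa)(_ * 2)
```

Neither style is wrong; the maintenance question is whether the extra abstraction is paying for itself in a given codebase.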

          Then again I’ve always considered languages as tools for building interesting things. Productivity is what counts. The moment my team is more productive in another language we could switch if it’s worth the investment.

          Software doesn’t really have to be that good to be useful. Speed of development and iteration is more important than the strength of the type system. Sometimes it’s just cheaper to fix bugs due to bad language design than it is to understand abstractions in an expressive type system.

        1. 4

          I’ve been writing software professionally for RISC-V for two years now, and it’s been really satisfying watching the architecture grow in capability and community over that time. Probably the best part, especially in these circumstances, is knowing that the fate of the architecture isn’t tied to the success or failure of any one company. If anything, the lull gives us time to stop sprinting and focus on fundamentals, which is super important when so much of what we do has some aspect of “reinventing the wheel”.

          Here’s to many more years, and here’s to open standards!

          1. 2

            I’m curious, if you’re able or willing to disclose: where do you work? What are you writing that’s targeting RISC-V? How big is your team? How many others do you estimate are also targeting RISC-V? I have no idea to what extent RISC-V is seeing uptake in the industry, and am genuinely curious to learn.

            1. 4

              I’d rather not disclose my employer or details related to them, but as for the last part of your question:

There’s massive demand throughout the industry for RISC-V, but it’s all invisible unless you’re in and around R&D departments, mostly for embedded products. Right now, everyone’s trying to build up their platforms on top of RISC-V to eventually enable their end application. That includes everything from boot flows, to tuned linear algebra and DSP libraries, to whatever RTOS the company’s applications are all written for, and so on. It’s taken us a decade to get to where we are simply because of the sheer variety and depth of all these layers of the software stack. It’s turtles all the way down (hence my username, haha).

              As these software stacks mature over the coming decade, you’ll probably see more and more products running on RISC-V processors. Or you might not even notice, since they’ll be going into places you don’t even think about computers existing. The motor controller in your cordless drill, your TV’s remote control, and of course the wave of IoT you didn’t ask for.

          1. 20

This is technically 100% true but completely misses the point (ahem). Yes, Android and iOS sandbox their apps and have a better security model to prevent applications from accessing each other’s data. But the entire reason this is necessary is that you’re essentially running untrusted, user-hostile applications on your device. To me that’s pure madness and a never-ending arms race which, indeed, requires very high levels of security. Besides, many apps ask for far too many permissions (the typical example being a flashlight app demanding access to your contacts and the network), and the majority of users are happy to just click OK anyway, because it’s entirely unclear what they are saying OK to.

            Now, there’s still something to be said for sandboxing even of trusted applications, to prevent them from accessing your data after having been exploited through a vulnerability. For this, I’d love to see something like OpenBSD’s pledge on Linux.

            I, for one, am very happy that there are new developments outside the Android/iOS duoculture.

            The non-software parts of the article do make some sense, because we still cannot trust hardware manufacturers, but this just supports my initial point: if you can trust the software (or firmware) not to be actively spying on you, a lot of these security measures are unnecessary. Then, kill switches just become some additional measure to know for a fact that your device isn’t accidentally recording, and to protect against external actors trying to track you via your bluetooth/wifi MAC address.

            1. 10

              But the entire reason this is necessary is because you’re essentially running untrusted, user-hostile applications on your device. To me that’s pure madness and a never-ending arms race which, indeed, requires very high levels of security.

Applications do not need to be user-hostile to be considered untrusted. They merely need to not be formally proven.

Once you notice this, you can see the value of mitigations (such as OpenBSD’s pledge/unveil/W^X/layout randomization), sandboxing (as done by Android and Docker), and systems designed to enable actual least privilege, which pretty much implies capabilities (not to be confused with POSIX capabilities) and a pure microkernel, multiserver design.
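To make “capabilities” concrete, here’s a toy, library-level sketch in Scala (invented names; real capability systems enforce this at the OS or runtime level rather than by convention): a component can only read under a directory it was explicitly handed, because nothing else can construct the capability.

```scala
import java.nio.file.{Files, Path}

// Ambient authority (what most code has today): any function anywhere
// may read any file, e.g. Files.readString(Path.of("/etc/passwd")).

// Capability style: file access flows only through a value the caller
// was explicitly handed. Least privilege falls out of the design.
final class DirReadCap private (root: Path):
  def read(name: String): String =
    val p = root.resolve(name).normalize
    require(p.startsWith(root), s"path escapes capability root: $name")
    Files.readString(p)

object DirReadCap:
  // Only the composition root of the program should mint capabilities.
  def forDirectory(dir: Path): DirReadCap =
    new DirReadCap(dir.toAbsolutePath.normalize)
```

The private constructor is doing the work here: code that receives a `DirReadCap` for one directory cannot widen it into authority over anything else.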

              This is why Google is working on Fuchsia (as they’ve hit the limits on what can be done with Linux), and Huawei on HarmonyOS.

              1. 8

                This is why Google is working on Fuchsia (as they’ve hit the limits on what can be done with Linux), and Huawei on HarmonyOS.

                I wonder how many of those limits are related to GPL2 more than technical reasons…

                1. 8

If it was about the license, they’d just save effort by reusing BSD/MIT-licensed code.

                  1. 3

Yes, I suspect porting the Android userland over to a BSD-derivative kernel would be a vastly easier task than writing a whole new OS. FreeBSD already has a Linux-compatible ABI that supposedly works fairly well, if that’s even necessary.

                2. 5

                  They just need not to be formally proven.

Just insecure. Formally proven apps can be insecure if what’s proven doesn’t block the attack vector. An easy example is formal correctness or memory safety not stopping information leaks from shared resources. Even if the software does, the steady stream of hardware-based leaks means verified software is no guarantee.

                  That we probably won’t see them get that under control means anything on complex, insecure hardware must be considered compromised with security measures just limiting damage of this versus that component, attacker, or whatever.

                  Edit: Your other comment mentioned seL4 is proven to do separation. It’s proven to do so under a number of assumptions. Some are false. So, it can’t do separation in those cases.

                  1. 5

                    Just insecure.

                    Right. My intent was to say that a non-formally-proven app should always be assumed insecure, and sandboxed accordingly.

Which isn’t to say we should disable the sandboxing once an app gets formally proven; we should be sandboxing everything that can be sandboxed. Working with capabilities is the only way forward that I see.

                    1. 4

                      That all sounds much better. Layer by layer, component by component, prevent what we can, and catch the rest. :)

                  2. 3

                    This is why Google is working on Fuchsia (as they’ve hit the limits on what can be done with Linux)

                    Fuchsia smells like a Senior developer retention program to me.

Given Google’s focus, even if that project were serious, what’s the chance it would still be alive 3 years after shipping?

                    1. 8

Fuchsia has been in active development for several years now; the Android runtime has been running on it for a few of those years, and work has continued throughout.

                      I very much doubt that it isn’t the operating system Google plans to use as base for pretty much everything in the not so distant future.

                      1. 2

                        You mean like Google+?

                  3. 6

                    But the entire reason this is necessary is because you’re essentially running untrusted, user-hostile applications on your device.

                    Suppose you’re wrong, just once, and some FOSS developer betrays your trust. Maybe not even that - maybe the download server for one of their dependencies gets trojanised and it’s built into a binary you’re running.

                    Problem 1: If the app wasn’t sandboxed you have now been comprehensively owned. Every secret on your account is now void.
                    Problem 2: There’s no reason you would notice if problem 1 occurs.

                    I too am pleased to see developments outside Android/iOS but I can’t get on board with the notion that we should lower our guard because software hasn’t come from a corporation with corporate interests.

                    1. 3

                      Fair point!

                      Now as far as I can tell, Android is Linux, so it shouldn’t be fundamentally impossible to port the Android security model / sandboxing system to Librem, even though this post seems to imply that Android is somehow completely different from it and inherently more secure.

                      1. 2

                        As far as I can tell, there’s so little “Linux” in Android you really would need to do a lot of hard work to get a similar system running.

                        And all the different little Linux-based phones would have to set aside their differences concerning distros and whatnot and agree on how the kernel should be patched to enable whatever features this new userland requires.

                        Sadly many of these Linux-based phones run old Android kernels because the hardware manufacturers never open-sourced their drivers.

                        It’s a complete world of pain and grief, which no one would invest in because the market leaders are so huge.

                        Despite that I’m still a Sailfish user.

                        1. 2

                          Sadly many of these Linux-based phones run old Android kernels because the hardware manufacturers never open-sourced their drivers.

                          Somewhat true. They open sourced their kernel forks, but not the userland drivers.

                    2. 1

                      100% agree on this.

I’d rather use unsandboxed applications from people I trust than sandboxed applications from people I don’t trust. It’s the same with distribution repos vs. Flatpak.

                    1. 3

This is very nice in theory, and having a simple queuing mechanism without any additional infrastructure is appealing. BUT, it has several sharp edges:

• It plays poorly with connection pooling.
• If you’re using it on the JVM, the driver you linked to has several open issues and is not fully compatible with the official PostgreSQL JDBC driver.
• That driver also seems to have undefined or hard-to-understand behavior around network partitions and dropped connections.
• Notifications can be “blocked” by long-running queries on the same connection.
• While it is useful to take advantage of transactions when sending messages, transactions are per-connection, which interacts badly with the aforementioned problem of queries sharing a connection.

                      All in all, this feature was useful but a bit painful and we ultimately ended up dropping it.
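For anyone weighing the same trade-off: the usual workaround for the pooling problem is to hold one dedicated, long-lived connection outside the pool just for LISTEN. A hedged sketch follows; the pgjdbc-specific polling is left in comments since it needs a live database and the `org.postgresql` driver on the classpath, and channel names can’t be bound as SQL parameters, so they have to be validated as identifiers first.

```scala
import java.sql.Connection

// LISTEN/NOTIFY channel names are interpolated into the statement and
// cannot be bound as parameters, so accept plain identifiers only.
def validChannel(name: String): Boolean =
  name.matches("[A-Za-z_][A-Za-z0-9_]*")

// Call this on a dedicated connection that is NOT in the pool:
// notifications are delivered per connection, and any long-running
// query sharing the connection delays their delivery.
def listen(conn: Connection, channel: String): Unit =
  require(validChannel(channel), s"bad channel name: $channel")
  conn.createStatement().execute(s"LISTEN $channel")
  // With the pgjdbc driver you would then poll on this same connection:
  //   val pg = conn.unwrap(classOf[org.postgresql.PGConnection])
  //   val ns = pg.getNotifications(10000) // blocks up to 10s, null on timeout
```

This doesn’t fix the driver bugs mentioned above, but it at least keeps notification delivery out of the pool’s way.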

                      1. 23

                        The cynicism in this thread is pretty stunning. Described here is a plan to design and implement a fine-grained, library-level capability-based security system for a platform that’s seeing massive adoption. While this isn’t novel from a research perspective, it’s, as far as I can tell, the first time this stuff is making it down from the ivory tower into a platform that’ll be broadly available for application developers.

Lots of folks seem pretty scornful about the amount of stuff in current browsers, even though non-browser use cases are explicitly highlighted in this piece. There’s an implication that this is a reinvention of things in BSD, even though the approach is explicitly contrasted with OS process-level isolation in the piece. There’s the proposed alternative of simply using only highly trusted, low-granularity dependencies, which is fair, but I think that ship has sailed, and it also seems like avoiding the problem instead of solving it.

I’m a bit disappointed that a set of tools and standards that might allow us developers to write much safer code in the near future has been met this way, based on what I see as a cursory and uncharitable read of the piece.

                        1. 10

                          and additionally it seems like avoiding the problem instead of solving it.

                          Avoiding a problem is the best category of solution. Why expend resources to fix problems you can avoid in the first place?

                          1. 7

I believe “avoiding” here was meant as “closing your eyes to [the problem]”, rather than the more virtuous “removing the existence of [the problem]”.

In a world where even SQLite was found to have vulnerabilities, I believe any alternative solution based on some handwaved “highly trusted huge libraries” is a pipe dream. Note that in actual high-trust systems, AFAIK, limiting the permissions given to subsystems is one of the basic tools of the trade with which you actually build them. A.k.a. “limiting the Trusted Computing Base”: trying to maximally reduce the amount of code that has access to anything important, isolating it as much as possible from interference, and then verifying it (which is made easier because the previous step reduces the amount of code needing verification).

If you’re interested in recent projects trying to break into the mainstream with a capabilities-based approach, such as (IIUC) the one suggested in the OP, see e.g. seL4, GenodeOS, Fuchsia, and the Pony language.

                            That said, I’m not an expert. I very much wonder what’s the @nickpsecurity’s take on the OP!

                            1. 2

Been pulling long shifts, hence no reply at first. Actually, I agree with some that this looks like a marketing fluff piece rather than a technical write-up; I’d have ignored it anyway. OK, since you asked, I’d like to start with a bottom-up picture of what security means in this situation:

1. Multicore CPU with shared internal state, RAM, and firmware. Isolated programs sharing these can have leaks, especially cache-based ones, with more such attacks on the way. On SMP systems, process-level separation can put untrusted processes on their own CPUs and even DIMMs; the browsers won’t likely be doing that, though. We’ve also seen a few attacks on RAM that start with malicious code running on the machine. These kinds of vulnerabilities are mostly found by researchers, though.

                              2. OS kernel. Attacked indirectly via browser functionality. No different than current Javascript risk. Most attacks aren’t of this nature.

                              3. Browser attack. High risk. Hard to say how WebAssembly or capability-based security would make it different given the payload is still there hitting the browser.

4. Isolation mechanisms within the Javascript engine trying to control access to browser features and/or interactions between Javascript components. The Bytecode Alliance sounds like that for WebAssembly. Note this has been tried before: ADsafe and Caja. I think one got bypassed. The bigger problem was getting wide adoption by actual users.

So, the real benefits seem to all be in 4, leaving 1-3 open. Most attacks on systems have been of type 3. Protecting at level 4 might stop some attacks on users of web pages that bring in 3rd-party content. Having run NoScript and uBlock Origin, I’d agree that a lot of the mess could be isolated at the JS/WA level. Stuff still slips through, though, often because it’s required to run the sites.

You see, most admins running those sites don’t seem to care that much. In many cases, they could avoid the 3rd-party risks but bring them in anyway. The attack opening could then come from content the user allowed interacting with malicious content. Would capability-secure WA bytecode help? Idk.

                              Honestly, all I can do is illustrate where and how the attacks come in. We can’t know what this can actually do in practice until we see the design, the implementation, a list of all the attack classes, and cross-reference them against each other. It’s how you evaluate all of these things. I’d like to see a demo blocking example attacks with sploit developers getting bounties and fame for bypasses. That would inspire more confidence.

                            2. 4

                              Agreed, this is no silver bullet.

                              WASM is nice and has lots of potential. A lot of people when first seeing the spec also thought, “cool that’ll run on the server too over time”. There are use cases where it is interesting. I use it in my projects here and there.

But come on: grand plans of secure micro-services and nano-services made from woven carbon-fiber nano-tubes, and a grandiose name, BytecodeAlliance. It’s like a conversation from the basement of numerous stoner parties. I’m pretty sure a lot of people here have been writing p-code ever since, well, p-code.

                              I don’t want to undermine the effort of course, and all things that improve tooling or foster creativity are greatly appreciated.

They should tone down the rhetoric and maybe work with the WASMR folk. Leaping out with a .org after your name comes with a great deal of responsibility.

There’s a lot more that needs doing with WASM than simply slapping it in a server binary and claiming it is a gift from heaven, when the actual gift from heaven was the Turing machine.

                              All that said, please keep it up. The excitement of new generations for abstract machines is required, meaningful and part of the process.

                              1. 2

                                Almost every programming language and set of tools uses third-party dependencies to some extent these days. There are clear benefits and increasingly clear costs to this practice, with both contingent on tooling and ecosystem. I agree that the node.js model of a sprawling number of tiny dependencies is a quite poor equilibrium; the Rust world seems to be taking this to heart by rolling things back into the standard library and avoiding the same kind of sprawl.

Regardless, third-party dependencies from heterogeneous sources will continue to be used indefinitely, and having tools to make them safer and more predictable to use will probably bring significant benefits. This kind of tool could even be useful for structuring first-party code to mitigate the effects of unintentional security vulnerabilities or insider threats.

                                Saying “just avoid doing it” is a lot like abstinence-only sex ed: people are almost certainly still going to do it, so instead we should focus on harm reduction and a broad understanding of risks.

                                1. 1

                                  but that ship has sailed remember

                                2. 5

I’ve given this some more thought, and I think what raises my eyebrows is a combination of things: it’s a long article on a PR-heavy site that tries to give the impression that this very young technology (WASM) is the only way of solving this problem; it doesn’t mention prior art or previous working attempts (try this on the JVM); and it doesn’t acknowledge what this doesn’t solve or the implications of the model (now you can’t trust any data that comes back from your dependencies, even the ones well-behaved in the eyes of the Alliance).

                                  Every new thing in security necessarily gets a certain amount of cynicism and scrutiny.

                                  1. 9

                                    So, drawing a comparison to the JVM is fair.

                                    IMO, the key differences are:

                                    • How lightweight it is. The overhead of loading the v8 runtime is tiny by comparison to the JVM (on my machine, it takes less than 30ms to run a ‘hello world’ node process).
• That it has been designed for embedding in other programs. Embedding a JVM in a non-Java project is nontrivial at best; embedding a working wasm runtime in a C project is only slightly harder than, say, Lua.
                                    1. 2

                                      True, true. I was thinking something like an experiment to demonstrate how nanoprocess-ifying known vulnerable or malicious dependencies (within the JVM runtime) solves a security issue.

                                  2. 3

                                    We are at the mocking level of acceptance! This means that it will be a thing, a real thing in just a matter of months!

                                    1. 0

                                      Web is gonna Web, so we all know how this will work out.

“Web standards experts” continuously add stuff, and then the next decade is spent plugging the holes they opened, because once something is added, “we can’t break the web”.

                                      If they were supposed to design a toothbrush, people would die.

                                    1. 2

                                      Location: Portland, OR, USA | Remote

                                      Type of Work: Software Engineer

                                      Hours: Full Time, Contract

                                      Contact: PM or stephen.judkins@gmail.com

                                      Description: I’ve spent the last four years writing mostly Scala but have broad experience in the software industry, and am willing to work with all sorts of technologies. I have written front-end code in the past. I’m currently running a small team that focuses on developer ergonomics, build tooling, large-scale refactoring, and support of other developers, but have led teams that have worked with non-technical stakeholders and delivered features. I value clear, robust, reliable, and efficient code. I understand and am comfortable dealing with cryptography-related code. I am familiar with performance-sensitive networking and could be valuable in that context.

                                      1. 4

                                        C: 0.73 new features per year, measured by the number of bullet points in the C11 article on Wikipedia which summarizes the changes from C99, adjusted to account for the fact that C18 introduced no new features.

                                        adjusted to account for the fact that C18 introduced no new features.

                                        And that is why I love C. Yes, it has its problems (of which there are many), but it’s a much smaller, bounded set of problems. The devil I know. Many other languages are so large, I couldn’t even know all of the devils if I tried.

                                        1. 25

                                          The devil I know

if the devil you know is an omnipotent being that generations of warriors have fought and never beaten because it’s just too powerful, then I would argue it might be worth choosing a different evil to fight.

Even in 2019, 70% of security vulnerabilities are caused by memory-safety issues that simply would not happen if the world weren’t running on memory-unsafe languages.

                                          1. 1

                                            I don’t think being memory safe is enough for a programming language to be a good C replacement.

                                            1. 18

                                              No. It’s not enough. But IMHO it’s required.

                                              1. 4

                                                … a requirement that C, incidentally, does not fulfil. Now that memory-safe low-level languages have swum into our ken, C is no longer a good C replacement. ;-)

                                                Edited to add a wink. I meant this no more seriously than pub talk – though I believe it has a kernel of truth, I phrased it that way mainly because it was fun to phrase it that way. There are many good reasons to use C, and and I also appreciate those. (And acknowledge that the joke does not acknowledge them.)

                                                1. 3

                                                  that is my point.

                                                  1. 2

                                                    Hi, sorry, I spent a lot of time on my edit – everything below the line plus the smiley above it wasn’t there when you replied. Sorry to readers for making this look confusing.

                                                    It is indeed your point, and I agree with it.

                                              2. 6

                                                Nobody is arguing that it’s sufficient, but it is necessary.

                                                If I were to develop a new language today, a language that was as unsafe as C but had lots of shiny new features like ADTs and a nice package manager and stuff, I’d never get traction. It would be ridiculous.

                                                1. 1

                                                  I don’t know. PHP and C are still pretty popular. You just target those markets with selective enhancements of a language that fits their style closely. ;)

                                              3. 1

Doesn’t WebAssembly allow unchanged C to be memory safe?

                                                1. 2

Sort of, but not really. Unmanaged C isn’t allowed to escape the sandbox it’s assigned, but there are still plenty of opportunities for undefined behavior. Process-level isolation in OSes provides similar guarantees. In the context of WebAssembly, even if the TLS stack were segregated into its own module, that would do nothing to mitigate a Heartbleed-style vulnerability.

                                                  1. 2

There are other environments where the C standard is vague enough to allow C to compile to a managed, safe environment. Speaking as the local AS/400 expert: C there compiles to managed bytecode, which is then compiled again by a trusted translator.

                                                    1. 1

                                                      I try to keep up with folks’ skills in case opportunities arise. Do you know both AS/400 and z/OS? Or just AS/400?

                                                      Also interested in you elaborating on it making C safer.

                                                      1. 3

                                                        No, z is a separate thing I don’t know much about.

Because C on AS/400 (or i, or whatever IBM marketing calls it this week) is managed code, it does things like checking the validity of pointers to prevent buffer overflows. It does that via hardware-enforced tagging. To prevent you from cheating it, the trusted translator is the only program allowed to generate native code. (AIX programs in the syscall emulator, however, can generate native code, but are then subject to normal Unix process boundaries and a kernel very paranoid about code running in a sandbox.) The tags are also used as capabilities to objects in the same address space, which the system uses in place of a traditional filesystem.

                                                        1. 1

                                                          Thanks. That makes sense except for one thing: hardware-enforced tagging. I thought System/38’s hardware enforcement was taken out with things just type- or runtime-checked or something at firmware/software level. That’s at least how some folks were talking. Do you have any references that show what hardware checking the current systems use?

                                                          1. 1

                                                            No, tags and capabilities are still there, contrary to rumours otherwise.

                                                            The tagging apparatus on modern systems is undocumented, and as a result I know little about it, but from what I’ve heard it’s definitely in the CPU.

                                                            1. 1

                                                              Ok. So, I guess I gotta press POWER CPU experts at some point to figure it out or just look at the ISA references. Least I know there wasn’t something obvious I overlooked.

                                                              EDIT: Just downloaded and searched the POWER ISA 3 PDF for “capabilities” and “pointer” to see what showed up. Nothing about this. They’re either wrong or it’s undocumented, as they told you. If it’s still there, that’s a solid reason for building critical services on top of IBM i even if commodity stuff had the same reliability. Security would be higher. Still gotta harden them, of course.

                                                2. 1

                                                  Sort of. It has a much larger set of problems than the safe systems languages that compete with it. There’s less to know with them unless you choose to dance with the devil in a specific module. Some were more productive, with faster debugging, too. So it seems like C programmers force themselves to know and do more than necessary, at least at the language level.

                                                  Now, pragmatically, the ecosystem is so large and mature that using, or at least outputting, C might make sense in a lot of projects.

                                                1. 2

                                                  Neat! Any idea what’s happening for HTTP/3 & QUIC?

                                                  1. 2

                                                    The RFC describes how WebSocket frames are to be encapsulated in HTTP/2 frames, but does not specify how HTTP/2 frames themselves are represented or transmitted. So I assume any HTTP/2 implementation supporting WebSockets will continue to support WebSockets when adding HTTP/3 or QUIC support. I would wager that most major HTTP/3 implementations will support both WebSockets and QUIC.

                                                    1. 1

                                                      Sorry, I’ve no extra insight.

                                                    1. 2

                                                      On the HN version, syrusakbary chimed in to add that they were working on something similar with a focus on maintainability. They also got Nginx running. Reposting here in case any wasm fans find it interesting.

                                                      1. 3

                                                        Their claims seem to be exaggerated. They don’t have fork() or sigaction() working, and it’s not a very useful server without them. Wasmjit is also focused on maintainability, so I’m not sure what they mean by that.

                                                        1. 1

                                                          Wasmjit: it promised something similar to our goals, but after digging in and trying to contribute to it we realized that all the architecture-specific instructions were hardcoded into the runtime itself. This meant a very steep process to implement WebAssembly instructions, and a considerable effort when reaching other architectures (basically the same effort as creating an IR engine like LLVM by hand). And sadly, the library didn’t have a single test, either.

                                                          Seems pretty clear-cut to me, though I can’t comment on its veracity.

                                                          1. 2

                                                            I was referring to their claims on “creating the first WebAssembly runtime able to run Nginx” which is exaggerated since they haven’t implemented a whole slew of critical functionality.

                                                        2. 3

                                                          They also posted something today about it, with a quick comparison to the other projects including wasmjit: https://medium.com/@syrusakbary/running-nginx-with-webassembly-6353c02c08ac

                                                        1. 7

                                                          This is the wrong question to ask. WebAssembly can’t be slower or faster than Javascript; different implementations of WebAssembly or JavaScript runtimes can be slower or faster than each other. Javascript runtimes saw enormous improvements in performance over a period where the language saw relatively little change. It’s very likely that the relative performance of future runtimes could change a lot; WebAssembly is much newer than Javascript and as far as I know doesn’t even have any JITs, but it’s also possible that potential performance gains are smaller there.

                                                          It’s true that there might be some fundamental challenges that make implementing a fast interpreter or compiler relatively more difficult. Garbage collection poses challenges for some workloads that want to be “fast” or at least pause-free, so WebAssembly is attractive to many developers for that reason. I’m sure there exist at least some optimization opportunities for higher-level Javascript that don’t exist in WebAssembly. But none of these points, which are vastly more nuanced than “X is faster than Y”, are addressed here.

                                                          Further, there’s no mention of the exact toolchain used here. My understanding from dabbling in Rust targeting WebAssembly is that changes in compiler versions, settings, and optimization tools can make enormous differences in both speed and compiled size; I know some people are reporting much better results from using the direct wasm32-unknown-unknown target instead of emscripten, which this benchmark is likely to use.

                                                          1. 2

                                                            A secondary market for concert/event tickets could bypass the pretty extortionary and abusive middlemen that currently dominate that market. Of course that depends on: a stable value-coin; participation of event promoters; dramatically increased scalability, reliability, efficiency and usability of blockchain and off-chain tools.

                                                            I think anything like this that depends on allocating scarce intangible assets presents a compelling use case. But all of it is contingent on dramatic improvements in technology. Right now there are some really interesting things going on in the Ethereum world involving consensus, sharding, and scalability that, if the promises are lived up to, might get us closer to where we’d need to be. As far as I can tell, most people involved in the cryptocurrency space are unhealthily fixated on asset prices and speculation, probably to the detriment of making meaningful progress toward building these things into useful tools.

                                                            1. 3

                                                              I kind of agree, but a “stable value-coin” is a “trusted third party”, I think.

                                                              1. 5

                                                                Yes. This is an absolutely unavoidable point that people need to accept and not try to find a clever technical solution for, because none exists! The US dollar is useful to people largely because the Federal Reserve has a staff that produces and considers numerous reports of real-world consumer prices and adjusts policy to keep them changing at a modest and predictable rate. There is no way a fixed set of rules in a blockchain can achieve this: we are inextricably bound to these real-world institutions if we want the stable prices average citizens will demand. It’s frustrating to watch cryptocurrency enthusiasts very slowly re-learn, or refuse to learn, these basic tenets of monetary policy.

                                                                Given this, I feel that being able to trust arbitrary counterparties for some variety of transactions could still be a very useful tool in the future.

                                                                1. 3

                                                                  I think for at least the early ones it was less ignorance of monetary policy, and more a fundamental philosophical disagreement.

                                                              2. 1

                                                                Can’t they just sell them centralized online with a limit for each registered customer? Could even bootstrap it at one event by giving customers cards with unique codes on them for use with an app and/or web site. They get them with instructions when they come in. That establishes the unique IDs that are used to buy tickets for future events.

                                                                Whatever is left of the problem should be minimal. If each supplier does this, third parties will show up selling them a solution trying to grab the market. It will get cheaper for those selling tickets.

                                                              1. 4

                                                                This is a very interesting project I’m excited to follow as it develops.

                                                                I’d love to know more, however, about how IPC is implemented such that the performance penalty doesn’t make this prohibitively slow versus monolithic kernels. I know that syscalls in Linux, for example, require context switches that are very expensive relative to a lot of operations. I’ve done some fruitful optimizations that involve simply minimizing the number of syscalls. But the design of Fuchsia implies a much higher number of inter-process context switches. From reading the design docs, it appears sending a message across a TCP connection over wifi could require messages to pass through several isolated processes (netstack, ethernet driver, WLAN MLME driver, softmac driver). In Linux this would be a single syscall.

                                                                I won’t naively assume that syscalls are inherently expensive, and I assume Fuchsia’s syscalls are much cheaper than Linux’s. But is this quantified yet? If Fuchsia’s are much cheaper, what architectural and implementation decisions were made to make that true? How does Fuchsia differ from Linux in this respect?
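
                                                                As a rough way to put numbers on “syscalls are expensive relative to a lot of operations”, here’s a hedged sketch (all names are mine, and nothing here is Fuchsia-specific): time a cheap syscall-backed operation from the JVM in a loop. The absolute figure is noisy and includes JVM overhead, so treat it as order-of-magnitude only.

                                                                ```scala
                                                                // Hedged sketch: estimate per-call cost of a cheap syscall-backed
                                                                // operation (stat via Files.exists). Numbers include JVM overhead
                                                                // and vary by hardware and kernel; useful only for rough comparison.
                                                                import java.nio.file.{Files, Paths}

                                                                object SyscallCost {
                                                                  def estimateNanosPerCall(iterations: Int): Double = {
                                                                    val p = Paths.get("/tmp")
                                                                    // Warm up so the JIT compiles the loop before we measure.
                                                                    (1 to 10000).foreach(_ => Files.exists(p))
                                                                    val t0 = System.nanoTime()
                                                                    (1 to iterations).foreach(_ => Files.exists(p))
                                                                    (System.nanoTime() - t0).toDouble / iterations
                                                                  }

                                                                  def main(args: Array[String]): Unit =
                                                                    println(f"~${estimateNanosPerCall(100000)}%.0f ns per Files.exists call")
                                                                }
                                                                ```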

                                                                1. 15

                                                                  It appears that this release will run arbitrary JVM bytecode using the system JDK if it’s included in a Blu-Ray ISO (see git.videolan.org). As far as I can tell, it uses a SecurityManager to attempt to sandbox the code. Here’s a summary of the efficacy of this approach: https://tersesystems.com/blog/2015/12/29/sandbox-experiment/

                                                                  I don’t see any information on even a cursory security audit of this component. Is this alarming to anyone else?

                                                                  1. 5

                                                                    Is this alarming to anyone else?

                                                                    They put Java into some standard for Blu-Ray. A lot of places and things were using it thanks to the big marketing push back in the day. Then, to use that or part of it, you need to run Java. As usual with Java, using it is a security risk. This kind of thing happening in tech or standards designed by big companies for reasons other than security is so common that it doesn’t even alarm me anymore. I just assume some shit will happen if it involves codecs or interactive applications.

                                                                    Old best practice is to run Internet apps and untrustworthy apps on a dedicated box; netbooks got pretty good for that. Substitute a VM if trading security for cost/convenience. Mandatory access control is next, with low assurance. After that, you’re totally toast.

                                                                    1. 5

                                                                      Like @nickpsecurity said, it is used in a lot of places, so not including it is the same as raising a middle finger to your users (they cannot watch their expensive discs), and they’ll go use some other application, which is probably even less secure. We really cannot make ordinary users stop wanting to use their goods just because we now know they are insecure. Having a secure system does not matter if no one uses it.

                                                                    1. 5

                                                                      Li Haoyi has a fantastic piece that’s simultaneously the best explanation of how Scala’s de facto official build tool SBT works and a great description of its fundamental problems: http://www.lihaoyi.com/post/SowhatswrongwithSBT.html

                                                                      I share most of his frustrations, both about the fundamental design issues and the incidental issues that plague users. However, I think most build systems are pretty dreadful. The only one I’ve used that I’ve been really enthusiastic about is Nix, but it still has many incidental issues that should be addressed.

                                                                      In general, I think any build tool that thinks of things in terms of a mutable directory full of files that need to be poked and prodded with tools in the correct order is not the way, fundamentally, we should be thinking about things. Thinking of things in terms of a chain of pure functions could make these things conceptually simpler, faster, and more reliable.
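
                                                                      A tiny, hypothetical sketch of that last idea (all names are mine): if each build step is a pure function over immutable inputs, the whole build is just function composition, and any step’s output can be cached by a hash of its inputs, much as Nix keys store paths by the hash of a derivation.

                                                                      ```scala
                                                                      // Hypothetical sketch: build steps as pure functions over immutable
                                                                      // values, not tools poking a mutable directory of files.
                                                                      object PureBuild {
                                                                        final case class Sources(files: Map[String, String])
                                                                        final case class Objects(objs: Map[String, String])
                                                                        final case class Binary(contents: String)

                                                                        val compile: Sources => Objects =
                                                                          s => Objects(s.files.map { case (name, src) => (name + ".o", "compiled:" + src) })

                                                                        val link: Objects => Binary =
                                                                          o => Binary(o.objs.values.mkString("|"))

                                                                        // Same inputs always produce the same binary, so results are
                                                                        // trivially cacheable and steps can run in dependency order.
                                                                        val build: Sources => Binary = compile.andThen(link)
                                                                      }
                                                                      ```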

                                                                        1. 3

                                                                          Quasar implements continuations by rewriting JVM bytecode. The goal here appears to be to implement similar functionality at a lower level and let existing JVM bytecode transparently access it. It’s not entirely clear from this proposal how different the implementation will be, however.

                                                                        1. 39

                                                                          This is a misleading headline. Twitter has announced they are implementing a separate, explicitly experimental compiler from scratch. I don’t think it’s fair to characterize this as a fork.

                                                                          1. 7

                                                                            Even more: this compiler will not support all of Scala’s features (they don’t know yet which features will be dropped).

                                                                            1. 1

                                                                              Which is basically forking it, since it’s going to have its own set of features that are a subset of Scala’s.

                                                                          1. 9

                                                                            Idris looks really well designed, and I think these improvements are actually quite significant. Strictness by default is a game-changer for me; apparently the records and monads are more convenient to use (and there are effects, too? Not sure how experimental they are). If Idris were self-hosted, produced good static binaries with performance comparable to OCaml, and had a package manager, I would definitely give it a serious try.

                                                                            1. 6

                                                                              Idris also has a quite buggy implementation at the moment, but like everything else you mentioned, it is a solvable problem. I think it’s a contender for a widely used industrial language in the future. Though at the moment it’s mainly used by people with pretty sophisticated FP knowledge, I think its dependent types and effect system may ultimately become something that’s easier for newcomers to understand than a lot of Haskell is.

                                                                              1. 7

                                                                                They are pretty unapologetic about 1.0 not being industry-grade, and it is not quite the goal of the language:

                                                                                Will version 1.0 be “Production Ready”?

                                                                                Idris remains primarily a research tool, with the goals of exploring the possibilities of software development with dependent types, and particularly aiming to make theorem proving and verification techniques accessible to software developers in general. We’re not an Apple or a Google or [insert large software company here] so we can’t commit to supporting the language in the long term, or make any guarantees about the quality of the implementation. There’s certainly still plenty of things we’d like to do to make it better.

                                                                                All that said, if you’re willing to get your hands dirty, or have any resources which you think can help us, please do get in touch!

                                                                                They do give guarantees for 1.0:

                                                                                Mostly, what we mean by calling a release “1.0” is that there are large parts of the language and libraries that we now consider stable, and we promise not to change them without also changing the version number appropriately. In particular, if you have a program which compiles with Idris 1.0 and its Prelude and Base libraries, you should also expect it to compile with any version 1.x. (Assuming you aren’t relying on the behaviour of a bug, at least :))

                                                                                Don’t get me wrong, I believe Idris is a great language precisely because of that: they want to be primarily a research language, but provide a solid base for the research happening on top of their core. They have a small team and use those resources well for one aspect of the language’s usage. I would highly recommend having a look at it and working with it; this is just something to be aware of.

                                                                                from https://www.idris-lang.org/towards-version-1-0/

                                                                                1. 5

                                                                                  Haskell is great because it’s where a lot of these ideas were tested and figured out. But it also has the cruft of legacy mistakes. Haskell can’t get rid of them now, but other languages can certainly learn from them.

                                                                              1. 2

                                                                                Example 2 will not compile with the options -Xlint -Xfatal-warnings, which I recommend everyone use. We have a quite large codebase, and it hasn’t been arduous to keep these settings on. Failed exhaustiveness checks are sadly usually only a warning, which is why I highly recommend keeping this setting turned on.
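
                                                                                For illustration, here’s my own minimal sketch (not the article’s example) of the kind of failed exhaustiveness check that is only a warning by default; -Xfatal-warnings promotes it to a compile error instead of a MatchError waiting to happen at runtime.

                                                                                ```scala
                                                                                sealed trait Shape
                                                                                final case class Circle(r: Double) extends Shape
                                                                                final case class Square(side: Double) extends Shape

                                                                                object Areas {
                                                                                  // Missing the Square case: scalac emits a "match may not be
                                                                                  // exhaustive" warning by default; with -Xfatal-warnings this
                                                                                  // fails the build instead of throwing MatchError at runtime.
                                                                                  def area(sh: Shape): Double = sh match {
                                                                                    case Circle(r) => math.Pi * r * r
                                                                                  }
                                                                                }
                                                                                ```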

                                                                                Both wartremover [https://github.com/wartremover/wartremover] and scapegoat [https://github.com/sksamuel/scapegoat] can prevent example 1 by preventing Serializable (or Product or AnyRef) from being inferred. I understand that “use a third-party linter” isn’t the answer a lot of people want, but it’s quite easy to set up and integrate into an SBT workflow.
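
                                                                                As a hypothetical illustration of the inference those linters catch (again my own sketch): when two branches share no useful common supertype, scalac infers an unhelpful least upper bound.

                                                                                ```scala
                                                                                object LubExample {
                                                                                  def flag: Boolean = true

                                                                                  // The least upper bound of Some[Int] and Nil is inferred as
                                                                                  // something like Product with Serializable, which is almost
                                                                                  // never the type you wanted; wartremover/scapegoat flag it.
                                                                                  val inferred = if (flag) Some(1) else Nil

                                                                                  // An explicit annotation forces a sensible type (and a compile
                                                                                  // error if a branch doesn't fit it):
                                                                                  val explicit: Option[Int] = if (flag) Some(1) else None
                                                                                }
                                                                                ```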

                                                                                As far as the other problems, they are all syntax-related, and I grant Scala has some annoying ambiguities there.

                                                                                1. 1

                                                                                  So this is where I get confused. What the license seems to be saying is that if I engage Facebook in litigation over some kind of patent of theirs, the license that they are granting me (to use the software freely, etc.) is revoked. If that happens, then what? Would it then be technically illegal to use React? If so, under what law – copyright law? Is this really enforceable?

                                                                                  What I’m really getting at is, is RMS being a hard-ass and this is actually pretty typical, or is Facebook being sly and hiding restrictions in their software that don’t need to be there?

                                                                                  1. 3

                                                                                    Yes, the license is enforced under copyright law. It’s the same mechanism the GNU license uses to enforce its restriction that you must distribute source code with your binary. If you don’t fulfill the requirements of the license, then you don’t have a license to use the code any more.

                                                                                    What the “first generation” of open source licenses doesn’t deal with is patent rights, which means you can end up with a license to copy the code but not a license to use the code. Various attempts have been made to address that, including this FB language.

                                                                                    Whether and how all this is enforceable is the eternal question of all open source licenses, and indeed all licenses. :)

                                                                                    1. 2

                                                                                      No. The patent grant is an additional grant of rights to React users, above and beyond the BSD license. The patent grant may be revoked if you sue (or countersue) Facebook, but the original BSD license cannot be revoked.