1. 3

    This is very nice in theory, and having a simple queuing mechanism without any additional infrastructure is appealing. BUT, in practice we hit several problems:

    • It plays poorly with connection pooling.
    • If you’re using it on the JVM, the driver linked here has several issues and is not fully compatible with the official PostgreSQL JDBC driver.
    • That driver also seems to have undefined or hard-to-understand behavior around network partitions and dropped connections.
    • Notifications can be “blocked” by long-running queries on the same connection.
    • While it is useful to take advantage of transactions when sending messages, transactions are per-connection, and this interacts poorly with the aforementioned problem of queries sharing a connection.
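
    For anyone curious what the usual workaround looks like, here’s a minimal sketch against recent versions of the official pgjdbc driver (the channel name and JDBC URL are made up; error handling omitted):

    ```scala
    import java.sql.DriverManager
    import org.postgresql.PGConnection

    // LISTEN/NOTIFY delivery is per-connection, so this connection must be
    // dedicated and kept out of the pool.
    val conn = DriverManager.getConnection("jdbc:postgresql://localhost/app")
    conn.createStatement().execute("LISTEN jobs")
    val pg = conn.unwrap(classOf[PGConnection])

    while (true) {
      // Blocks for up to 10s. A long-running query on this same connection
      // would delay delivery, which is exactly the problem described above.
      val notifications = pg.getNotifications(10000)
      if (notifications != null)
        notifications.foreach(n => println(s"${n.getName}: ${n.getParameter}"))
    }
    ```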

    All in all, this feature was useful but a bit painful and we ultimately ended up dropping it.

    1. 23

      The cynicism in this thread is pretty stunning. Described here is a plan to design and implement a fine-grained, library-level capability-based security system for a platform that’s seeing massive adoption. While this isn’t novel from a research perspective, it’s, as far as I can tell, the first time this stuff is making it down from the ivory tower into a platform that’ll be broadly available for application developers.

      Lots of folks seem pretty scornful about the amount of stuff in current browsers, even though non-browser use cases are explicitly highlighted in this piece. There’s an implication that this is a reinvention of things in BSD, even though this approach is explicitly contrasted with OS process-level isolation in this piece. There’s the proposed alternative of using only highly trusted, low-granularity dependencies, which is fair, but I think that ship has sailed, and additionally it seems like avoiding the problem instead of solving it.

      I’m a bit disappointed that a set of tools and standards that might allow us developers to write much safer code in the near future has garnered this kind of reaction, based on what I see as a cursory and uncharitable read of this piece.

      1. 10

        and additionally it seems like avoiding the problem instead of solving it.

        Avoiding a problem is the best category of solution. Why expend resources to fix problems you can avoid in the first place?

        1. 7

          I believe “avoiding” here was meant as “closing one’s eyes to [the problem]” rather than the more virtuous “removing the existence of [the problem]”.

          In a world where even SQLite was found to have vulnerabilities, I believe any alternative solution based on some handwaved “highly trusted huge libraries” is a pipe dream. Please note that in actual high-trust systems, AFAIK, limiting the permissions given to subsystems is one of the basic tools of the trade with which you go and actually build them. A.k.a. “limiting the Trusted Computing Base”, i.e. trying to maximally reduce the amount of code that has access to anything important, isolating it as much as possible from interference, and then verifying it (which is made easier because the previous step reduces the amount of code needing verification).
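
          To make that concrete, here’s an illustrative-only sketch of the object-capability idea (all names here are invented): authority is an object reference you pass in, not ambient access a dependency can grab for itself.

          ```scala
          // The untrusted library can only touch what it is explicitly handed:
          trait FileRead { def read(path: String): Array[Byte] }

          class Thumbnailer(images: FileRead) {
            def thumbnail(path: String): Array[Byte] =
              images.read(path).take(64) // stand-in for real work; no ambient
                                         // filesystem, network, or env access
          }

          // The caller narrows authority before passing it down:
          val imagesOnly: FileRead = path => {
            require(path.startsWith("/var/app/images/"), s"outside capability: $path")
            java.nio.file.Files.readAllBytes(java.nio.file.Paths.get(path))
          }
          val lib = new Thumbnailer(imagesOnly)
          ```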

          If you’re interested in recent projects trying to break into the mainstream with the capability-based approach suggested in the OP (IIUC), see e.g.: seL4, GenodeOS, Fuchsia, the Pony language.

          That said, I’m not an expert. I very much wonder what @nickpsecurity’s take on the OP is!

          1. 2

            Been pulling long shifts so initially no reply. Actually, I agree with some that this looks like a marketing fluff piece rather than a technical write-up. I’d have ignored it anyway. Ok, since you asked… I’d like to start with a bottom-up picture of what security means in this situation:

            1. Multicore CPU with shared, internal state; RAM; firmware. Isolated programs sharing these can have leaks, esp cache-based. More attacks on the way here. If SMP, process-level separation can put untrusted processes on their own CPU and even DIMMs. The browsers won’t likely be doing that, though. We’ve seen a few attacks on RAM appear that start with malicious code running on the machine, too. These kinds of vulnerabilities are mostly found by researchers, though.

            2. OS kernel. Attacked indirectly via browser functionality. No different than current Javascript risk. Most attacks aren’t of this nature.

            3. Browser attack. High risk. Hard to say how WebAssembly or capability-based security would make it different given the payload is still there hitting the browser.

            4. Isolation mechanisms within the Javascript engine trying to control access to browser features and/or interactions between Javascript components. Bytecode Alliance sounds like that for WebAssembly. Note this has been tried before: ADsafe and Caja. I think one got bypassed. The bigger problem was getting wide adoption by actual users.

            So, the real benefits seem to be all in 4, leaving 1-3 open. Most of the attacks on systems have been in 3. Protecting in 4 might stop some attacks on users of web pages that bring in 3rd-party content. Having run NoScript and uBlock Origin, I’d say they do argue that a lot of mess could be isolated at the JS/WA level. Stuff still slips through often, due to being required to run the sites.

            You see, most admins running those sites don’t seem to care that much. In many cases, they could avoid the 3rd-party risks but bring them in anyway. The attack opening could come in the form of third-party content the user allowed interacting with malicious content. Would capability-secure WA bytecode help? Idk.

            Honestly, all I can do is illustrate where and how the attacks come in. We can’t know what this can actually do in practice until we see the design, the implementation, a list of all the attack classes, and cross-reference them against each other. It’s how you evaluate all of these things. I’d like to see a demo blocking example attacks with sploit developers getting bounties and fame for bypasses. That would inspire more confidence.

          2. 4

            Agreed, this is no silver bullet.

            WASM is nice and has lots of potential. A lot of people, when first seeing the spec, also thought: “cool, that’ll run on the server too over time”. There are use cases where it is interesting. I use it in my projects here and there.

            But come on: grand plans of secure micro-services and nano-services made from woven carbon-fiber nano-tubes, and a grandiose name, Bytecode Alliance. It’s like a conversation from the basement of numerous stoner parties. I’m pretty sure a lot of people here have been writing p-code ever since, well, p-code.

            I don’t want to undermine the effort of course, and all things that improve tooling or foster creativity are greatly appreciated.

            They should tone down the rhetoric and maybe work with the WASMR folk. Leaping out with a .org after your name comes with a great deal of responsibility.

            There is lots more that needs doing with WASM than simply slapping it in a server binary and claiming it is a gift from heaven, when the actual gift from heaven was the Turing machine.

            All that said, please keep it up. The excitement of new generations for abstract machines is required, meaningful and part of the process.

            1. 2

              Almost every programming language and set of tools uses third-party dependencies to some extent these days. There are clear benefits and increasingly clear costs to this practice, with both contingent on tooling and ecosystem. I agree that the node.js model of a sprawling number of tiny dependencies is a quite poor equilibrium; the Rust world seems to be taking this to heart by rolling things back into the standard library and avoiding the same kind of sprawl.

              Regardless, third-party dependencies from heterogeneous sources will continue to be used indefinitely, and having tools to make them safer and more predictable to use will probably bring significant benefits. This kind of tool could even be useful for structuring first-party code to mitigate the effects of unintentional security vulnerabilities or insider threats.

              Saying “just avoid doing it” is a lot like abstinence-only sex ed: people are almost certainly still going to do it, so instead we should focus on harm reduction and a broad understanding of risks.

              1. 1

                but that ship has sailed, remember

              2. 5

                I’ve given this some more thought, and I think the reason this raises my eyebrows is a combination of things. It’s a long article on a PR-heavy site that tries to give the impression that this very young technology (WASM) is the only way of solving this problem; it doesn’t mention prior art or previous working attempts (try this on the JVM); and it doesn’t acknowledge what this doesn’t solve and the implications of this model (now you can’t trust any data that comes back from your dependencies, even if they’re well-behaved in the eyes of the Alliance).

                Every new thing in security necessarily gets a certain amount of cynicism and scrutiny.

                1. 9

                  So, drawing a comparison to the JVM is fair.

                  IMO, the key differences are:

                  • How lightweight it is. The overhead of loading the V8 runtime is tiny by comparison to the JVM (on my machine, it takes less than 30ms to run a ‘hello world’ Node process).
                  • That it has been designed for embedding in other programs. Embedding a JVM in a non-Java project is nontrivial at best; embedding a working wasm runtime in a C project is only slightly harder than, say, Lua.
                  1. 2

                    True, true. I was thinking of something like an experiment to demonstrate how nanoprocess-ifying known vulnerable or malicious dependencies (within the JVM runtime) solves a security issue.

                2. 3

                  We are at the mocking level of acceptance! This means that it will be a thing, a real thing in just a matter of months!

                  1. 0

                    Web is gonna Web, so we all know how this will work out.

                    “Web standards experts” continuously add stuff, and then the next few decades are spent plugging the holes they opened, because once it’s added, “we can’t break the web”.

                    If they were supposed to design a toothbrush, people would die.

                  1. 2

                    Location: Portland, OR, USA | Remote

                    Type of Work: Software Engineer

                    Hours: Full Time, Contract

                    Contact: PM or stephen.judkins@gmail.com

                    Description: I’ve spent the last four years writing mostly Scala but have broad experience in the software industry, and am willing to work with all sorts of technologies. I have written front-end code in the past. I’m currently running a small team that focuses on developer ergonomics, build tooling, large-scale refactoring, and support of other developers, but have led teams that have worked with non-technical stakeholders and delivered features. I value clear, robust, reliable, and efficient code. I understand and am comfortable dealing with cryptography-related code. I am familiar with performance-sensitive networking and could be valuable in that context.

                    1. 4

                      C: 0.73 new features per year, measured by the number of bullet points in the C11 article on Wikipedia which summarizes the changes from C99, adjusted to account for the fact that C18 introduced no new features.

                      And that is why I love C. Yes, it has its problems (of which there are many), but it’s a much smaller, bounded set of problems. The devil I know. Many other languages are so large, I couldn’t even know all of the devils if I tried.

                      1. 25

                        The devil I know

                        if the devil you know is an omnipotent being of unlimited power that generations of warriors have tried to fight and never succeeded because it’s just too powerful, then I would argue that it might be worth trying to choose a different evil to fight.

                        Even in 2019, 70% of security vulnerabilities are caused by memory-safety issues that just would not happen if the world weren’t running on languages without memory safety.

                        1. 1

                          I don’t think being memory safe is enough for a programming language to be a good C replacement.

                          1. 18

                            No. It’s not enough. But IMHO it’s required.

                            1. 4

                              … a requirement that C, incidentally, does not fulfil. Now that memory-safe low-level languages have swum into our ken, C is no longer a good C replacement. ;-)


                              Edited to add a wink. I meant this no more seriously than pub talk – though I believe it has a kernel of truth, I phrased it that way mainly because it was fun to phrase it that way. There are many good reasons to use C, and I also appreciate those. (And acknowledge that the joke does not acknowledge them.)

                              1. 3

                                that is my point.

                                1. 2

                                  Hi, sorry, I spent a lot of time on my edit – everything below the line plus the smiley above it wasn’t there when you replied. Sorry to readers for making this look confusing.

                                  It is indeed your point, and I agree with it.

                            2. 6

                              Nobody is arguing that it’s sufficient, but it is necessary.

                              If I were to develop a new language today, a language that was as unsafe as C but had lots of shiny new features like ADTs and a nice package manager and stuff, I’d never get traction. It would be ridiculous.

                              1. 1

                                I don’t know. PHP and C are still pretty popular. You just target those markets with selective enhancements of a language that fits their style closely. ;)

                            3. 1

                              Doesn’t WebAssembly allow unchanged C to be memory safe?

                              1. 2

                                Sort of, but not really. Unmanaged C isn’t allowed to escape the sandbox it’s assigned, but there are still plenty of opportunities for undefined behavior. Process-level isolation in OSes provides similar guarantees. In the context of WebAssembly, even if the TLS stack were segregated into its own module, it would do nothing to mitigate a Heartbleed-style vulnerability.

                                1. 2

                                  There are other environments where the C standard is vague enough to allow C to compile to a managed and safe environment. As the local AS/400 expert, I’ll note that C there compiles to managed bytecode, which is then compiled again by a trusted translator.

                                  1. 1

                                    I try to keep up with folks’ skills in case opportunities arise. Do you know both AS/400 and z/OS? Or just AS/400?

                                    I’d also be interested in you elaborating on how it makes C safer.

                                    1. 3

                                      No, z is a separate thing I don’t know much about.

                                      Because C on AS/400 (or i, whatever IBM marketing calls it this week) is managed code, it does things like checking the validity of pointers to prevent things like buffer overflows. It does that by injecting hardware-enforced tagging. To prevent you from cheating it, the trusted translator is the only program allowed to generate native code. (AIX programs in the syscall emulator, however, can generate native code, but are then subject to normal Unix process boundaries and a kernel very paranoid about code running in a sandbox.) The tags are also used as capabilities to objects in the same address space, which it uses in place of a traditional filesystem.

                                      1. 1

                                        Thanks. That makes sense except for one thing: hardware-enforced tagging. I thought System/38’s hardware enforcement was taken out, with things just type- or runtime-checked at the firmware/software level or something. That’s at least how some folks were talking. Do you have any references that show what hardware checking the current systems use?

                                        1. 1

                                          No, tags and capabilities are still there, contrary to rumours otherwise.

                                          The tagging apparatus on modern systems is undocumented, and as a result I know little about it, but it’s definitely in the CPU, from what I’ve heard.

                                          1. 1

                                            Ok. So, I guess I gotta press POWER CPU experts at some point to figure it out or just look at the ISA references. At least I know there wasn’t something obvious I overlooked.

                                            EDIT: Just downloaded and searched the POWER ISA 3 PDF for capabilities and pointer to see what showed up. Nothing about this. They’re either wrong or it’s undocumented, as they told you. If it’s still there, that’s a solid reason for building critical services on top of IBM i’s, even if commodity stuff had the same reliability. Security would be higher. Still gotta harden them, of course.

                              2. 1

                                Sort of. It had a much larger set of problems than the safe systems languages that compete with it. There’s less to know with them unless you choose to dance with the devil in a specific module. Some were more productive with faster debugging, too. So, it seems like C programmers force themselves to know and do more unnecessarily, at least at the language level.

                                Now, pragmatically, the ecosystem is so large and mature that using, or at least outputting, C might make sense in a lot of projects.

                              1. 2

                                Neat! Any idea what’s happening for HTTP/3 & QUIC?

                                1. 2

                                  The RFC describes how WebSocket frames are to be encapsulated in HTTP/2 frames, but does not specify how the HTTP/2 frames themselves are represented or transmitted. So I assume any HTTP/2 implementation supporting WebSockets will continue to support them when adding HTTP/3 or QUIC support. I would wager that most major HTTP/3 implementations will support both WebSockets and QUIC.
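
                                  For reference (hedged from my memory of the RFC, which I believe is 8441): the bootstrap is an “extended CONNECT” request on a new HTTP/2 stream, roughly like this, and the WebSocket frames then flow as DATA frames on that stream:

                                  ```
                                  HEADERS (new stream)
                                    :method = CONNECT
                                    :protocol = websocket
                                    :scheme = https
                                    :path = /chat
                                    :authority = server.example.com
                                    sec-websocket-version = 13
                                  ```

                                  Nothing in that handshake cares how the frames themselves are carried, which is why it should port to HTTP/3.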

                                  1. 1

                                    Sorry, I’ve no extra insight.

                                  1. 2

                                    On the HN version, syrusakbary chimed in to add they were working on something similar with a focus on maintainability. They also got Nginx running. Reposting here in case any wasm fans find it interesting.

                                    1. 3

                                      Their claims seem to be exaggerated. They don’t have fork() or sigaction() working; it’s not a very useful server without them. Wasmjit is also focused on maintainability, so I’m not sure what they mean by that.

                                      1. 1

                                        Wasmjit: it promised something similar to our goals, but after digging in and trying to contribute to it, we realized that all the architecture-specific instructions were hardcoded into the runtime itself. This meant a very steep process to implement WebAssembly instructions, and a considerable effort to reach other architectures (basically the same effort as creating an IR engine like LLVM by hand). And sadly, the library didn’t have a single test, either.

                                        Seems pretty clear-cut to me, though I can’t comment on its veracity.

                                        1. 2

                                          I was referring to their claims on “creating the first WebAssembly runtime able to run Nginx” which is exaggerated since they haven’t implemented a whole slew of critical functionality.

                                      2. 3

                                        They also posted something today about it, with a quick comparison to the other projects, including wasmjit: https://medium.com/@syrusakbary/running-nginx-with-webassembly-6353c02c08ac

                                      1. -4

                                        The upgrade from TCP to QUIC

                                        For a guy who says he knows protocols, he certainly doesn’t know the OSI layers.

                                        1. 10

                                          Are you sure about this? He specifically talks about moving off TCP to a layer 4+5 solution, UDP headers with QUIC inside.

                                          1. 0

                                            I’m very sure. He keeps conflating TCP with QUIC, which are not at the same layer.

                                            1. 13

                                              But they are. Ask yourself: what does a connection mean in a networking context? Previously it was almost always a TCP connection, as that’s what TCP does. Now it can be a non-TCP QUIC connection that does its own connection-handling logic, multiplexing, in-order delivery, etc. That’s the whole point of the QUIC-as-the-transport-layer thing.

                                              People suggested splitting QUIC-the-transport-layer from HTTP/2, and this is essentially what happened. It’s a transport-layer thing with built-in TLS that can handle arbitrary application protocols on top of it, not just HTTP.

                                              1. 5

                                                They are at the same layer. I suppose one could imagine a QUIC connection as having two transport protocols (UDP and QUIC) but I just think of it as one most of the time. The reason UDP is there is just because it wouldn’t work over the internet any other way, but you could run QUIC on top of IP if you wanted.

                                                1. 1

                                                  You certainly could, but it would never work on the real internet because of middleboxes that will only pass TCP and UDP. This is also what is stifling SCTP adoption.

                                                  The transport protocol is UDP, not QUIC, so it would be good to end the ambiguity when discussing QUIC.

                                                  1. 2

                                                    There’s no reason why it couldn’t work one day even though it doesn’t work now. QUIC is a transport protocol. It provides all the features of a transport protocol. What do you call SCTP-over-UDP then? Just UDP?

                                                    1. 1

                                                      SCTP isn’t over UDP. I’m not aware of any implementation in the wild that attempts this. SCTP has its own implementation in OS kernels (Linux, FreeBSD) beside TCP and UDP. It’s not “over UDP”. But middlebox firewalls / shaping devices tend to drop any traffic that is not ICMP, TCP, UDP, or IPSEC, which is why SCTP has never gained traction, even though it is a superior protocol for many situations, especially mobile, where seamless connection roaming between cellular and WiFi would be very much welcomed. Instead we have to live with situations like: some services on iOS devices use MPTCP, which only works with Apple services like Siri, because very few servers on the internet have MPTCP support in their kernels.

                                                      edit: I’m not an expert on SCTP, but I’ve certainly never heard of it being used over UDP. Would be curious to learn more if you’ve got a source.

                                                      edit2: correct acronym for Multipath TCP is MPTCP

                                                      1. 3

                                                        RFC 6951.

                                                        1. 0

                                                          Interesting. Is anyone actually using this in the wild or is it just a dead RFC?

                                                          1. 3

                                                            It’s implemented by the FreeBSD SCTP stack.

                                                            1. 0

                                                              Yeah, but is anyone actually using it? :) I know dteske was disappointed at all of the missing/broken dtrace hooks for SCTP in FreeBSD

                                                              1. 8

                                                                I don’t think that was the original argument. You claimed QUIC is not a transport protocol because it sits on top of UDP, but that’s just a consequence of how the internet works. I showed you how SCTP tried to work around the problems around NATs by doing exactly the same: transmitting packets over UDP.

                                                                1. 4

                                                                  Yes. WebRTC uses SCTP over UDP for its data streams. Google Hangouts, Facebook chat, and Discord all use WebRTC. So a non-trivial portion of internet traffic actually uses it. Further, in this usage it’s implemented with a user-mode library, just like QUIC currently is.

                                                                  1. 1

                                                                    Excellent, thanks for this info!

                                                          2. 1

                                                            And had Google said “your web site ranking will drop if we can’t reach your site via SCTP” then you can bet all those middle boxes would be patched, updated or replaced immediately!

                                                            1. 4

                                                              That only fixes web sites that care about their Google ranking. It doesn’t fix the middle boxes that sit in front of web browsers on corporate intranets and public wifi hotspots, because there’s no website to penalize. It also doesn’t do anything about the deep web sites that aren’t crawled by Google anyway, because you have to log into them.

                                                              I strongly suspect that most of the middle boxes in question are being deployed on those things.

                                                              1. 4

                                                                Those middle boxes are affecting the clients not the servers. Nobody’s going to upgrade their corporate SSL proxy for QUIC if the fallback to HTTP/1.1 is still working fine.

                                                1. 7

                                                   This is the wrong question to ask. WebAssembly can’t be slower or faster than Javascript; different implementations of WebAssembly or Javascript runtimes can be slower or faster than each other. Javascript runtimes saw enormous improvements in performance over a period when the language saw relatively little change. It’s very likely that the relative performance of future runtimes could change a lot; WebAssembly is much newer than Javascript and, as far as I know, doesn’t even have any JITs, but it’s also possible that the potential performance gains are smaller there.

                                                   It’s true that there might be some fundamental challenges that make implementing a fast interpreter or compiler relatively more difficult. Garbage collection poses challenges for some workloads that want to be “fast” or at least pause-free, so WebAssembly is attractive to many developers for that reason. I’m sure there exist at least some optimization opportunities for higher-level Javascript that don’t exist in WebAssembly. But none of these points–which are vastly more nuanced than “X is faster than Y”–are addressed here.

                                                   Further, there’s no mention of the exact toolchain used here. My understanding from dabbling in Rust targeting WebAssembly is that changes in compiler versions, settings, and optimization tools can make enormous differences in both speed and compiled size; I know some people are reporting much better results from using the direct wasm32-unknown-unknown target instead of going through emscripten, which this benchmark is likely to use.

                                                  1. 2

                                                     A secondary market for concert/event tickets could bypass the pretty extortionate and abusive middlemen that currently dominate that market. Of course that depends on: a stable value-coin; participation of event promoters; and dramatically increased scalability, reliability, efficiency, and usability of blockchain and off-chain tools.

                                                     I think anything like this, that depends on allocating scarce intangible assets, presents a compelling use case. But all this is contingent on dramatic improvements in technology. Right now there are some really interesting things going on in the Ethereum world involving consensus, sharding, and scalability that, if the promises are lived up to, might get us closer to where we’d need to be. As far as I can tell, most people involved in the cryptocurrency space are unhealthily fixated on asset prices and speculation, probably to the detriment of making meaningful progress on building these things into useful tools.

                                                    1. 3

                                                      I kind of agree, but a “stable value-coin” is a “trusted third party”, I think.

                                                      1. 5

                                                         Yes. This is an absolutely unavoidable point that people need to accept and not try to find a clever technical solution for, because none exists! The US dollar is useful to people largely because the Federal Reserve has a staff that produces and considers numerous reports of real-world consumer prices and adjusts policy to keep them changing at a modest and predictable rate. There is no way a fixed set of rules in a blockchain can achieve this: we are inextricably bound to these real-world institutions if we want the stable prices average citizens will demand. Cryptocurrency enthusiasts very slowly re-learning–or refusing to learn–these basic tenets of monetary policy is frustrating.

                                                         Given this, I feel that being able to trust arbitrary counterparties for some variety of transactions could still be a very useful tool in the future.

                                                        1. 3

                                                          I think for at least the early ones it was less ignorance of monetary policy, and more a fundamental philosophical disagreement.

                                                      2. 1

                                                         Can’t they just sell tickets centralized online, with a limit for each registered customer? They could even bootstrap it at one event by giving customers cards with unique codes on them for use with an app and/or web site. They get them with instructions when they come in. That establishes the unique IDs that are used to buy tickets for future events.

                                                        Whatever is left of the problem should be minimal. If each supplier does this, third parties will show up selling them a solution trying to grab the market. It will get cheaper for those selling tickets.

                                                      1. 4

                                                        This is a very interesting project I’m excited to follow as it develops.

                                                         I’d love to know more, however, about how IPC is implemented such that the performance penalty doesn’t make this prohibitively slow versus monolithic kernels. I know that syscalls in Linux, for example, require context switches that are very expensive relative to a lot of operations. I’ve done some fruitful optimizations that involved simply minimizing the number of syscalls. But the design of Fuchsia implies a much higher number of inter-process context switches. From reading the design docs, it appears sending a message across a TCP connection over wifi could require messages to be sent across several isolated processes (netstack, ethernet driver, WLAN MLME driver, softmac driver). In Linux this would be a single syscall.

                                                         I won’t naively assume that syscalls are inherently expensive, and I assume Fuchsia’s syscalls are much cheaper than Linux’s. But is this quantified yet? If Fuchsia’s are much cheaper, what are the architectural and implementation decisions that were made to make that true? How does Fuchsia differ from Linux in this respect?

                                                        1. 15

                                                          It appears that this release will run arbitrary JVM bytecode using the system JDK, if included in a Blu-Ray ISO: git.videolan.org As far as I can tell, it uses a SecurityManager to attempt to sandbox. Here’s a summary of the efficacy of this approach: https://tersesystems.com/blog/2015/12/29/sandbox-experiment/
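
                                                           To illustrate the mechanism being relied on, here’s a toy sketch (not a workable sandbox policy; the linked post explains why getting this right is so hard):

                                                           ```scala
                                                           import java.security.Permission

                                                           // Toy illustration: intercept every permission check in the JVM.
                                                           // A real sandbox must distinguish trusted host code from disc code
                                                           // (via ProtectionDomains) or it breaks the application itself.
                                                           System.setSecurityManager(new SecurityManager {
                                                             override def checkPermission(perm: Permission): Unit =
                                                               perm match {
                                                                 case p: java.io.FilePermission =>
                                                                   throw new SecurityException(s"denied: $p")
                                                                 case _ => () // allow everything else
                                                               }
                                                           })
                                                           ```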

                                                          I don’t see any information on even a cursory security audit of this component. Is this alarming to anyone else?

                                                          1. 5

                                                            Is this alarming to anyone else?

                                                             They put Java into some standard for Blu-Ray. A lot of places and things were using it thanks to the big marketing push back in the day. Then, to use that or part of it, you need to run Java. As usual with Java, using it is a security risk. This kind of thing happening in tech or standards designed by big companies for reasons other than security is so common that it doesn’t even alarm me anymore. I just assume some shit will happen if it involves codecs or interactive applications.

                                                             Old best practice is to run Internet apps and untrustworthy apps on a dedicated box. Netbooks got pretty good. Substitute a VM if trading for cost/convenience. Mandatory access control is next, with low assurance. After that, you’re totally toast.

                                                            1. 5

                                                               Like @nickpsecurity said, it is used in a lot of places, so not including it is the same as raising a middle finger to your users (they cannot watch their expensive discs), and they’ll go and use some other application, which is probably even less secure. We really cannot make ordinary users stop wanting to use their goods just because we now know they are insecure. Having a secure system does not matter if no one uses it.

                                                            1. 5

                                                               Li Haoyi has a fantastic piece that’s simultaneously the best explanation of how Scala’s de facto official build tool SBT works and a great description of its fundamental problems: http://www.lihaoyi.com/post/SowhatswrongwithSBT.html

                                                               I share most of his frustrations, both about the fundamental design issues and the incidental issues that plague users. However, I think most build systems are pretty dreadful. The only one I’ve used that I’ve been really enthusiastic about is Nix, but it still has many incidental issues that should be addressed.

                                                               In general, I think any build tool that thinks in terms of a mutable directory full of files that need to be poked and prodded with tools in the correct order is not, fundamentally, the way we should be thinking about builds. Thinking in terms of a chain of pure functions could make builds conceptually simpler, faster, and more reliable.
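
                                                               Here’s a toy sketch of what I mean (this is the idea behind Nix, not its real API; all names are invented): each build step is a pure function of its inputs, keyed by their hashes, so caching and reproducibility fall out for free.

                                                               ```scala
                                                               import java.security.MessageDigest
                                                               import scala.collection.mutable

                                                               final case class Artifact(hash: String, bytes: Array[Byte])

                                                               object Build {
                                                                 private val cache = mutable.Map.empty[String, Artifact]

                                                                 private def sha256(s: String): String =
                                                                   MessageDigest.getInstance("SHA-256")
                                                                     .digest(s.getBytes("UTF-8"))
                                                                     .map("%02x".format(_)).mkString

                                                                 // A step never touches a mutable working directory: same inputs
                                                                 // in, same artifact out; the cache key makes re-runs incremental.
                                                                 def step(name: String, inputs: Seq[Artifact])(f: Seq[Artifact] => Array[Byte]): Artifact = {
                                                                   val key = sha256(name + inputs.map(_.hash).mkString(","))
                                                                   cache.getOrElseUpdate(key, Artifact(key, f(inputs)))
                                                                 }
                                                               }
                                                               ```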

                                                                1. 3

                                                                  Quasar implements continuations by rewriting JVM bytecode. The goal here appears to be to implement similar functionality at a lower level and let existing JVM bytecode transparently access it. It’s not entirely clear from this proposal how different the implementation will be, however.

                                                                1. 39

                                                                  This is a misleading headline. Twitter has announced they are implementing a separate, explicitly experimental compiler from scratch. I don’t think it’s fair to characterize this as a fork.

                                                                  1. 7

                                                                     Even more: this compiler will not support all of Scala’s features (they don’t know yet which features will be dropped from support).

                                                                    1. 1

                                                                       Which is basically forking it, since it’s going to have its own set of features that are a subset of Scala’s.

                                                                  1. 9

                                                                     Idris looks really well designed, and I think these improvements are actually quite significant. Strictness by default is a game-changer for me; apparently the records and monads are more convenient to use (and there are effects, too? Not sure how experimental they are). If Idris were self-hosted, produced good static binaries with performance comparable to OCaml, and had a package manager, I would definitely give it a serious try.

                                                                    1. 6

                                                                      Idris also has a quite buggy implementation at the moment, but like everything else you mentioned, it is a solvable problem. I think it’s a contender for a widely used industrial language in the future. Though at the moment it’s mainly used by people with pretty sophisticated FP knowledge, I think its dependent types and effect system may ultimately become something that’s easier for newcomers to understand than a lot of Haskell is.

                                                                      1. 7

                                                                        They are pretty unapologetic about 1.0 not being industry-grade, and it is not quite the goal of the language:

                                                                        Will version 1.0 be “Production Ready”?

                                                                        Idris remains primarily a research tool, with the goals of exploring the possibilities of software development with dependent types, and particularly aiming to make theorem proving and verification techniques accessible to software developers in general. We’re not an Apple or a Google or [insert large software company here] so we can’t commit to supporting the language in the long term, or make any guarantees about the quality of the implementation. There’s certainly still plenty of things we’d like to do to make it better.

                                                                        All that said, if you’re willing to get your hands dirty, or have any resources which you think can help us, please do get in touch!

                                                                        They do give guarantees for 1.0:

                                                                        Mostly, what we mean by calling a release “1.0” is that there are large parts of the language and libraries that we now consider stable, and we promise not to change them without also changing the version number appropriately. In particular, if you have a program which compiles with Idris 1.0 and its Prelude and Base libraries, you should also expect it to compile with any version 1.x. (Assuming you aren’t relying on the behaviour of a bug, at least :))

                                                                        Don’t get me wrong, I believe Idris is a great language precisely because of that: they want to be primarily a research language, but provide a solid base for research happening on top of their core. They have a small team and use those resources well for one aspect of the language usage. I would highly recommend having a look at it and working with it, this is just something to be aware of.

                                                                        from https://www.idris-lang.org/towards-version-1-0/

                                                                        1. 5

                                                                          Haskell is great because it’s where a lot of these ideas were tested and figured out. But it also has the cruft of legacy mistakes. Haskell can’t get rid of them now, but other languages can certainly learn from them.

                                                                      1. 2

                                                                         Example 2 will not compile with the options -Xlint -Xfatal-warnings, which I recommend everyone use. We have a quite large codebase, and it hasn’t been arduous to keep these settings on. Failed exhaustiveness checks are sadly usually only a warning, so I highly recommend people keep this setting turned on.
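
                                                                         As a sketch of what the flags buy you (the type names here are invented), a non-exhaustive match on a sealed hierarchy becomes a compile error instead of a warning plus a runtime MatchError:

                                                                         ```scala
                                                                         // In build.sbt: scalacOptions ++= Seq("-Xlint", "-Xfatal-warnings")
                                                                         sealed trait Event
                                                                         final case class Click(x: Int, y: Int) extends Event
                                                                         final case class KeyPress(code: Int)   extends Event

                                                                         def describe(e: Event): String = e match {
                                                                           case Click(x, y) => s"click at ($x, $y)"
                                                                           // KeyPress is missing: "match may not be
                                                                           // exhaustive" now fails the build
                                                                         }
                                                                         ```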

                                                                        Both wartremover [https://github.com/wartremover/wartremover] and scapegoat [https://github.com/sksamuel/scapegoat] can prevent example 1, by preventing Serializable (or Product or AnyRef) from being inferred. I understand that “use a third-party linter” isn’t the answer a lot of people want, but it’s quite easy to set up and integrate into an SBT workflow.
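
                                                                         If it helps, this is roughly the kind of inference those linters flag (a hedged sketch; I’m not claiming it’s the article’s exact example 1):

                                                                         ```scala
                                                                         // The least upper bound of Left and Right drags in Product and
                                                                         // Serializable, which is almost never the type you actually wanted:
                                                                         val xs = List(Left(1), Right("a"))
                                                                         // inferred: List[Product with Serializable with Either[Int, String]]
                                                                         ```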

                                                                        As far as the other problems, they are all syntax-related, and I grant Scala has some annoying ambiguities there.

                                                                        1. 1

                                                                          So this is where I get confused. What the license seems to be saying is that if I engage Facebook in litigation over some kind of patent of theirs, the license that they are granting me (to use the software freely, etc.) is revoked. If that happens, then what? Would it then be technically illegal to use React? If so, under what law – copyright law? Is this really enforceable?

                                                                          What I’m really getting at is, is RMS being a hard-ass and this is actually pretty typical, or is Facebook being sly and hiding restrictions in their software that don’t need to be there?

                                                                          1. 3

                                                                             Yes, the license is enforced under copyright law. It’s the same mechanism the GNU GPL uses to enforce its restriction that you must distribute source code with your binary. If you don’t fulfill the requirements of the license, then you don’t have a license to use the code any more.

                                                                            What the “first generation” of open source licenses doesn’t deal with is patent rights. Which means you can end up with a license to copy the code but not a license to use the code. Various attempts have been made to address that, including this FB language.

                                                                             Whether and how all this is enforceable is the eternal question of all open source licenses, and indeed all licenses. :)

                                                                            1. 2

                                                                              No. The patent grant is an additional grant of rights to React users, above and beyond the BSD license. The patent grant may be revoked if you sue (or countersue) Facebook, but the original BSD license cannot be revoked.

                                                                            1. 3

                                                                              IANAL, but consider if FB hadn’t created and distributed the patent grant file. My guess is we would probably be worse off. But creating it draws attention to something normally invisible, because I don’t think any software license automatically protects you from patent litigation.

                                                                              1. 20

                                                                                Also not a lawyer, but Apache 2.0 explicitly mentions patents. As I understand it, it says that you have an automatic grant to all relevant patents owned by all contributors, but if you claim one of your patents is infringed by people using the software, you lose your licence to the software.

                                                                                Compare to the React licence, which says you lose your licence to the software if you sue Facebook over any patent at all, regardless of whether it’s related to React or not.

                                                                                The Apache 2.0 licence is a well-regarded Free Software licence, but the React licence, it seems, is not.

                                                                                1. 5

                                                                                  I’ll see myself out. :)

                                                                                  1. 9

                                                                                    Please don’t remove comments even if they’re incorrect–it makes reading threads later a lot harder.

                                                                                    1. 4

                                                                                      OK

                                                                                      1. 3

                                                                                        But editing a comment to state that you retract it would seem valuable. (Elsewhere I’d propose striking it out, e.g. by enclosing the whole shebang in <s></s>; alas, no strikeouts on Lobsters.)

                                                                                    2. 4

                                                                                      This is incorrect. If you sue Facebook over any patent at all, it does not terminate your software license. It does terminate the additional patent grant. So the patent grant plus the BSD license gives you strictly more legal protection than the BSD license alone does. The Apache 2.0 license also revokes the patent grant, but not the entire license, if you initiate patent litigation against the copyright holder.

                                                                                      See the last question at https://code.facebook.com/pages/850928938376556

                                                                                      1. 1

                                                                                        Apache 2.0 covers you, but the point still stands for other libraries licensed under licenses such as BSD. I’d presume GPL(v2) also protects you against patents, but this is just an assumption. It would be nice to get confirmation for this from a source that is at least somewhat official (regarding US jurisdiction).

                                                                                        1. 1

                                                                                          GPLv2 does nothing for patents that MIT or BSD doesn’t. That was one of several reasons for GPLv3.

                                                                                    1. 3

                                                                                       A singleton holds a global static variable. The fact that it’s usually (entirely) private is its only slight saving grace. Making it a little more public introduces issues of thread-safety and referential transparency that didn’t exist before, and wouldn’t exist at all using a saner pattern.
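
                                                                                       A minimal sketch of the hazard and the saner alternative (names invented):

                                                                                       ```scala
                                                                                       // Singleton: global mutable state, and the unsynchronized
                                                                                       // read-modify-write below is a data race across threads.
                                                                                       object Registry {
                                                                                         private var entries = Map.empty[String, Int]
                                                                                         def put(k: String, v: Int): Unit = entries += (k -> v)
                                                                                         def get(k: String): Option[Int] = entries.get(k)
                                                                                       }

                                                                                       // Instance you construct and pass around: ownership, lifetime,
                                                                                       // and synchronization are explicit, and tests get a fresh one.
                                                                                       final class RegistryInstance {
                                                                                         private val entries =
                                                                                           new java.util.concurrent.ConcurrentHashMap[String, Int]()
                                                                                         def put(k: String, v: Int): Unit = entries.put(k, v)
                                                                                         def get(k: String): Option[Int] = Option(entries.get(k))
                                                                                       }
                                                                                       ```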

                                                                                      1. 22

                                                                                         The cost of adding a networked computer to something is now low and getting lower, but the cost of making the software that runs on it secure or reliable has stayed high. With engineer salaries having grown like they have, it may actually be getting more expensive. In the long term, businesses are going to wake up to liability and customer-satisfaction concerns and stop selling insecure, unreliable “internet of things” devices. But I think we’re in for a few years of zero-days on refrigerators, big invasions of privacy, and maybe some injuries and deaths before this happens.

                                                                                        1. 17

                                                                                          There’s a reason they call it the Internet of Things Targets. We’ve already felt this with consumer routers.

                                                                                          1. 4

                                                                                             Interestingly enough, around here we have progressed to the point where you don’t buy your home router… you get a “FREE ROUTER” with your fibre connection.

                                                                                             Actually, the reason it’s free is that, if you watch carefully, every now and then it quietly updates itself and reboots…

                                                                                             I.e. the ISPs have worked out it’s cheaper to bundle a router they can control and update than to handle the service complaints due to hacked routers.

                                                                                             Alas, what worries me more about this story is the implications of it when put together with Snowden’s information.

                                                                                             I.e. the spooks can easily move one very large step beyond just listening…

                                                                                            1. 2

                                                                                              Another reason for that shift is that ISPs have started realizing it might be valuable in its own right to own & control a distributed network of access points. For example all newer Comcast routers are dual-SSID routers. One of the SSIDs is configurable by the customer as their usual home wifi network, and the other one is locked to SSID ‘xfinity’, serving as part of Comcast’s national wifi network.

                                                                                          2. 4

                                                                                             I’d like to see entertainment systems standardized and shared between car manufacturers. Why can’t I just get a double/triple/quad-DIN drop-in replacement entertainment unit at my local electronics shop and have it control exactly the same things the previous one did?

                                                                                             In my 1999 car I replaced the single-DIN tape player with a 3rd-party one, but had to give up the volume buttons. It was worth it. In my 2003 car I replaced the double-DIN stereo with a 3rd-party one, but kept all functionality by getting Pioneer -> ISO -> ISO -> Holden adapters.

                                                                                             Newer cars than that seem to have an all-in-one “iDrive”-style system that controls entertainment and GPS (which is fine) but also air conditioning, electric seats, car internetting, performance mode/suspension, and lap timing. I can do without some of those things, but not being able to control the air conditioning is, at a minimum, an absolute deal breaker. Even if you can live with the loss of the other things, it is still going to cripple your resale value. Why do they have to tie everything together? My friend has a Z4M. The stereo isn’t great, but there is no way he is going to throw out this sort of functionality for a better one.

                                                                                             I just want them to either use standards, so a replacement 3rd-party unit doesn’t downgrade functionality (I know car companies aren’t going to do this), or at least split up the system so that I could replace just the “entertainment system” (basically the screen + stereo tuner) while the air conditioning could still be controlled through it, because the “entertainment system” and the air conditioner would talk to each other over a standard interface (USB/ethernet/wifi with a standard open-source “car communications” protocol).

                                                                                            1. 4

                                                                                               Part of the problem with replacements (in the UK at least) is that they’re easy to steal. One of the large drops in the UK crime rate is because car stereos are now integrated and difficult / impossible to casually take.

                                                                                              A nice(?) side effect is that when considering which car to buy next, you’re more likely to go to the same manufacturer so you don’t have to re-learn a new system for changing radio stations.

                                                                                              1. 1

                                                                                                 I always imagined it was because an average $100 3rd-party stereo is fine for most people and will only resell for, say, $30, so it is only worth stealing a $1000+ 3rd-party stereo. If you are stealing an original stereo, it is only worth it if the stereo is actually good, is usable in your car, and you have, or can crack, the code that locks it to the car/ECU.

                                                                                            2. 3

                                                                                               Depending on how you look at it, a problem on top of this is that technologies keep removing the ability to control which version of software they run. On my Android phone, if it decides to upgrade a piece of software and I say yes, I cannot downgrade it even if there is a huge security hole in it. I expect IoT to be even worse about this.

                                                                                               One of the reasons I loved OS X so much was that it had a user-friendly interface that was pretty good, but I could dive below it and be a power user. The mobile platforms are not catering to this at all. The counter-argument is that it is better because a centralized authority is making sure everyone is up to date. IMO, there is no reason to believe that is true.

                                                                                              1. 1

                                                                                                engineer salaries having grown like they have

                                                                                                Could you cite? I find maybe a 10% increase (relative to inflation) since 1985.

                                                                                                1. 1

                                                                                                   I hope not connecting stuff that shouldn’t be connected to the net will help in the meantime. Unless they carry their own GSM modules…