1. 2
    1. This isn’t a proof at all; it’s an argument. It jumps straight to the end, and assumptions abound without even a coherent proof structure tying them together. I’m not going to spend a lot of time digging into them, but many of them look quite suspect (like their “typical” spending model).

    2. It doesn’t really matter that much what topology lightning ends up with since it’s trustless anyway. I would have a mild preference on a highly “random” graph, but I don’t think it really matters in the effective absence of counterparty risk.

    1. 3

      For a more compelling argument regarding LN, which takes on the linked post, see

      http://www.coppolacomment.com/2018/01/probability-for-geeks.html

      1. 2

        For what it’s worth, I gave a talk with conclusions very similar to this post at Papers we Love on the Interledger protocol (which also includes a mention of Lightning):

        https://www.youtube.com/watch?v=FDIGRKQu3rA

        There is a very important distinction between what Paul Baran called “decentralized” and what he called “distributed”: the former are hub-and-spoke systems, which fundamentally scale with the capacity of the hubs. “Distributed” systems, which Baran suggests optimally have at least 3 links to other nodes, can scale unboundedly as they become both faster and more resilient as more nodes join the network.
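Baran's resilience claim can be sanity-checked with a toy graph experiment (my own sketch, not taken from the talk or Baran's paper): remove a single node from a hub-and-spoke graph and from a mesh where every node has 3 links, then check whether the remaining network stays connected.

```python
# Compare how a hub-and-spoke ("decentralized") graph and a redundant
# ("distributed") graph survive the loss of a single node.

from collections import deque

def connected(adj, removed):
    """BFS over the graph with `removed` deleted; True if one component remains."""
    nodes = [n for n in adj if n != removed]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt != removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(nodes)

# Hub-and-spoke: node 0 is the hub for nodes 1..5.
hub = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}

# "Distributed": ring of 6 nodes plus chords, so every node has 3 links.
ring = {i: [(i - 1) % 6, (i + 1) % 6, (i + 3) % 6] for i in range(6)}

print(connected(hub, removed=0))   # False: losing the hub partitions everything
print(connected(ring, removed=0))  # True: the mesh routes around the loss
```

Losing any spoke leaves the hub graph connected, but losing the hub partitions it entirely; the 3-link mesh survives either.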

        My talk steps through Paul Baran’s graphs, comparing Lightning to a “decentralized” hub-and-spoke network and concluding with Interledger as a truly distributed alternative. That’s not to say Lightning and Interledger are even competing on the same playing field: Interledger could potentially be used to interconnect different implementations of Lightning operating on different blockchains.

    1. 24

      So, I saw two well-known professionals in security falsely claiming cache-based timing channels were discovered in 2005. They were actually discovered in the early-to-mid 1990’s, along with most of x86’s problems, following TCSEC methods one of them calls useless red tape. Summarizing some of my recent comments, I laid out the history of what was done, when, by whom, and with what mitigations, and how actually reading prior work would’ve found new attacks sooner if not prevented them outright. The biggest reason readers see me constantly dropping CompSci submissions is a disturbing trend (esp in INFOSEC) of ignoring prior work, only to rediscover what was in it the hard way after it does a lot of damage. They usually follow by patting themselves on the back for their “discoveries.” It seems to be a cultural thing motivated by social signalling in groups, since most will ignore the work after a popular member claims it has no value. Eliminating this in favor of reading and building on prior work in INFOSEC or programming would’ve drastically accelerated development of good solutions.

      There’s a related note to this: security certifications. Anyone reading HN, Lobsters, or other forums with submissions about this will see security professionals often recommend against them. They often talk like it’s never happened, focus on some failed regulations, or speak speculatively. The work I cite in my submission was prior work by professionals following the TCSEC’s requirements. The systems those methods produced were highly secure when analyzed or pentested, spotting threats such as cache-based timing channels a decade ahead of popular folks and conferences. The methods clearly worked at improving the security of commercial products. So, if you see the topic again, remind them of that so they start with a more honest and informed position: regulation of computer security worked before with great results but had (specific problems here) we need to fix if we do it again. That would be true.

      1. 8

        I can’t imagine how frustrating the past week must have been for you to watch, thanks for sharing all of these.

        1. 9

          Yeah, good guess. I paused to chill out before checking responses. The irritating part, as I told Colin Percival just now, is that we’ve been telling everyone from security folks to VMM builders about this specific work for over twenty years. I personally bring it up on every VMM thread I can if it’s security-focused, since most of the problems and techniques still hold true. The rest can be improved on. It’s extremely irritating just how systematically projects or people claiming to be focused on improving security avoid prior work in CompSci or the high-security field that did exactly that. We see the old problems they ignore reappear after the tech is used by millions of people or for critical functions where the damage will be high. It will happen again before 2019.

        2. 1

          I’m confused: is the purpose of this thread to stroke your own ego and claim you are smarter than Tom Ptacek and Colin Percival?

          “…ignoring prior work to only rediscover what was in it the hard way after it does a lot of damage. They usually follow by patting themselves on the back for their ‘discoveries.’”

          By saying things like this you are downplaying one of the most impressively sophisticated attacks in the history of information security.

        1. 2

          The title of this thread belongs in a tabloid.

          We have absolutely known there are cache-timing sidechannels for an awfully long time. Here, I wrote a blog post about them in 2014:

          https://tonyarcieri.com/cream-the-scary-ssl-attack-youve-probably-never-heard-of

          But… there is simply no comparison. This attack is pretty much hands down the most sophisticated attack I’ve ever seen in my life, and I’ve seen people break into TrustZone with a single null byte overflow.

          The complexity and sophistication of this attack greatly outclasses any previous cache timing sidechannel. Period. End of story.

          1. 3

            “I’m confused: is the purpose of this thread to stroke your own ego and claim you are smarter than Tom Ptacek and Colin Percival?”

            You’d be better off asking whether the purpose of Thomas’s dismissals of work that spotted and mitigated problems a decade or two ahead of him was about ego, or about some value to society in ignoring such work. He consistently dismisses any work I bring up in high-assurance security, about the TCSEC that produced highly-secure systems, and so on. Newcomers reading comments of highly-regarded people in security saying that TCSEC or A1-class techniques were useless or “just red tape” would (do) think those security professionals assessed that prior work, saw nothing secure came of it, and are now recommending against it. The truth is those “experts” have usually not read that prior work, are misrepresenting it (i.e. slandering good researchers), and/or their dismissals or misrepresentations contribute to prior problems re-appearing down the road with all the damage that brings.

            Those same people often hypocritically say, like Thomas in a recent thread on email encryption, that their security advice is about prioritizing avoiding damage to innocent parties. Yet, they’re willing to cause it by suppressing known-good techniques for… ego or social standing? That does piss me off when it happens in any field, esp INFOSEC. I do call it out. Instead of pure flamewars, I try to do so with clear arguments citing hard evidence that what I say is true, like landmark works talking about cache channels in the 1990’s. Interestingly enough, knocking out his and other people’s bullshit that way got me my early karma on Hacker News. People kept thanking me in email saying they’d do it but downvote mobs hit them after every dissenting comment. Happened to me, too, sometimes in seconds, but usually reversed later in the day. So, I stayed on it (still do) for various myths or fads on these forums needing peer review.

            As far as Colin goes, I told him I respected him, considered his work an independent rediscovery of the problem, thanked him for his FOSS work, and was clear I was knocking out misinformation (the 2005 claim) about when we could’ve spotted cache-based issues. Colin probably just didn’t read or see the early work, since the community he entered collectively ignores it. They’re not ignoring it for scientific reasons, given the prior work was the strongest INFOSEC ever produced, with methods that consistently outperformed the ad hoc stuff the mainstream security community pushes. After seeing that evidence, the only reason they’d collectively ignore or suppress it without qualifiers is social: politics, dominant egos, tribal signaling… something other than making us secure. I told Colin that people like him having access to that information earlier might help them solve problems earlier with even more effective designs than if they don’t have access to good prior work. It makes good researchers like him even better than they already are.

            So, I keep reinforcing the good, calling out the bad, and generally posting every piece of obscure research I can to folks that might benefit from it to facilitate serendipitous discovery across social or knowledge silos. I’m still working on better solutions to that problem but there’s group politics behind most that vary by group. Work in progress…

            “But… there is simply no comparison. This attack is pretty much hands down the most sophisticated attack I’ve ever seen in my life, and I’ve seen people break into TrustZone with a single null byte overflow. The complexity and sophistication of this attack greatly outclasses any previous cache timing sidechannel. Period. End of story.”

            I think you’re focusing too much on the developing-attacks part of my comment (or these events) rather than the part about how prior work would help spot attacks and defend against them. The people I told cpercival about, who identified the side channels, came up with methods that year to find them in models of software and hardware. There was even tooling for doing it with formal specs, e.g. the Gypsy Verification Environment on the software side, that might have been extended to hardware: the software method modeled state transitions, and hardware developers often used state models, too. The researchers were saying in 1992 that about every component leaked in those systems, with a need to develop leak-resistant versions of them (aka MLS-capable versions in their jargon). They said this was necessary so their VMM project running untrusted workloads side-by-side with secret ones wouldn’t allow secrets to leak to malicious apps using any of those components. Any of that sound familiar?

            So, while high-security and CompSci celebrated those works, mainstream security ignored all of it. Many even mocked how “impractical” they were for daring to find and fix root causes if there was a performance hit or you’d have to buy a non-Intel chip. Then, they rediscovered timing channels in caches, talking big like you are now about how smart the attacks or discoveries were that nobody or few had thought about. All the security forums were talking about it. Most still ignored the prior work and tech we were posting on the same problems to mitigate them at every level in hardware. Mainstream folks were making mitigations that were very tactically focused on each individual instance of leaks instead of root causes at the whole-system level. As they did that, some in CompSci started building on that prior work for hardware and software mixes.

            Since the mainstream researches piecemeal instead of whole-system, more of the same stuff is found in other components that a basic covert-channel analysis would’ve found quickly. They still refuse to learn from or use those techniques, as seen by the fact that most people following DEFCON, etc. have often never heard of them. People pushing old methods are pariahs of sorts. And now, another big problem is found that an info-flow analysis on a hardware model might have found (esp model-checking) or prevented by default, simply from B3/A1/EAL6/7 designs avoiding constructions they can’t exhaustively analyze (eg AAMP7G versus ARM micros). That last part was and still is a rule in high-assurance development that keeps paying off, where we assume it will screw up until we can rigorously show it can’t in all situations with as good a model as we can use. Security professionals often argued with me about that, too, with many justifications for not analyzing or containing stuff with high-assurance methods.

            In summary, you’re amazed by the fact that people avoiding proven techniques for analyzing hardware interactions for information-flow leaks later discovered, in performance-focused, info-sharing-oriented CPU’s, that…

            1. A shared component known to be a source of leaks without leak mitigation…

            2. One or more other components with little to no leak analysis that also have no mitigations…

            3. Somehow interact to create a leak someone didn’t see coming. Mitigations might even be costly, since 1 and 2 weren’t designed to do this from the beginning, against recommendations from the 1980’s for high-assurance design of any kind. “Can’t retrofit strong security,” we say.

            Simplified like I did with Ted’s example: a known-insecure component mixing with maybe-insecure components had an insecure result, against user expectations that it would be secure. You see why I’m not surprised, even if I don’t deny the attack itself is clever? I mean, who saw problems like that coming, past the half a dozen people publishing in 1990-1995 saying that we need covert-channel mitigations in the CPU, memory, I/O, kernels, networking, filesystems, and multi-user apps!? Mainstream security’s focus on highlighting clever attacks instead of root-cause defense is much like those who insist on using C, instead of memory-safe system languages, in apps that didn’t need it marveling at a dozen clever ways their systems are broken by people manipulating known-bad components into new constructions that break security. We might not have anticipated the specifics of the attacks, but they leverage known-unsafe primitives instead of known-good (or worth trying) techniques. Developers ignored those for “performance,” “popularity,” “we hate FFI’s,” etc. Leaks in hardware/software are like C exploits, with most builders telling us for decades it’s not worth their time to look for or prevent them. Then, they make exceptions for each individual attack that becomes massively popular or damaging, like recent ones. Then, they fix that attack, earn creds in blog posts or at conferences, and go back to whatever they were doing, ignoring root causes or whole systems. In rare, rare instances we do see something like Rust get popular addressing big root causes. Most aren’t addressed, though.

            And I bet most of them are still not pulling prior work on spotting all leakage in hardware, even now that I brought it up. They sure are talking a ton about how clever the new attack is, though, taking turns showing how much better each of them understands the recent reports, what they might do about just that attack (ignoring analyses of the whole chip), how many systems will fall, how much money will be lost, and so on. There you have more comments motivated by ego rather than security, since most aren’t contributing info to stop the next leaks.

            Those of us in high-assurance security are motivated to make secure, correct-by-construction systems, with ego coming in when we do it right, esp avoiding problems. It’s so hard to prove a negative against smart attackers (even temporarily) that it’s worth being proud of. That’s it. We’re obviously not trying to win popularity contests if we were droning on about covert channels since the 1980’s and CPU channels since the 1990’s to a crowd whose majority dismissed us every year for it, made exceptions for some on occasion (eg 2005), and then continued dismissing. If I wanted popularity or ego, I’d be talking… whatever’s popular at DEFCON, Black Hat, or (for money) cryptocurrencies.

            There is a minority that does listen. Even on recent posts, the votes indicate a lot of appreciation for sharing the prior work. I also put together the link above with some of the recent work on finding leaks in hardware. I also tell people regularly about simple, FOSS processors they can build and analyze themselves if Intel/AMD won’t make something to their standards. I enjoy providing people what they need to solve problems, once and for all wherever I can. It helps some of them avoid harm. I’m definitely proud and happy with doing that. I plan to do it better this year, though, since my prior style definitely needs improvement. Focus will be high-security marketing.

          1. 4

            I don’t fully understand the threat model. Presumably unprivileged processes cannot arbitrarily read another user’s process memory, and if they can, isn’t this just making the attack slightly harder? Do kernels not wipe physical memory when reallocating it to other processes?

            What does this defend against?

            1. 11

              I think people take it a bit too far, but the various threats include:

              1. Do a crypto op. Do an insecure op. An exploit finds the key leftover from previous crypto op.

              2. A variation of sorts of the above, the memory gets reused, leaks, oops.

              3. There’s some flavor of kernel bug that leaks memory, and you’d like to narrow the window of vulnerability.

              4. Suspend, hibernate, cold boot, etc.

              I think the threat model is basically adversary gets a snapshot of memory at some point, so you’d like it to be uninteresting.
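If the model is “adversary gets a snapshot of memory,” the basic mitigation is scrubbing secrets as soon as they’re no longer needed. A rough Python sketch of the idea (the `scrub` helper is hypothetical; real implementations do this in C with explicit_bzero()/SecureZeroMemory(), since a garbage-collected runtime can copy buffers behind your back):

```python
# Sketch: keep the secret in a mutable buffer and overwrite it in place once
# the operation is done, so a later memory snapshot sees only zeroes.

def scrub(buf: bytearray) -> None:
    """Overwrite a secret in place (threats 1 and 2 above: leftover/reused keys)."""
    for i in range(len(buf)):
        buf[i] = 0

key = bytearray(b"\x01" * 32)    # pretend this came from a KDF
# ... perform the crypto operation with `key` here ...
scrub(key)                       # done with it: wipe before the memory is reused
print(all(b == 0 for b in key))  # True
```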

              1. 6

                Also: process crashes, core dump gets written with secrets inside of it, adversary gets access to disk.

                1. 2

                  Suspend sounds like a good use case. I might have to patch some Go code I have.

                  1. 2

                    Maybe add “code ran in VM that moved” to 4 depending on whether whatever manages them overwrites memory before dropping a new one. This could become a bigger risk for platforms that use lightweight VM’s/containers with rapid launch and shutdown.

                  2. 2

                    It is, indeed, a lot of complexity for a threat model which most don’t need to worry about: naive RAM scrapers which they’d like to trip with the guard pages. A sophisticated attacker with remote code execution can easily sidestep such a defense, because the secrets are still sitting unencrypted in RAM.

                    A better approach would be to actually encrypt the sensitive information. This is particularly useful for cryptographic keys, because we can e.g. use a key-encrypting key (KEK) sitting in XMM registers to decrypt an encrypted data-encrypting key (DEK) into other XMM registers. That is to say, we can keep an encrypted copy of the DEK in memory, decrypt it into XMM registers when we want to use it, perform cryptographic operations with it, and then we never have to worry about the unencrypted version sitting around in memory in the first place.
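A toy sketch of that KEK/DEK envelope pattern (XOR stands in for the real cipher, and ordinary variables stand in for XMM registers; this only illustrates the data flow, not a hardened implementation):

```python
import os

kek = os.urandom(32)   # key-encrypting key; in the real scheme it lives in XMM registers
dek = os.urandom(32)   # data-encrypting key

def xor(a: bytes, b: bytes) -> bytes:
    """Toy stand-in for a real cipher like AES."""
    return bytes(x ^ y for x, y in zip(a, b))

wrapped_dek = xor(dek, kek)  # only this encrypted copy sits in long-lived memory

def with_dek(fn):
    """Unwrap the DEK transiently, use it, and drop the plaintext immediately."""
    plain = xor(wrapped_dek, kek)    # decrypt "into registers"
    try:
        return fn(plain)
    finally:
        del plain                    # plaintext copy is dropped right away

assert with_dek(lambda k: k) == dek  # round-trips back to the original DEK
```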

                    Or, if you’re really worried, use something like Intel SGX, or perform encryption using a separate physical device like a TPM, HSM, or Yubikey.

                    1. 2

                      I agree that encrypting secrets is the next logical step. I’ve been planning a scheme for a while now and it should hopefully land in the next major update, time permitting.

                  1. 4

                    Note this article is from August, and I really haven’t seen anything come of this, although I may have missed something.

                    1. 2

                      Yeah, I haven’t heard anything new since we published this. Also linked in that article is the 2016 Black Hat talk which (at least for me) was a really good SEP primer:

                      https://youtu.be/7UNeUT_sRos

                    1. 1

                      This is a particularly bad design for several reasons:

                      1. It allows voters to obtain their own vote. Several people have already covered this, but among other things this means voters can prove how they voted, and therefore sell their votes. This is unacceptable.
                      2. It allows the holder of the static key (i.e. the government) to see how you voted, which means you have lost the property of a “secret ballot”. At least as described, users are “sealing” their identities under a single X25519 static key, then publishing the results to a public “blockchain”. This lacks forward secrecy and is therefore something of a terrifying single point of failure: if ever this static key were to be compromised, an attacker could decrypt everyone’s identities and see how they voted.
                      3. It still doesn’t allow voters to verify their votes were actually counted: voters are effectively sealing a set of weak credentials under an ephemeral key and a static key they don’t hold. Lacking access to the ephemeral key, they can’t decrypt the message “E” and see if it actually maps back to their identity. A malicious system could show several users who voted the same way the same “E” value, and they have no way to prove the vote is actually theirs.

                      That’s why verifiable voting systems need to be built on zero knowledge proofs and/or (partially) homomorphic encryption. Ignoring the practicalities of actually deploying a system like this, I think what you actually want looks a little more like this:

                      1. Use Identity-Based Encryption to link a set of weak credentials/“voter ID” to a set of keys. This prevents the voting machine from generating duplicate receipts for people who voted the same way: your identity and the fact you voted is strongly linked to your credentials, not just to an ephemeral key
                      2. Use zero knowledge proofs to enable voters to check their vote is included in the corpus, without being able to see what it actually was
                      3. Use partially homomorphic encryption to encrypt individual votes and combine them together indivisibly as they’re added to a larger and larger corpus, but still allow totals to be calculated from an input corpus

                      I think much of this can be accomplished with pairings-based cryptography and algebraic circuits, at least until large quantum computers are built.
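To make point 3 above concrete, here’s a minimal Paillier-style sketch of additively homomorphic tallying (toy parameters, and none of the zero-knowledge proofs that each ballot encrypts 0 or 1, so this illustrates only the “combine ciphertexts, decrypt just the total” property, not a usable voting scheme):

```python
# Toy Paillier: multiplying ciphertexts adds the underlying plaintexts,
# so a tally can be computed without decrypting any individual vote.
import math, random

p, q = 1_000_003, 1_000_033          # toy primes; real keys are ~1024 bits each
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

ballots = [1, 0, 1, 1, 0]            # 1 = yes, 0 = no
tally = 1
for b in ballots:
    tally = (tally * enc(b)) % n2    # multiply ciphertexts = add plaintexts

print(dec(tally))  # 3: the total, with no single vote ever decrypted
```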

                      1. 20

                        This post contains the same 3 complaints you’ll routinely see about Signal: use of phone numbers to identify contacts, use of Google Cloud Messaging, and lack of federation. Granted, these are all valid complaints, but they are essentially true of any of the popular “secure” messengers which support asynchronous messaging between participants who may or may not be online. (Emphasis on popular here: there are niche messengers which solve one or more of the aforementioned problems, often to the detriment of user experience.)

                        It then goes on to… prescribe nothing? Instead we get this:

                        The big question now, as also said by @shiromarieke on Twitter, is what post-Signal tool we want to use. I don’t know the answer to that question yet, but I will lay out my minimum requirements of such a piece of software here.

                        The rest of the piece takes on an “I want a pony” air of wishful thinking. The oddly ironic part is that this sort of longing for better encrypted messaging is exactly what people were doing in the decades prior to Signal, despite the author describing his train of thought as “post-Signal”. Those who forget the past are doomed to repeat it, I guess.

                        Unfortunately just wishing for something doesn’t make it true, and in the meantime Signal is real and a best-in-class tool for secure messaging use cases.

                        Since the author didn’t, I’ll go ahead and give a shout out to Matrix here:

                        https://matrix.org/

                        I think it provides much of what the author wants. But I probably wouldn’t recommend it over Signal, yet: it’s a work-in-progress and doesn’t necessarily cover all of the Signal use cases yet.

                        All that said, I found this to be a fairly substance-free post. tl;dr: Signal isn’t perfect? Cry me a river. Perfect is the enemy of good.

                        1. 9

                          Yeah, I mean, this is somebody who trains journalists in secure comms? So what does that training consist of now? “I recommend you not be a journalist.”?

                          1. 1

                            I don’t understand this comment. You say that the post contains three valid complaints, but that he shouldn’t complain because it’s still better than some other tool. With WhatsApp using Signal’s encryption now, I can’t see much difference between the two: you are still in a walled garden, with your metadata going through Google. I don’t get why you ridicule these concerns.

                            I second the matrix recommendation, though. It’s everything Signal promised to be.

                            1. 1

                              I didn’t write this comment, but I think I understand it:

                              Signal is probably the best we have right now in “production quality” state, so it’s the best option we have, but we could do better. In that case it might be better to support the “not yet production quality” stuff.

                              Or super short and offensively said: Shut up and hack! (as the OpenBSD people would say)

                              Mainly because most people are aware of it anyway; but that doesn’t make it better, and not recommending the best available option seems like kind of a strange move - at least to me.

                              Like I said, that’s my understanding. I don’t know for sure if that’s what bascule actually meant. But maybe hearing it through someone else’s words helps understanding. :)

                          1. 1

                            Can someone elaborate when and why it would make sense to handle this logic within the JSON document? The article references “Its primary intended use is in cryptographic authentication contexts”. It’s not clear to me why you would need an alternative to JSON for this use case.

                            1. 1

                              It’s needed to cleanly disambiguate binary data from unicode strings in a content-aware hash setting, provided you want the same digests to be computed for (T)JSON data and e.g. protos. This is particularly important if your document contains fields like cryptographic hashes or public keys. Otherwise, to verify the protos you’d have to round trip all the binary fields to Base64(url) to compute their hashes.
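For illustration, here’s a sketch of why the tagging matters for digests (the `b64:` prefix is just an illustrative tag, not necessarily TJSON’s exact syntax): a content-aware hasher can decode tagged binary fields back to raw bytes, so the digest matches one computed over the same bytes in, say, a proto.

```python
import base64, hashlib

raw = b"\x00\x01\x02\xff"   # e.g. a public key as it appears in a proto

def hash_field(value: str) -> str:
    """Hash the canonical byte content of a tagged JSON string value."""
    if value.startswith("b64:"):                 # tagged binary data
        b64 = value[len("b64:"):]
        b64 += "=" * (-len(b64) % 4)             # restore stripped padding
        data = base64.urlsafe_b64decode(b64)
    else:                                        # ordinary unicode string
        data = value.encode("utf-8")
    return hashlib.sha256(data).hexdigest()

tagged = "b64:" + base64.urlsafe_b64encode(raw).decode().rstrip("=")
# Same digest as hashing the raw bytes directly, despite the JSON encoding:
print(hash_field(tagged) == hashlib.sha256(raw).hexdigest())  # True
```

Without the tag, the hasher would have no principled way to know whether a string is text or base64-wrapped bytes, and the digests would diverge.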

                            1. 1

                              Neat! Inline tagging seems like it might be useful in some contexts.

                              I wonder if you could express Hexadecimal, Base32, etc. with JSON Schema instead (example), though, which also has support for required fields and conditional dependencies, and is implemented in several languages.

                              1. 1

                                The goal of TJSON is to be a self-describing, schema-free format.

                                1. 1

                                  I don’t get why the type should be self-describing; it looks like that introduces a lot of clutter.

                              1. 1

                                Note that “The HTTP Gem” (a.k.a. http.rb) also natively implements HTTP in Ruby, using the http_parser.rb gem’s native code parsers to parse the response:

                                https://github.com/httprb/http

                                1. 9

                                  Problem: Grunt
                                  Solution: Gulp
                                  Problem: Gulp

                                  Haha.

                                    It was pretty funny, although it omits (voluntarily, I hope) some solutions (like “Generated code is hard to debug” -> “Source maps”).

                                  If you think about it we completely created some of the “problems” ourselves: Why would Javascript need to run outside the browser? On mobile, even? And what about videogames?

                                  1. 6

                                    Why would Javascript need to run outside the browser?

                                    Indeed. I can think of a couple problems solved, but I can also think of a few alternative solutions.

                                    1. 4

                                      There’s a lot to be said for the desire to have a single programming language that can be utilized everywhere, and that solves most problems in the same elegant way. I think it’s an admirable goal, which, I think, JavaScript has shown isn’t completely crazy.

                                        I wish it were the case that a (in my opinion) better language were showing this, though, instead of one designed and implemented in about 2 weeks, which we still pretty much have to remain compatible with - or did, at least as of a few years ago. (My frontend skills have really atrophied over the years.)

                                      1. 8

                                        There’s a lot to be said for the desire to have a single programming language that can be utilized everywhere, and that solves most problems in the same elegant way.

                                        But this is a fool’s errand. It’s not a new fool’s errand, either; it’s the same damn nonsense being repeated over and over again. My thing about Javascript is less about the (terrible) language than about the culture of ahistorical, “this time it’s different!” Silicon Valley utopian bafflegab that oozes down from the useful idiots in finance to the young kids who are taught that they have no need to learn anything, still less any history, as the fundamental imperative now is “move fast and break stuff.”

                                        1. 2

                                          There isn’t even a single spoken language used universally, and we have a few thousand years head start on those! I can’t imagine a single programming language accomplishing that sufficiently to satisfy everyone.

                                        2. 4

                                          I’ve been doing a bit of Clojurescript / Clojure lately, and that is a vastly different (and imo vastly superior) language. With some of the new stuff coming, including a CLJS compiler that can run in Node.js without a JVM, I think you could see Clojure/script make a bid to be the language you’re describing.

                                          1. 3

                                            Problem: Having to deal with multiple languages to develop a web application

                                            Solution: Transpile a good language to JavaScript so you don’t have to deal with JavaScript in the browser!

                                            Problem: That doesn’t actually make the JavaScript go away

                                            Solution: WebAssembly!

                                            Problem: We’re still in a browser

                                            Yeah I could do this all day…

                                      1. 14

                                        I know the post is satire, but I can’t help but kick my pet peeve.

                                        No, you still need virtualization, because containers don’t provide a full security story just yet. So if you want to run anything in a multi-tenant environment, you need to make sure you can’t escape the sandbox.

                                        The unfortunate thing is that jails on FreeBSD and Zones on Solaris/Illumos have long provided a more secure environment for containers than Linux has, yet Linux is winning the public mindshare while mostly playing catch-up. SmartOS (an Illumos derivative) actually runs its virtualisation solution inside a zone because the zone provides better security. With LX-branded zones you can even run Linux executables in a container on Illumos, without the cost of virtualisation.

                                        1. 3

                                          Would you mind going into the difference between jails, zones, and chroot or whatever under Linux?

                                          I was at a meetup last night talking about Docker, and I’m not really that convinced.

                                          1. 3

                                            Security was a design principle of Solaris Zones. In Docker it’s an afterthought.

                                            1. 1

                                              Since Docker uses the Linux kernel’s containerization primitives, security issues are probably not with Docker, but with the kernel, no?

                                            2. 1

                                              I think the point about Docker is portable instances rather than being an ideal jail/containerization.

                                              I don’t know how to box a FreeBSD jail up into a portable image that can be redeployed at will. Do you? If you think you could just hack that up, you’re now facing all the problems Docker/Rocket solve.

                                              1. 8

                                                If you use ezjail on FreeBSD, it is actually pretty easy to do ezjail-admin archive, which spits out a tarball that can be copied to another server where you would run ezjail-admin restore or ezjail-admin create -a archive. All the automatic service registration would indeed need to be “hacked up” though.
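                                                For illustration, the workflow looks roughly like this (the jail name and archive filename are hypothetical placeholders; check the ezjail-admin man page for the exact archive naming on your system):

```shell
# On the source host: package the jail into a tarball
ezjail-admin archive web1

# Copy the resulting tarball over, then on the destination host either
# restore it in place:
ezjail-admin restore web1-archive.tar.gz

# ...or create a fresh jail from the archive:
ezjail-admin create -a web1-archive.tar.gz web1
```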

                                                1. 3

                                                  That’s pretty cool :)

                                                  Thank you for sharing this!

                                          1. 4

                                            Note that chacha20+poly1305 is slower than AES-NI + CLMUL-accelerated GCM on Intel chips…

                                            Don’t get me wrong, I love djb, but I’m not sure this is the wisest default, especially if you find yourself scping large files around frequently.

                                            1. 1

                                              IME chacha20+poly1305 is the fastest of OpenSSH 6.8’s default ciphers on hardware without AES acceleration (i.e. old, or embedded). Of course, choosing a different cipher on the command line is trivial, and adding more to the server config for internal use (arcfour128) is also pretty easy.
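                                              To illustrate both (hostnames and filenames are placeholders; arcfour128 appears only because it was mentioned above, it is a weak cipher):

```shell
# One-off: pick a cipher for a single transfer with scp's -c flag
scp -c aes128-gcm@openssh.com bigfile.iso user@host:/tmp/

# List what your client supports:
ssh -Q cipher

# Server-side: preferred ciphers, first match wins, in /etc/ssh/sshd_config
Ciphers chacha20-poly1305@openssh.com,aes128-gcm@openssh.com,aes128-ctr
```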

                                              1. 1

                                                AES-NI and CLMUL have been in Intel chips since ~2010 (Westmere) and AMD chips since ~2011 (Bulldozer). For server software, at least, they’re nearly ubiquitous.

                                                1. 3

                                                  Unless your servers are virtualized and don’t have AES-NI exposed, which until recently was all of AWS.

                                                  1. 1

                                                    Well, using older chips is normal, but you can’t use older AWS; it’s not available. At the moment, all of AWS supports AES-NI.

                                              1. 2

                                                Examples about cryptography where symmetric crypto is unauthenticated and RSA is used without specifying a padding mode do not inspire confidence.

                                                Things I’d like to know:

                                                1) How do I use AES-GCM?

                                                2) How do I use RSA with OAEP?

                                                1. 7

                                                  Some good ideas here. Personally, I don’t really understand why you would want both objects and actors. The documentation drops off a cliff at some point, so it looks like it’s not quite done.

                                                  1. 3

                                                    It doesn’t really argue explicitly for the combination, but one of their papers, “Deny capabilities for safe, fast actors” uses objects as the unit onto which a capability model is attached, in order to allow mutability while maintaining safety from data races.

                                                    1. 1

                                                      Interesting. I don’t have time to read the paper now, but given that an object is just a tuple, I find it suspicious that one would need objects for that.

                                                      1. 1

                                                        The word “object” in capability-based systems isn’t necessarily 1:1 with OOP. They’re often event loops, and can also be described as “services” or “actors”. In the case of Pony it’s using the term “actor” for objects with asynchronous behaviors.

                                                        I wouldn’t get too caught up on it.

                                                    2. 1

                                                      The documentation drops off a cliff at some point

                                                      Specifically, the tutorial is blank after the subsection Expressions – Sugar. The sections of the tutorial with content are Getting started, Types, Capabilities, and most of Expressions. The most important missing sections are Actors, Garbage collection and C-FFI.

                                                      1. 1

                                                        Dunno the justification for objects; but how do you do lockless cooperative multiprocessing that scales seamlessly to many, many cores without actors?

                                                        This is the first time I’ve heard of it, but it seems really interesting. I’ve only just read the readme, mind. I have a couple of years’ experience with Akka actors (for Scala / Java) and it has been great. I find actor code a lot easier to reason about (and test!) than lock-based code.

                                                        1. 2

                                                          I can’t speak for @apy but I read the comment as meaning they don’t understand the need for the combination of the two, as in objects && actors.

                                                          1. 2

                                                            @djm is correct, I’m not saying why objects or actors, but why both of them.

                                                        1. 0

                                                          All of the pains of Go’s shared mutable state concurrency model in a memory unsafe language? Sounds great!

                                                          1. 4

                                                            There’s no pain once you understand that you should use channels instead of accessing shared data. As they put it in the docs, “don’t communicate by sharing memory; share memory by communicating”: https://golang.org/doc/codewalk/sharemem/

                                                            1. 4

                                                              Go still lets you unsafely share mutable state. What you’re describing are merely best practices, the “Doctor, it hurts when I X!” “So don’t X!” approach to concurrency. But without real guarantees from the language, it’s possible you or a library you’re using may slip a pointer into a message somewhere.

                                                              Go added a data race detector for this reason, but it can only (sometimes) detect races as they actually happen, i.e. in production, and often only under unusual circumstances like high load (i.e. exactly when you don’t want data races to happen).

                                                              1. 1

                                                                Enforceable by value semantics right up until you put an array/slice or map in your struct! D'oh.