1. 27

    Sometimes I like to think that I know how computers work, and then I read something written by someone who actually does and I’m humbled most completely.

    1. 11

      A lot of this complexity seems down to the way Windows works, though. As a Linux user, the amount of somewhat confusing/crufty stuff going on in a typical Windows install boggles the mind; it’s almost as bad as Emacs.

      1. 11

        I guess to me it doesn’t feel like there’s much Windows-specific complexity here, just a generally complex issue: a bug in v8’s sandboxed runtime and how it interacts with low-level OS-provided virtual memory protection and specific lock contention behavior, which only expressed itself by happenstance for the OP.

        Some of this stuff just feels like irreducible complexity, though my lack of familiarity with Windowsisms (function naming style, non-fair locks, etc.) probably doesn’t help there.

        1. 5

          How does CFG work with chrome on linux?

          1.  

            Do you mean CFI?

            CFG is MS’s Control Flow Guard, it’s a combination of compile-time instrumentation from MSVC and runtime integration with the OS. CFI on Linux (via clang/LLVM), in contrast, is entirely compile time AFAIK, with basically no runtime support.

            See:

            for more details on the differences.
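            To make “entirely compile time” concrete, here’s a minimal sketch (my own illustrative program, not from either project) of the kind of indirect-call type confusion clang’s CFI catches. The flags are the documented ones; clang’s CFI requires LTO:

                /* cfi_demo.c -- illustrative sketch only.
                 * Build: clang -flto -fvisibility=hidden -fsanitize=cfi cfi_demo.c
                 * The check is inserted at compile time; no OS runtime is involved. */
                #include <stdio.h>

                static void greet(void) { puts("hi"); }

                int main(void) {
                    /* Indirect call through a function pointer of the wrong type:
                     * the inlined CFI check sees the mismatch and aborts. */
                    void (*fp)(int) = (void (*)(int))greet;
                    fp(42);
                    return 0;
                }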

            1.  

              Yes and no. :) The linux CFI implementation doesn’t include the jit protection feature in CFG that’s implicated in the bug, so I’m not sure it’s fair to characterize this as “cruft”.

              1.  

                The CFI implementation in llvm isn’t a “linux CFI implementation.” :)

                As OpenBSD moves towards llvm on all architectures, it can take advantage of CFI, just as HardenedBSD already does. :)

              2.  

                llvm’s implementation of CFI does have the beginnings of a runtime support library (libclang_rt.cfi). HardenedBSD is working on integrating Cross-DSO CFI from llvm, which is what uses the support library.

            2. 4

              Linux just has its own weirdnesses in other places.

              That said, memory management seems to be a source of strange behaviour regardless of OS.

          1. 46

            It’s pretty subtle, but I think it’s important to notice that here Rust is being fundamentally held to a higher bar (which is fair, to be clear).

            The C vulnerabilities basically all materialize on specific user input.

            The Rust vulnerabilities materialize only if you use a particular API in a particular way. (This is also the case for a lot of the Python RCEs: the only way to trigger them is to execute arbitrary Python code.)

            That’s a big advancement! Unless they’re in common patterns, the vulnerable code won’t actually be exploitable.

            I think this observation is supported by real world evidence. Consider Domato, which is a DOM fuzzer for browsers. When Project Zero folks ran it for 100,000,000 iterations on each major browser, it found vulnerabilities in all of them. So far, when it’s been run against Servo (written in Rust), it finds a bunch of panics, but no vulnerabilities as far as I can tell: https://github.com/servo/servo/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+domato

            Rust should continue to aim higher: less need for unsafe, better abstractions for writing safe unsafe code, and clearer semantics for unsafe code. But I think it’s important to be clear that Rust has already significantly raised the bar.

            1. 10

              The security researcher also recommended we consider using GPG signing for Homebrew/homebrew-core. The Homebrew project leadership committee took a vote on this and it was rejected non-unanimously due to workflow concerns.

              This is incredibly sad and makes me wonder what part of the workflow would have been impacted. Thanks to gpg-agent, Git automatically signs the commits I make once I’ve entered my passphrase.

              1. 3

                They have a bot which commits hashes for updated binary artifacts. If all commits needed to be signed, it’d need an active key, and now you have a GPG key on the Jenkins server, leaving you no better off.

                1. 2

                  But gpg cannot work with multiple smartcards at the same time, so maybe that’s a reason for some people. Either way, there are simpler ways to deal with signing than gpg.

                  1. 1

                    GPG signing wouldn’t have fixed this vulnerability as such, since presumably the same people not thinking about the visibility of the bot’s token would have equally failed to think about the visibility of the bot’s hypothetical private key.

                  1. 3

                    Just finished Raven Rock, by Garrett Graff. It’s the history of the US’s continuity of government/continuity of operations/continuity of the presidency plans (primarily in the context of nuclear war), from Truman through the Obama Administration.

                    It’s a fascinating combination of politics, technology, and social issues. If you’ve ever found the “football” or the “gold card” to be a neat idea, you’ll like this book.

                    And the opening chapter is the perfect story to drag you in: someone doing radio ops/control tower work for Air Force One, on the day Nixon resigns, hearing that the plane is changing call sign from Air Force One to USAF 27000 at 12:00:30pm, and only finding out what the deal was when he gets home and sees the news.

                    1. 1

                      There is no way in heck that Linus will merge some DIY home-rolled crypto code into the kernel.

                      1. 11

                        It seems like you may not recognize the author. I would typically agree with you on first glance, but given who it is and what it is I wouldn’t be surprised if it got merged.

                        1. 8

                          That’s a good point, but it’s missing a key detail. I’ll add that the author did WireGuard, which has had good results in both formal verification and code review.

                        2. 7

                          Where else is kernel crypto code rolled?

                            1. 2

                              High praise from Linus!

                            2. 2

                              Why not? How would Linus even know if some crypto code was DIY nonsense?

                              (The subtext of these commits from Jason is that the existing kernel crypto APIs are not particularly good, IMO.)

                            1. 3

                              Now we’re just waiting on Safari…

                              I’m extremely frustrated Apple is not being more proactive in adding support for WebAuthn/U2F. Phishing is a serious problem, and U2F is a solution to it that works (as opposed to phishing tests and telling employees not to click on links, both of which demonstrably do not work). iOS’s lack of support for NFC for WebAuthn is a massive hindrance to adoption (BLE is a significantly worse UX).

                              1. 4

                                I personally find this API fairly frustrating. This call can have three different semantics, depending on the values and context in which you call it. This contributes to complexity.

                                I notice that in the userspace diff, the “lock unveil” functionality is never used, even in cases where unveil is added to the pledge string. As far as I understand it, this means that if an attacker obtained code execution, they’d simply be able to undo the unveil with unveil("/", "rwx"). That’s unintuitive and likely to be a regular source of programming errors.
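                                For reference, the lock mechanism itself is tiny; a minimal sketch of the intended init-then-lock pattern on OpenBSD (the paths here are made up):

                                    #include <unistd.h>
                                    #include <err.h>

                                    int main(void) {
                                        /* expose only the paths this program needs */
                                        if (unveil("/var/www/htdocs", "r") == -1)
                                            err(1, "unveil");
                                        /* lock: forbid any further unveil calls */
                                        if (unveil(NULL, NULL) == -1)
                                            err(1, "unveil");
                                        /* dropping the "unveil" promise locks it too */
                                        if (pledge("stdio rpath", NULL) == -1)
                                            err(1, "pledge");
                                        /* ... */
                                        return 0;
                                    }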

                                Grabbing some comments I made on IRC last night on how I’d pursue this API:

                                22:10:09 <Alex_Gaynor> If I was doing this API, I'd probably do `sandbox_context *sandbox_context_create(void)` and then a bunch of `sandbox_context_add_X(sandbox_context *, ...)` with appropriate signatures, and then a `sandbox_context_apply(sandbox_context *)` and basically a default `sandbox_context` had no permissions, and then you can add back whatever you want, and calling `sandbox_apply` a second time on a process killed the process or something
                                22:10:34 <Alex_Gaynor> (Or maybe was allowed, as long as the permissions were a strict subset of what was already applied)
                                22:12:33 <Alex_Gaynor> Oh, and they should add a platform-specific `posix_spawn_...` thing to take a `sandbox_context` so that it's applied right at `exec`, before any user code runs.
                                22:13:23 <Alex_Gaynor> Basically the two properties I've found useful in sandboxing are: a) It should be extremely easy to see what capabilities your process has, you want them all in one place, and defaulted to "nothing" so basically the permissions are what you have written down, b) It should be extremely easy to draw a perimeter around what your process already does, and slowly wittle it down by basically deleteing "adds".
                                
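                                Spelled out as a header, that proposal would look something like this (every name here is hypothetical, just restating the IRC sketch):

                                    #include <spawn.h>

                                    /* Opaque capability set; starts with no permissions, so
                                     * the process gets exactly what you have written down. */
                                    typedef struct sandbox_context sandbox_context;

                                    sandbox_context *sandbox_context_create(void);
                                    int sandbox_context_add_path(sandbox_context *, const char *path,
                                                                 const char *perms);
                                    int sandbox_context_add_net(sandbox_context *, int domain, int type);
                                    int sandbox_context_apply(sandbox_context *);
                                        /* a second apply kills the process, or is allowed only
                                         * for a strict subset of what was already applied */

                                    /* platform-specific hook so the sandbox is applied right at
                                     * exec, before any user code runs */
                                    int posix_spawnattr_setsandbox_np(posix_spawnattr_t *,
                                                                      sandbox_context *);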
                                1. 2

                                  I notice that in the userspace diff, the “lock unveil” functionality is never used, even in cases where unveil is added to the pledge string.

                                  I only saw two or three diffs where it isn’t clear if the unveil pledge is later revoked. All others that add unveil to pledge also have a pledge without unveil soon after the unveil calls. It’s possible the two or three cases also have it but it’s just not visible in the diff.

                                  So in these programs, that attack does not work unless you get your RCE during the initialization phase.

                                  Even if unveil was never locked, it can still protect against all-too-common path traversal style bugs (especially in web crapps) that leak data without RCE.

                                1. 2

                                  So, I only took a skim, but I started with the advice for strings: https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152038

                                  The advice here basically just says “don’t have a bug”. Don’t pass non-nul terminated strings to functions that want a nul-terminated string!

                                  A coding standard should identify entire practices to adopt or avoid so that whole classes of bugs can’t happen. For example: never use nul-terminated strings, as they are too error-prone! Simply giving you example bug types and telling you not to write them isn’t particularly useful.
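                                  To illustrate the kind of practice-level advice I mean, here’s a minimal counted-string sketch (hypothetical type, error handling kept minimal): the length travels with the data, so no callee can misinterpret an unterminated buffer.

                                      #include <stdlib.h>
                                      #include <string.h>

                                      /* length travels with the data; no terminator to forget */
                                      typedef struct {
                                          size_t len;
                                          char *data; /* not necessarily nul-terminated */
                                      } str;

                                      static str str_from(const char *s) {
                                          str out;
                                          out.len = strlen(s);
                                          out.data = malloc(out.len);
                                          if (out.data != NULL)
                                              memcpy(out.data, s, out.len);
                                          else
                                              out.len = 0; /* allocation failed: empty string */
                                          return out;
                                      }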

                                  (Obligatorily: a huge number of the bug-classes described here are basically unique to C/memory-unsafe programming languages. Telling people “please stop writing bugs” is a losing strategy, even with all the static and dynamic analysis in the world. We can’t produce bugs en masse and then de-bug our way out of it, we need to adopt programming languages that prevent the bugs in the first place.)

                                  1. 1

                                    Hmm, is this wise? The urban legend has always been that ASAN has big security holes.

                                    http://seclists.org/oss-sec/2016/q1/363

                                    1. 4

                                      The major vulnerability described there seems to be specific to setuid binaries, which Firefox is not.

                                    1. 8

                                      Certificate appears to have expired ~3 months ago.

                                      If anyone is involved with mruby, or knows folks who are, perhaps you could let them know.

                                      1. 1

                                        I happen to know the person behind mruby.sh. Done, thanks :). (Person reacted, will probably be fixed soonish)

                                      1. 9

                                        Many of the author’s experiences speaking with senior government officials match my own.

                                        However, there’s one element that I think is very easily lost in this conversation, and which I want to highlight: there is no group I spend more time trying to convince of the importance of security than other software engineers.

                                        Software engineers are the only group of people I’ve ever had push back when I say we desperately need to move to memory safe programming languages. All manner of non-engineers, when I’ve explained the damages wrought by C/C++, and how nearly every mass-vulnerability they know about has a shared root cause, generally understand why this is an important problem and want to discuss ideas about how to resolve it.

                                        Engineers complain to me that rewriting things is hard, and besides if you’re disciplined in writing C and use sanitizers and fuzzers you’ll be ok. Rust isn’t ergonomic enough, and we’ve got a really good hiring pipeline for C++ engineers.

                                        If we want to build software safety into everything we do, we need to get engineers on board, because they’re the obstacle.

                                        1. 11

                                          People don’t even use sanitizers and fuzzers, which are literally 1000x less effort than a rewrite, so I’m not sure why you would expect them to rewrite in Rust.

                                          As far as I can tell, CloudFlare’s CloudBleed bug would have been found if they compiled with ASAN and fed about 100 HTML pages into it. You don’t even have to install anything; it’s built right into your compiler! (both gcc and Clang)
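                                          To show how low the bar is, here’s a toy example (mine, not CloudFlare’s code) that ASAN flags on the very first run:

                                              /* overflow.c -- build: cc -g -fsanitize=address overflow.c
                                               * works out of the box with both gcc and clang */
                                              #include <stdlib.h>
                                              #include <string.h>

                                              int main(void) {
                                                  char *buf = malloc(8);
                                                  memset(buf, 'A', 9); /* one byte past the end:
                                                                          heap-buffer-overflow report */
                                                  free(buf);
                                                  return 0;
                                              }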

                                          I also don’t agree that “nearly every mass vulnerability has a shared root cause”. For example, you could have written ShellShock in Rust, Python, or any other language. It’s basically a “self shell-code injection” and has very little to do with memory safety (despite a number of people being confused by this.)

                                          The core problem is the sheer complexity and number of lines of unaudited code, and the fact that core software like bash has exactly one maintainer. There are actually too many people trying to learn Rust and too few people maintaining software that everybody actually uses.

                                          In some sense, Rust can make things worse, because it leads to more source code. We already have memory-safe languages: Python, Ruby, JavaScript, Java, C#, Erlang, Clojure, OCaml, etc.

                                          Software engineers should definitely spend more time on security, and need to be educated more. But the jump to Rust is a non-sequitur. Rust is great for kernels where the above languages don’t work, and where C and C++ are too unsafe. But kernels are only a part of the software landscape, and they don’t contain the majority of security bugs.

                                          I would guess that most data breaches these days have nothing to do with memory safety, and have more to do with bugs similar to the ones in the OWASP top 10 (e.g. XSS, etc.)

                                          https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf


                                          Edit: as another example, Mirai has nothing to do with memory safety:

                                          https://en.wikipedia.org/wiki/Mirai_(malware)

                                          All it does is try default passwords, which gives you some idea of where the “bar” is. Rewriting software in Rust has nothing to do with that, and will actually hurt because it takes effort and mindshare away from solutions with a better cost/benefit ratio. And don’t get me wrong, I think Rust has its uses. I just see people overstating them quite frequently, with the “why don’t more people get Rust?” type of attitude.

                                          1. 2

                                            There were languages like Opa that tried to address what happened on web app side. They got ignored just like people ignore safety in C. Apathy is the greatest enemy of security. It’s another reason we’re pushing the memory-safe, higher-level languages, though, with libraries for stuff likely to be security-critical. The apathetic programmers do less damage on average that way. Things that were code injections become denial of service. That’s an improvement.

                                          2. 2

                                            Not only software engineers: almost the entire IT industry has buried its head in the sand and is trying desperately hard to hide from the problem, because “security is too hard”. We are pulling teeth to get people to even do the minimal upgrades to things. I recently had a software vendor refusing to support anything other than TLS 1.0. After many exchanges back and forth, including an article from Microsoft (and basically every other sane person) saying they were dropping all support of older TLS protocols because of their insecurity, they finally said, OK, we will look into it. I’m sure we all have stories like this.

                                            If you can’t even bother to take the minimum of steps to upgrade your security stack after more than a decade (TLS 1.0 was released in 1999, and TLS 1.2 is almost exactly a decade old now) because it’s “too hard”, trying to get people to move off of memory-unsafe languages like C/C++ is a non-starter.

                                            But I agree with you, and the author.

                                            1. 2

                                              I would like to use TLS 1.3 for an existing product. It’s in C and Lua. The current system is network driven using select() (or poll() or epoll() depending upon the platform). The trouble I’m having is finding a library that is easy, or even a bit complicated but sane, to use. The evented nature means I am notified when data comes in, and I want to feed this to the TLS library instead of having the TLS library manage the sockets for me. But the documentation is dense, the tutorials only cover blocking calls, and that’s when they’re readable! Couple this with the whole “don’t you even #$@#$# think of implementing crypto” that is screamed from the rooftops and no wonder software engineers steer away from this crap.

                                              I want a crypto library that just handles the crypto stuff. Don’t do the network, I already have a framework for that. I just need a way to feed data into it, and get data out of it, and tell me if the certificate is good or not. That’s all I’m looking for.
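                                              For what it’s worth, OpenSSL can be driven exactly that way with memory BIOs: you keep the socket, and only shuttle bytes in and out. A rough sketch (error handling elided, names mine):

                                                  #include <openssl/ssl.h>

                                                  typedef struct {
                                                      SSL *ssl;
                                                      BIO *wire; /* our half: ciphertext in/out */
                                                  } tls_conn;

                                                  static void tls_client_init(tls_conn *c, SSL_CTX *ctx) {
                                                      BIO *internal;
                                                      c->ssl = SSL_new(ctx);
                                                      BIO_new_bio_pair(&internal, 0, &c->wire, 0);
                                                      SSL_set_bio(c->ssl, internal, internal);
                                                      SSL_set_connect_state(c->ssl);
                                                  }

                                                  /* feed ciphertext from your event loop; <=0 means
                                                   * "need more data" (check SSL_get_error) */
                                                  static int tls_feed(tls_conn *c, const void *in, int n,
                                                                      void *plain, int cap) {
                                                      BIO_write(c->wire, in, n);
                                                      return SSL_read(c->ssl, plain, cap);
                                                  }

                                                  /* drain ciphertext that must go out on your socket */
                                                  static int tls_drain(tls_conn *c, void *out, int cap) {
                                                      if (BIO_ctrl_pending(c->wire) == 0)
                                                          return 0;
                                                      return BIO_read(c->wire, out, cap);
                                                  }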

                                              1. 2

                                                OpenBSD’s libtls.

                                                1. 2

                                                  TLS 1.3 is not quite ready for production use unless you are an early adopter like Cloudflare. Easy-to-use, well-reviewed APIs are not there yet.

                                                  Crypto libraries: OpenBSD’s libtls like @kristapsdz mentioned, or libsodium/NaCl, or OpenSSL. If it’s just for your internal connections and you don’t actually need TLS, just talking to libsodium or NaCl for an encrypted stream of bytes is probably your best bet, using XSalsa20+Poly1305. See: https://latacora.singles/2018/04/03/cryptographic-right-answers.html
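                                                  For the non-TLS case, the libsodium version really is only a few lines; a minimal sketch (assumes both sides already share key, and the nonce must never repeat for a given key):

                                                      /* secretbox.c -- build: cc secretbox.c -lsodium
                                                       * XSalsa20+Poly1305 authenticated encryption */
                                                      #include <sodium.h>
                                                      #include <string.h>

                                                      int main(void) {
                                                          if (sodium_init() < 0) return 1;

                                                          unsigned char key[crypto_secretbox_KEYBYTES];
                                                          unsigned char nonce[crypto_secretbox_NONCEBYTES];
                                                          crypto_secretbox_keygen(key);         /* in real use: pre-shared */
                                                          randombytes_buf(nonce, sizeof nonce); /* unique per message */

                                                          const unsigned char msg[] = "hello";
                                                          unsigned char boxed[crypto_secretbox_MACBYTES + sizeof msg];
                                                          crypto_secretbox_easy(boxed, msg, sizeof msg, nonce, key);

                                                          unsigned char opened[sizeof msg];
                                                          if (crypto_secretbox_open_easy(opened, boxed,
                                                                                         sizeof boxed, nonce, key) != 0)
                                                              return 1; /* forged or corrupted */
                                                          return memcmp(opened, msg, sizeof msg) != 0;
                                                      }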

                                                  TLS is a complicated protocol (TLS 1.3 reduces a LOT of the complexity, but it’s still very complicated).

                                                  If you are deploying to Apple, Microsoft, or OpenBSD platforms, you should just tie to the OS-provided services that provide TLS. Let them handle all of that for you (including the socket). Apple and MS platforms have high-level APIs that will do all the security crap for you. OpenBSD has libtls.

                                                  On other platforms (Linux, etc.), you should probably just use OpenSSL. Yes, it’s a fairly gross API, but it’s pretty well maintained nowadays (5 years ago, it would not qualify as well maintained). The other option is libsodium/NaCl.

                                                  1. 1

                                                    Okay, fine. Are there any crypto libraries that are easy to use for whatever is current today? My problem is: a company that is providing us information today via DNS has been invaded by a bunch of hipster developers [1] who drank the REST Kool-Aid™, so I need a way to make an HTTPS call in an event driven architecture and not blow our Super Scary SLAs with the Monopolistic Phone Company (which would cause the all-important money to flow the other way), so your advice to let OS-provided TLS services control the socket is a non-starter.

                                                    And for the record, the stuff I write is deployed to Solaris. For reasons that exceed my pay grade.

                                                    So I read the Cryptographic Right Answers you linked to and … okay. That didn’t help me in the slightest.

                                                    The program I’m working on is in C, and not written by me (so it’s in “maintenance mode”). It works, and rewriting it from scratch is probably also a non-starter.

                                                    Are you getting a sense of the uphill battle this is?

                                                    [1] Forgive my snarky demeanor. I am not happy about this.

                                                    Edit: further clarification on what I have to work with.

                                                    1. 1

                                                      I get it, it sucks sometimes. I’m guessing you are not currently doing any TLS at all? So you can’t just upgrade the libraries you are currently using for TLS, whatever they are.

                                                      In my vendor example, the vendor already implemented TLS (1.0) and then promptly stopped. They have never bothered to upgrade to newer versions of TLS. I don’t know the details of their implementation, obviously, since it’s closed-source; but unless they went crazy and wrote their own crypto code, upgrading their crypto libraries is probably all that’s required. I’m not saying it’s necessarily easy to do that, but this is something everyone should do at least once a decade, just to keep the code from dying a terrible death anyway. TLS 1.2 becomes a decade-old standard next month.

                                                      I don’t work on Solaris platforms (and haven’t in at least a decade, so you are probably better off checking with other Solaris people). Oracle might have a TLS library these days, I have no clue. I tend to avoid Oracle land whenever possible. I’m sorry you have to play in their sandbox.

                                                      I agree the Crypto right-answers page isn’t useful for you, since you just want TLS; it’s targeted at developers who need more than TLS. I used it here mostly as proof of why I recommended XSalsa20+Poly1305 for symmetric encryption. Again, you know you need TLS, so it’s not a useful document for you at this point.

                                                      Event driven IO is possible with OpenSSL, but it’s not super easy; see: https://www.openssl.org/docs/faq.html#PROG11. Then again, nothing around event driven IO is super easy. Haproxy and Nginx both manage to do it and are both open source, so you have working code you can go examine. Plus it might give you access to developers who have done event driven IO with TLS. I haven’t ever written that implementation, so I can’t help with those specifics.

                                                      OpenSSL is working on making their APIs easier to use; it’s a long, slow haul, but it’s definitely a known problem, and they are working on it.

                                                      As for letting the OS do the work for you, you are correct: there are definitely use-cases where it won’t work, and it seems you fit the bill. For most applications, letting the OS do it for you is generally the best answer, especially around crypto, which can be hard to get right, and of course it only applies to the platforms that offer such things (Apple, MS, etc.). Which is why I started there ;)

                                                      Anyways, good luck! Sorry I can’t just point to a nice easy example, for you. Maybe someone else around here can.

                                                      1. 1

                                                        I’m not even using TCP! This is all driven with UDP. TCP complicates things but is manageable. Adding a crap API between TCP and my application? Yeah, I can see why no one is lining up to secure their code.

                                                        1. 1

                                                          I think there is a communication issue here.

                                                          The vendor you are connecting with over HTTPS supports UDP packets on a REST API interface? really? Crazier things have happened I guess.

                                                          I think what you are saying is you are doing DNS over UDP for now, but are being forced into HTTPS over TCP?

                                                          DNS over UDP is very far away from a HTTPS rest API.

                                                          Anyways, for being an HTTPS client, against a HTTPS REST API over TCP, you have 2 decent options:

                                                          Event driven/async: use libevent, example code: https://github.com/libevent/libevent/blob/master/sample/https-client.c

                                                          But most people will be boring, and use something like libcurl (https://curl.haxx.se/docs/features.html) and do blocking I/O. If they have enough network load, they will set up a pool of workers.

                                                          1. 2

                                                            Right now, we’re looking up NAPTR records over DNS (RFC-3401 to RFC-3404). The summary is that one can query name information for a given phone number (so 561-555-5678 is ACME Corp.). The vendor wants to switch to a REST API and return JSON. Normally I would roll my eyes at this but the context I’m working in is more realtime—as in Alice is calling Bob and we need to look up the information as the call is being placed! We have a hard deadline with the Monopolistic Phone Company to provide this information [1].

                                                            We don’t use libevent but I’ll look at the code anyway and try to make heads or tails of it.

                                                            [1] Why are we querying a vendor for this? Well, it used to be in house, but now “we lease this back from the company we sold it to - that way it comes under the monthly current budget and not the capital account.” (at least, that’s my rationale for it).

                                                            1. 2

                                                              Tell me how it goes. Fwiw, you might want to take a quick look at mbed TLS. Sure it wants to wrap a socket fd in its own context and use read/write on it, but you can still poll that fd and then just call the relevant mbedtls function when you have data coming in. It does also support non-blocking operation.

                                                              https://tls.mbed.org/api/net__sockets_8h.html#a2ee4acdc24ef78c9acf5068a423b8c30 https://tls.mbed.org/api/net__sockets_8h.html#a03af351ec420bbeb5e91357abcfb3663

                                                              https://tls.mbed.org/api/structmbedtls__net__context.html

                                                              https://tls.mbed.org/kb/how-to/mbedtls-tutorial (non-blocking io not covered in the tutorial but it doesn’t change things much)
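                                                              The core of the pattern is small; a sketch (assuming an already-configured ssl context per the tutorials above, with the fd set non-blocking):

                                                                  #include <mbedtls/ssl.h>

                                                                  /* call when your event loop says the fd is readable */
                                                                  int on_readable(mbedtls_ssl_context *ssl,
                                                                                  unsigned char *buf, size_t cap) {
                                                                      int n = mbedtls_ssl_read(ssl, buf, cap);
                                                                      if (n == MBEDTLS_ERR_SSL_WANT_READ ||
                                                                          n == MBEDTLS_ERR_SSL_WANT_WRITE)
                                                                          return 0; /* not an error: re-arm and wait */
                                                                      return n;     /* >0 plaintext bytes, <0 real error */
                                                                  }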

                                                              I’ve no experience with UDP (yet – soon I should), but if you’re doing that, well, mbedtls should handle DTLS too: https://tls.mbed.org/kb/how-to/dtls-tutorial (There’s even a note relevant to event based i/o)

                                                              We use mbedtls at work in a heavily event based system with libev. Sorry, no war stories yet, I only got the job a few weeks ago.

                                                              1. 1

                                                                Right, let’s add MORE latency for a real-time-ish system. Always a great idea! :)

                                              1. 2

                                                My issue with most implementations of 2FA is that they rely on phones and MMS/SMS, which is beyond terrible and is often less secure than no 2FA at all - as well as placing you at the mercy of a third-party provider of which you are a mere customer. Don’t pay your bill because of hard times or, worse yet, have an adversary inside the provider or government that has influence over the provider, and all bets are off - your password is going to get reset or account ‘recovered’ and there isn’t much you can do.

                                                For these reasons, the best 2FA, IMO, is a combination of “something you have” - a crypto key - and “something you know” - the password to that key. Then you can backup your own encrypted key, without being at the mercy of third parties.

                                                Of course, if you lose the key or forget the password then all bets are off - but that’s much more acceptable to me than the alternative.

                                                (FYI - I don’t use Github and I’m not familiar with their 2FA scheme, but commenting generally that most 2FA is done poorly and sometimes it’s better not to use it at all, depending on how it’s implemented.)

                                                1. 4

                                                  (FYI - I don’t use Github and I’m not familiar with their 2FA scheme, but commenting generally that most 2FA is done poorly and sometimes it’s better not to use it at all, depending on how it’s implemented.)

                                                  GitHub has a very extensive 2FA implementation and prefers Google Authenticator or similar apps as a second factor.

                                                  https://help.github.com/articles/securing-your-account-with-two-factor-authentication-2fa/

                                                  1. 2

                                                    I don’t use Google’s search engine or any of their products nor do I have a Google account, and I don’t use social media - I have no Facebook or Twitter or MySpace or similar (that includes GitHub because I consider it social networking). Lobste.rs is about as far into ‘social networking’ as I go. Sadly, it appears that the GitHub 2FA requires using Google or a Google product - quite unfortunate.

                                                    1. 9

                                                      You can use any app implementing the appropriate TOTP mechanisms. Authenticator is just an example.

                                                      https://help.github.com/articles/configuring-two-factor-authentication-via-a-totp-mobile-app/
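                                                        The mechanism is just RFC 6238 TOTP: HMAC-SHA1 over a 30-second counter, then dynamic truncation to 6 digits. A sketch using OpenSSL’s HMAC (illustrative only, not hardened):

                                                            /* totp.c -- build: cc totp.c -lcrypto */
                                                            #include <openssl/hmac.h>
                                                            #include <stdint.h>
                                                            #include <time.h>

                                                            unsigned totp(const unsigned char *secret, size_t slen, time_t now) {
                                                                uint64_t counter = (uint64_t)now / 30; /* 30s time step */
                                                                unsigned char msg[8], mac[20];
                                                                unsigned int maclen = sizeof mac;
                                                                for (int i = 7; i >= 0; i--) { /* big-endian counter */
                                                                    msg[i] = counter & 0xff;
                                                                    counter >>= 8;
                                                                }
                                                                HMAC(EVP_sha1(), secret, (int)slen, msg, sizeof msg, mac, &maclen);
                                                                unsigned off = mac[19] & 0x0f; /* dynamic truncation */
                                                                uint32_t code = ((uint32_t)(mac[off] & 0x7f) << 24) |
                                                                                ((uint32_t)mac[off + 1] << 16) |
                                                                                ((uint32_t)mac[off + 2] << 8) |
                                                                                (uint32_t)mac[off + 3];
                                                                return code % 1000000; /* 6 digits */
                                                            }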

                                                      1. 5

                                                        Google Authenticator does not require a Google account, nor does it connect with one in any way so far as I am aware.

                                                        Github also offers U2F (Security Key) support, which provides the highest level of protection, including against phishing.

                                                        1. 3

                                                          This is very good to know - thank you for educating me. I only wish every service gave these sort of options.

                                                        2. 1

                                                          You can also use a U2F/FIDO dongle as a second factor (with Chrome or Firefox, or the Safari extension if you use macOS). Yubikey is an example, but GitHub has also released and open-sourced a software U2F app.

                                                      2. 0

                                                        My issue with most implementations of 2FA is that they rely on phones and MMS/SMS which is beyond terrible and is often less secure than no-2FA at all

                                                        A second factor is never less secure than one factor. Please stop spreading lies and FUD. The insecurity of MMS/SMS is only a concern if you are being targeted by someone with the resources required to physically locate you and bring equipment to spy on you and intercept your messages or socially engineer your cellular provider to transfer your service to their phone/SIM card.

                                                        2FA with SMS is plenty secure to stop script kiddies or anyone with compromised passwords from accessing your account.

                                                        1. 1

                                                          I happen to disagree completely. This is not lies nor FUD. This is simple reality.

                                                          When the second factor is something that is easily recreated by a third party, it does not enhance security. Since many common “two-factor” methods allow resetting of a password with only SMS/MMS and a password, the issue should be quite apparent.

                                                          If you either do not believe or simply choose to ignore this risk, you do so at your own peril - but to accuse me of lying or spreading FUD only shows your shortsightedness here, especially with all of the recent exploits which have occurred in the wild.

                                                          1. 1

                                                            Give me an example of such a vulnerable service with SMS 2FA. I will create an account and enable 2FA. I will give you my username and password and one year to compromise my account. If you succeed I will pay you $100USD.

                                                            1. 1

                                                              We both know $100 doesn’t even come close to covering the necessary expenses or risks of such an attack - $10,000 or $100,000 is a much different story - and it’s happened over and over and over.

                                                              For example, see:

                                                              Just because I’m not immediately able to exploit your account does not mean that it’s wise to throw best-practices to the wind.

                                                              This is like deprecating MD5 or moving away from 512-bit keys - while you might not be able to immediately crack such a key or find a collision, there were warnings in place for years which were ignored - until the attacks become trivial, and then it’s a scramble to replace vulnerable practices and replace exploitable systems.

                                                              I’m not sure what there is to gain in trying to downplay the risk and advising against best practices. Be part of the solution, not the problem.

                                                              Edit: Your challenge is similar to: “I use remote access to my home computer extensively - I’ll switch to using Telnet for a month and pay you $100 when you’ve compromised my account.”

                                                              Even if you can’t, that doesn’t justify promoting insecure authentication and communication methods. Instead of arguing about the adequacy of SMS 2FA long after it’s been exposed as weak, we should instead be pushing for secure solutions (as GitHub already has, as mentioned in the threads above).

                                                              I also wanted to apologize for the condescending attitude in my previous response to you.

                                                              1. 1

                                                                So you’re admitting that SMS 2FA is perfectly fine for the average person unless they’ve been specifically targeted by someone who has a lot of money and resources.

                                                                Got it.

                                                                1. 1

                                                                  DES, MD5, and unencrypted Telnet connections are perfectly fine for the average person too - until they are targeted by someone with modest resources or motivation.

                                                                  So, yes, I admit that. It still is no excuse to refuse best practices and use insecure tech because it’s “usually fine”.

                                                                  1. 1

                                                                    Please study up on Threat Models. Grandma has a different Threat Model than Edward Snowden. Sure, Grandma should be using a very secure password with a hardware token for 2FA, but that is not a user friendly or accessible technology for Grandma. Her bank account is significantly more secure with SMS 2FA than nothing.

                                                                    1. 1

                                                                      That actually depends on how much money is in Grandma’s bank account. And if SMS can be used for a password reset, I’d highly recommend grandma avoid it - it simply is not safer than using a strong unique password. With the prevalence of password managers, this is now trivial.

                                                                      While I don’t have any grandmas left, I still have a mother in her 80’s, and, bless her heart, she uses 2FA with her bank - which is integrated into the banking application itself that runs on the tablet I bought her - it does not rely on SMS. At the onset of her forgetful old age she started using the open-source “pwsafe” program to generate and manage her passwords. She also understands phishing and similar risks better than most of the kids these days simply because she’s been using technology for many years. She grew up with it and knows more of the basics, because schools seem to no longer teach the basics outside of a computer science curriculum.

                                                                      These days, being born in the 1930s or 1940s means that you would have entered college right at the first big tech boom and the introduction of widescale computing - I find that many “grandma/grandpa” types actually have a better understanding of technology and its risks than millennials.

                                                                      I do understand Threat Models, but this argument falls apart when it’s actually easier to use strong unique passwords than weaker ones - and the archetype of the technology-oblivious senior, clinging to their fountain pens and their wall-mounted rotary phones is, as of about ten years ago, a thing of the past.

                                                                      1. 1

                                                                        More posts on SMS 2FA:

                                                                        https://pages.nist.gov/800-63-3/sp800-63b.html#pstnOOB

                                                                        https://www.schneier.com/blog/archives/2016/08/nist_is_no_long.html

                                                                        NIST is no longer recommending two-factor authentication systems that use SMS, because of their many insecurities. In the latest draft of its Digital Authentication Guideline, there’s the line: [Out of band verification] using SMS is deprecated, and will no longer be allowed in future releases of this guidance.

                                                                        Since NIST came out strongly against using SMS 2FA years ago, it should be fairly straightforward to cease any recommendations for its use at this point.

                                                      1. 3

                                                        It jumps out to me that you had to include an estimate of your connection speed in the configuration. What’s the behavior if Comcast either gives you a free performance boost, or runs in a degraded state with lower performance?

                                                        1. 3

                                                          There’s not much you can do about that. You could always monitor it with a cron job and update the params/rules with a script I suppose.

                                                          1. 3

                                                            Right, but what happens? How does the system behave in those circumstances?

                                                            1. 7

                                                              If the actual bandwidth is more than what’s specified, it won’t hurt but you will be artificially limiting yourself. If it is less than what’s specified, then it probably won’t work very well because even though the buffers on your router will remain empty (LAN<->WAN both at 1Gbps), the next device in sequence will start buffering and that’s outside of your control at this point. In other words, you want the router that is doing QoS to be the bottleneck.

                                                              On OpenBSD, if you don’t specify the bandwidth param, then it will default to whatever rate the NICs are running at (10/100/1000Mbps for example).

                                                              1. 3

                                                                Thanks!

                                                        1. 2

                                                          And then we rewrote all this code in Python and were much happier.

                                                          (Not as happy as I’ll be when all the C is rewritten in Rust, but as long as we all port a little code every day, we’ll be there before too long!)

                                                          1. 1

                                                            exarkun used to give me some great advice on IRC back when I was neck deep in Twisted/pyOpenSSL/mem_bio/session resumption. pyOpenSSL changed to use cffi or something, right?

                                                            1. 2

                                                              Yes, nowadays pyOpenSSL uses the cffi bindings to OpenSSL via the cryptography package.

                                                          1. 1

                                                            However, I still think there is value in fuzzing compilers. Personally I find it very interesting that the same technique on rustc, the Rust compiler, only found 8 bugs in a couple of weeks of fuzzing, and not a single one of them was an actual segfault. I think it does say something about the nature of the code base, code quality, and the relative dangers of different programming languages, in case it was not clear already. In addition, compilers (and compiler writers) should have these fuzz testing techniques available to them, because it clearly finds bugs. Some of these bugs also point to underlying weaknesses or to general cases where something really could go wrong in a real program. In all, knowing about the bugs, even if they are relatively unimportant, will not hurt us.

                                                            This is a really interesting point - this kind of fuzzing gives us a test for whether the sorts of more advanced static verification that programming languages like Rust offer are actually paying off in terms of program reliability. If rustc, written in Rust, gets a “better score” when fuzzed than gcc, written in C (do they use C++?) does, that’s evidence that the work the Rust language designers put into the borrow checker and the type system and so forth was worthwhile. We can imagine similar fuzz testing for large programs in other programming languages.

                                                            1. 1

                                                              that’s evidence that the work the Rust language designers put into the borrow checker and the type system and so forth was worthwhile

                                                              Not really - gcc and rustc are far from equivalent programs.

                                                              1. 2

                                                                It’d be interesting to know whether LLVM was also compiled with AFL’s instrumentation. Obviously any findings from GCC’s optimizers would be “expected” to be found in LLVM, not rustc.

                                                                1. 2

                                                                  Maybe instead compare this compiler with just the parts of rustc it was based on. That version, too. From there, there’s a difference between team size, amount of time to do reviews, and possibly talent. Those could create a big difference in bugs. However, the bugs that should always be prevented by its static types should still count given the language should prevent them.

                                                                  So, I’d like to see rustc vs mrustc in a fuzzing comparison.

                                                              1. 17

                                                                WebUSB is a mistake.

                                                                And with WASM, we will have even less chance of catching malware that can leverage it.

                                                                1. 12

                                                                  Do not fear, people will implement WASM time-sharing systems, so you can not only execute random people’s code on your machine, you can also run a WASM anti-virus solution alongside!

                                                                  1. 6

                                                                    What’s the connection to WASM?

                                                                    1. 3

                                                                      The dangers of exposing APIs like web USB are compounded with performant and inscrutable blobs run in the browser. Thus, WASM exacerbates these issues.

                                                                      1. 13

                                                                        Is WASM more inscrutable than obfuscated JS?

                                                                        My experience that we suffer far more from the fact that we have no idea when a payload is delivered, since a web server can serve distinct content to every viewer, than we do from the fact that some payloads are difficult to untangle.

                                                                        1. 3

                                                                          I’ve seen arguments like this before but never fully understood them. It seems to me like asm.js is just as inscrutable as WASM, but it’s more annoying to work with for a couple reasons:

                                                                          • It’s fast, but somewhat inconsistently so as compared to WASM
                                                                          • Large download size

                                                                          Not to mention all of the minifiers and manglers that exist for conventional JS. Why the WASM hate? It seems more useful to programmers than the alternatives, and we’re already paying the security cost of running untrusted executable code from the internet in browsers today.

                                                                          1. 2

                                                                            asm.js is similarly gross, but people appear to be moving to its successor WASM.

                                                                            Reversing minified and mangled JS is, I submit, a different level of inconvenient from reversing bytecode–especially bytecode that can suddenly leverage other language ecosystems’ obfuscation tools and techniques. Just because they’re different levels of inconvenient doesn’t make one more acceptable than the other.

                                                                            As for the security cost–look, a lot of attacks and nastiness open themselves up once you can leverage that improved performance. Spectre/Meltdown were directly enabled by better performance primitives for timing and shared array buffers, and yet some people refuse to acknowledge the problems they pose by their very existence.

                                                                            I’ve griped about this all before, and at this point I’m basically resigned to the idea that fanboys and nerds more excited about performance and shiny and their chance to leave their teeny mark on the web ecosystem than about user security and rights and conservative engineering are probably going to win on this in the end.

                                                                            :(

                                                                            1. 4

                                                                              I get the woes of security on the web — it’s really, really hard to make running untrusted code secure, especially with the “dancing pigs” problem. My point with asm.js, though, was that WASM doesn’t add anything new: before WASM, people were compiling to a fast subset of JavaScript, and that was equally difficult to decompile. And that really puts the problem squarely back in “running untrusted code securely is hard” camp: if you were a browser vendor, what would you do? Any language will have fast paths (and as a vendor you’re also incentivized to make those paths very fast), and if you enforce running only a single language, people can always compile to the set of operations that are fast in that language. WASM is an improvement over the ad-hoc version, at least.

                                                                              But yeah, definitely get that security on the web is hard :(

                                                                      2. 4

                                                                        I can see your fear but it might be unfounded. WASM doesn’t have access to all the Web Platform’s APIs; that is not how it works. The WASM “ISA” is specified, and it doesn’t have access to stuff outside it. You might be curious to check the specs at https://webassembly.github.io/spec/

                                                                        Since the WASM file formats (both the bytecode one and the text one, which is based on S-expressions) are easy to parse, it is not too far-fetched to have static analysers checking the code out.

                                                                        WASM doesn’t have access to file system or sockets or even the DOM among other limitations. It is basically a faster way to number crunch and/or port existing code written in other languages. All those side-effecty things need to be proxied over through JS and the Web Platform that will ask permissions and sandbox a ton of it.

                                                                        In my humble opinion, I am much more comfortable executing JS/WASM things on the client side than trusting arbitrary SaaS backends with my data. I know what the Web Platform has access to and what I allow it to peer with.

                                                                        I find WebUSB a really nice step forward as it allows WebAuthn to provide stronger authentication schemes, which are always a good idea.

                                                                        1. 2

                                                                          Thanks for the link. I was wanting to learn more about it. The intro is really good, too. Many desirable properties. I bet it was hard to design trying to balance all of that. Usually, that also means a formal specification might uncover some interesting issues.

                                                                      1. 2

                                                                        Well done writeup, with enough detail to be interesting but not tedious. I’m looking forward to Part 2.

                                                                        My impression is that their approach is largely inspired by Google’s ClusterFuzz. Projects that meet Google’s criteria can use their “OSS-Fuzz” infrastructure, but hooray for DIY if you have the resources.

                                                                        1. 1

                                                                          I’m looking forward to the next part as well. I think this is a little bit different than clusterfuzz/ossfuzz, since it is using a more customized (grammar-based) fuzzer, as opposed to a generic fuzzer like libfuzzer.

                                                                          I’m mostly interested to see if this is really a security issue since the sanitizer reported a segmentation fault rather than a buffer overflow. Regardless, it is still a bug that they found so it is a worthy cause.

                                                                          I’m also not a javascript expert but I’m wondering why the author was confused to find promise objects within the array: doesn’t mapping an async function over an array create an array of promises?

                                                                          1. 1

                                                                            If you read the first part of this series, this was the vulnerability they used in their pwn2own exploit.

                                                                        1. 2

                                                                          I’m very excited to hear about their coverage-guided JS fuzzer – I’m not aware of any really big successes in coverage-guided fuzzing for complex languages (in contrast to other file formats, both binary and textual, where coverage guided fuzzing has been brutally effective), so I imagine there’ll be some great insights there.

                                                                          1. 4

                                                                            Appears to download app updates over plaintext, unauthenticated HTTP, and it does some ad-hoc HTTP parsing, with no usage of a library.

                                                                            1. 3

                                                                              Nice list of ideas for project contributions. ;)

                                                                            1. 1

                                                                              Are there any docs for the design principles that will be used to guide how the APIs look and function?

                                                                              1. 1

                                                                                Some of the background is that the target consumer is ssh, so I think the initial version of the API will look a lot like what’s convenient for ssh.

                                                                                1. 1

                                                                                  That is useful context, thanks.