1. 6

    The word “secure” is somewhat meaningless without enough context. Also, HTTPS doesn’t immediately translate to secure, and adding “not secure” to the URL bar doesn’t achieve much either. AFAIR Chrome still mishandles the “target=_blank” attribute…

    1. 15

      This is a common argument that I never understood the utility of. HTTPS is table stakes of online security, as there’s no security to be had if anyone on the network path can modify the origin contents.

      There’s plenty of actual research and roadmaps on indicators like Not Secure, and the eventual goal is indeed to mark the insecure option Not Secure instead of marking HTTPS as Secure. The web is a complex slow moving beast, but this is exactly a step in that direction!

      Anyway, if there’s one thing experience has shown us, it’s that trying to convey “context” about the security status of a TLS connection to users is a losing proposition.

      1. 4

        There’s plenty of actual research and roadmaps on indicators like Not Secure, and the eventual goal is indeed to mark the insecure option Not Secure instead of marking HTTPS as Secure. The web is a complex slow moving beast, but this is exactly a step in that direction!

        Not that I don’t believe you, but mind pointing me at this research?

        Anyway, if there’s one thing experience has shown us, it’s that trying to convey “context” about the security status of a TLS connection to users is a losing proposition.

        This is exactly my concern; it seems that sprinkling “security” hints on non-technical users usually leads to them making the wrong assumptions.

        1. 1

          I am focusing on a specific point in your post:

          there’s no security to be had if anyone on the network path can modify the origin contents.

          This can be addressed by adding signatures rather than encrypting the whole page. There are useful applications, such as page caching in low-bandwidth situations, which are defeated by encrypting everything.
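
          To make that concrete, here’s a minimal sketch of the signed-but-cacheable idea in Python, using the cryptography package (the page bytes and key handling are just illustrative): a cache can serve the bytes unchanged, and the client only needs the publisher’s public key to detect tampering.

          ```python
          # Minimal sketch: integrity via a detached signature instead of encryption.
          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

          # Publisher side: sign the page once at publish time (key handling is illustrative).
          publisher_key = Ed25519PrivateKey.generate()
          page = b"<html><body>public, cacheable content</body></html>"
          signature = publisher_key.sign(page)

          # Client side: fetch page + signature from any cache, verify before rendering.
          public_key = publisher_key.public_key()
          try:
              public_key.verify(signature, page)   # raises if the bytes were modified in transit
              print("integrity verified; safe to render")
          except InvalidSignature:
              print("page was tampered with; discard")
          ```

          Of course, this only gives integrity and authenticity, not confidentiality, which is the trade-off being debated here.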

      1. 3

        I don’t think the MitM vector is clearly described in the article (or the blog post it links to, for that matter). Anyone care to elaborate on why this is MitM-able?

        1. 2

          From reading the article, this is better described not as MitM but as reducing the security of a popular workflow back to a level equivalent to that of software wallets. Although I could probably find a way to explain why it is, in some sense, MitM.

          The idea of a hardware wallet is partly that the limited protocols it uses make it very hard to attack; so the ability of a worm running under your user account on your desktop to manipulate your payments is removed, unless it finds a vulnerability in narrow-scope software.

          In this case, one of the workflows includes doing something in JavaScript on the desktop side, while the verification on the token side is optional. This means that there is a workflow where manipulating your browser is enough to trick you into making a different payment than you expected.
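
          Roughly, the failure mode looks like this toy Python model (all names and the flow are hypothetical, just to illustrate the point):

          ```python
          # Toy model of the workflow described above; not any real wallet's API.

          def desktop_build_tx(recipient, amount):
              """Runs in JavaScript/desktop land, i.e. attacker-controllable if the host is owned."""
              tx = {"to": recipient, "amount": amount}
              tx["to"] = "ATTACKER_ADDRESS"   # a compromised browser can silently rewrite the payment
              return tx

          def wallet_sign(tx, confirm_on_device):
              """Runs on the hardware token."""
              if confirm_on_device:
                  # The token's own display shows the real destination; the user would refuse.
                  print(f"CONFIRM ON DEVICE: pay {tx['amount']} to {tx['to']}?")
              # If confirmation is optional and skipped, the token signs whatever it was handed.
              return f"signed({tx['to']}, {tx['amount']})"

          tx = desktop_build_tx("MERCHANT_ADDRESS", 100)
          print(wallet_sign(tx, confirm_on_device=False))   # the attacker's transaction gets signed
          ```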

          1. 2

            That I could understand, but (in my very humble opinion) that sounded more like a CSRF-like vulnerability than MitM. Either way, that’s just semantics :)

            1. 2

              It just depends on what you would call end-to-end. I think the idea of calling it MitM is that you don’t trust your desktop and want to trust only the hardware wallet. You still use your desktop for a part of communication, because of convenience and network connection and stuff like that. Turns out, a program taking over the desktop can take over a part of the process that should have been unmodifiable without infiltrating the hardware wallet.

              So MitM is the desktop being able to spoof too much when used to facilitate interaction between you, hardware token and the global blockchain.

        1. 10

          I agree in principle, but sadly users rarely have a choice. Electron developers are the ones developing the only software that does X, and people will just use it because it does X. Electron is eating native apps’ lunch because it covers more of the market and poisons the well faster than native application writers can write alternatives.

          1. 3

            I don’t know why, but I found this story really heartwarming. I’m left wondering if I’ve forgotten to love my squashed bugs. I definitely remember some eureka moments with some of them.

            1. 4

              or just Stop Using Git To Deploy, period, full stop.

              1. 5

                More, stop using a VCS to deploy. Git or otherwise is inconsequential.

                1. 5

                  or just Stop Using Git To Deploy, period, full stop.

                  Agreed. I was appalled when I realized the author’s point was that s/pull/fetch + reset/g.

                1. 4

                  It’s a SCM problem. David A. Wheeler has the definitive write-up covering the various angles of it:

                  https://www.dwheeler.com/essays/scm-security.html

                  I’m just throwing a few quick things out rather than being thorough. There are several logical components at work here. There are the developers’ contributions, which might each be malicious or modified. The system itself should keep track of them, sanitize/validate them where possible, store them in append-only storage, keep snapshots on offline media, and provide automated tooling for building/analyzing/testing. It’s advantageous to use a highly-isolated machine for building and/or signing, with the input being text over a safe channel (e.g., serial).

                  In parallel, you have the fast box(es) people are actually using for day-to-day development. The isolated machine w/ secure OS periodically pulls in the text to do the things I already described, with whatever sandboxing or security tech is available. Signing might be done by coprocessors, dedicated machines, smartcards, or HSMs. The output goes over a separate line to separate computers that do distribution to the public Internet, with no connection to development machines. Onboard software and/or a monitoring solution might periodically check the source or binary hashes each side is creating to ensure they match, with the ability to shut off the distribution side automatically or with admin approval.
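
                  As a rough sketch of that last monitoring step (Python; the manifest format and paths are hypothetical): the build side publishes artifact hashes over the trusted channel, and a monitor re-hashes whatever the distribution boxes are actually serving.

                  ```python
                  # Minimal sketch of cross-checking build-side hashes against what distribution serves.
                  import hashlib
                  from pathlib import Path

                  def sha256_of(path: Path) -> str:
                      h = hashlib.sha256()
                      with path.open("rb") as f:
                          for chunk in iter(lambda: f.read(65536), b""):
                              h.update(chunk)
                      return h.hexdigest()

                  def check_distribution(manifest: dict, served_dir: Path) -> bool:
                      """manifest maps artifact name -> expected sha256 from the build machine."""
                      ok = True
                      for name, expected in manifest.items():
                          if sha256_of(served_dir / name) != expected:
                              print(f"MISMATCH: {name}; shut off distribution pending admin review")
                              ok = False
                      return ok
                  ```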

                  Simply having updates and such isn’t good enough if the boxes can be hacked from the Internet. Targeted attacks have a lot of room to maneuver there. The development boxes ideally have no connection to the deployment servers or even the company web site. Knowing the latter can’t help hackers discover the former. Those untrusted boxes just have a wire of some sort periodically requesting info they handle carefully or sending info they’ve authenticated. The dev boxes would get their own software over the local Internet, or over random off-site Wi-Fi if the person is really paranoid. Also hardened.

                  It was also common practice to have separate VMs, or better yet separate hardware w/ KVM switches, for Internet or personal activities. As in, the software development was completely isolated from sources of malice such as email or the Web. The common theme is that evil bits can’t touch the source, build system, or signing key. So: separation, validation, POLA, and safe code everywhere possible.

                  1. 2

                    It’s a SCM problem.

                    Unfortunately, it is not just a SCM problem. I wish the problem were that easy. Supply chain attacks can happen at many points along the software value chain. Wheeler himself brought reproducible builds to attention for exactly this reason (e.g., a backdooring compiler). Software updates and distribution media are also a common means of attack.

                    All in all, I think it’s a very underdeveloped field in cyber security, one with a really wide attack surface and devastating consequences.

                    Needless to say, David A. Wheeler brought many issues to the table years ago and we’re finally realizing that we need to do something about it :P

                    1. 3

                      The secure collection, protection, and distribution of software via repos is SCM security. It’s a subset of supply-chain security, which entails other things such as hardware. Securing that is orthogonal, with different methods. Here’s an analysis I did on it if you’re interested in that kind of thing:

                      https://news.ycombinator.com/item?id=10468624

                      David A. Wheeler learned this stuff from the same people I did, the ones who invented INFOSEC and high-assurance security. They immediately told us how to counter a lot of the issues with high-assurance methods for developing the systems, SCM for the software, and trusted couriers for hardware developed similarly. Wheeler has a nice page called High Assurance FLOSS that surveys tools and methods. He turned the SCM stuff into that great summary that I send out. I also learned a few new things from it, such as the encumbrance attack. His goal was for FOSS developers to learn high-assurance methods, apply at least medium assurance with safe languages, apply this to everything in the stack from OSes to compilers to apps, and also develop and use secure SCM as OpenCM and Aegis tried to do. The combination, basically what Karger et al. advised starting with the MULTICS evaluation, would eliminate most 0-days plus deliver the software securely. Many problems solved.

                      https://www.acsac.org/2002/papers/classic-multics.pdf

                      https://www.usenix.org/system/files/login/articles/1255-perrine.pdf

                      https://en.wikipedia.org/wiki/Trusted_Computer_System_Evaluation_Criteria

                      They didn’t do that, though. Both the proprietary sector and FOSS invested massive effort into insecure endpoints, languages, middleware, configurations, and so on. The repo software that got popular was anything but secure. Being pragmatic, he pivoted to try to reduce the risk of issues such as Paul Karger’s compiler-compiler subversion and MITMing of binaries during distribution. His methods for this were Diverse Double-Compiling and reproducible builds. Nice tactics, with DDC being hard to evaluate, especially given the compiler can still be malicious or buggy (especially when optimizing security-critical code). Reproducible builds have their own issues, in that they eliminate site-specific optimizations or obfuscations since the hashes won’t match. I debated that with him on Hacker News, with us just disagreeing on the risk/reward tradeoff of those. What we did agree on was that what’s needed and/or ideal is a combination of high-assurance endpoints, transports, SCM, and compilers. His site already pushes that. We also agreed economic and social factors have kept FOSS from developing or applying them. Hence, methods like the ones he pushes. The high-assurance proprietary sector and academia have continuously developed pieces of or whole components like I’ve described, with things occasionally FOSSed like CakeML, seL4, SAFEcode, and SPARK. So, it’s doable, but they don’t do it.

                      If you’re wondering, the old guard did have a bag of tricks for an interim solution. The repo is on highly-secure OSes with mandatory access control. Two examples, the first such products actually, are in the link below. The users connect with terminals, with each thing they submit being logged. The system does builds, tests, and so on. It can send things out to untrusted networks that can’t get things in per security policy. Possibly via storage media instead of networking. Guard software also allows humans in the loop to review and allow/deny a code submission or software release (a toy sketch of that guard step follows the link below). Signing keys are isolated or on a security coprocessor. The computers with source are in an access-controlled, TEMPEST-shielded room few can enter. Copies of source, in either digital or paper form, are kept in a locked safe. The system has the ability to restore to a trusted state if compromise happens, with security-critical actions logged. The people themselves are thoroughly investigated to reduce risk, plus paid well. Any one of these helps reduce risk. Fully combining them would cover a lot of it.

                      http://www.cse.psu.edu/~trj1/cse443-s12/docs/ch6.pdf
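
                      For the guard step in particular, here is a toy sketch (Python; the approval flow is simplified and the paths are hypothetical) of requiring an explicit human decision before anything crosses to the distribution side:

                      ```python
                      # Toy guard: a human reviews each outbound release; nothing leaves without approval.
                      import hashlib
                      import shutil
                      from pathlib import Path

                      def guard_release(artifact: Path, outbox: Path) -> bool:
                          digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
                          print(f"release candidate: {artifact.name}  sha256={digest}")
                          if input("admin approval required; release this build? [y/N] ").strip().lower() != "y":
                              print("denied; nothing leaves the enclave")
                              return False
                          shutil.copy2(artifact, outbox / artifact.name)   # one-way hand-off to distribution
                          return True
                      ```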

                      In 2017, such methods combining isolation, paper, physical protection, and accountable submissions are still way more secure than how most security-critical software is developed today. If people desire, we also have highly-secure OS’s, tons of old hardware probably not subverted (esp if you pay cash for throwaways), good implementations of various cryptosystems, verified or just robust compilers for stuff from crypto DSL’s to C to ML, secure filesystems, secure schemes for external storage, cheap media for write-once backups or distribution, tons of embedded boards from countless suppliers for obfuscation, and so on. This is mostly not an open problem: it’s a problem whose key components have been solved to death with dead simple solutions for the basics like old guard did. Solving it the simple way is just really inconvenient for developers who value productivity and convenience over security. I mean, using GCC, Git, Linux, and 3rd-party services over a hostile Internet on hardware from sneaky companies is both so much easier and set up to fail in countless ways. Have failed in countless ways. If people really care, I tell them to use low-risk components instead with methods that worked in the past and might work again. It’s just not going to be as fun (FOSS) or cheap (proprietary).

                      Quick note… I do have a cheat based on the old UntrustedProducer/TrustedChecker pattern, where you develop everything on comfortable hardware, write the stuff that works down on paper, then manually retype it on trusted hardware. If it still works, it probably wasn’t subverted. Tedious but effective. I’ve never seen a targeted, remote, software attack that beat that. Sets the bar much higher. Clive Robinson and I also determined infrared was among the safest options if you wanted careful communication between electrically-isolated machines. Lots of suppliers, too. Hardware logic for anything trusted can be done in an ASIC on old nodes that are visually inspectable, w/ shuttle runs for cost reduction. All the bounds checks and interface protection built in. Lots of options to let one benefit from modern tooling while maintaining isolation of key components. Just still going to be inconvenient, cost more, or both.

                  1. 2

                    Nice post!

                    I’m glad that supply chain attacks are finally being both detected and acknowledged as an issue. Here at NYU, we have been working on a framework called in-toto to address this for over a year now. Although I agree with the “just use [buzzword]” point, I think in-toto is a good way forward to start discussing and addressing the issue.

                    There are some videos of our talks at DebConf, DockerCon, and others on the website.

                    1. 4

                      Lines and lines of rant without a clear goal. I can tell from the context that the guy doesn’t like HTTP (or is it JavaScript? both?). What part of the “web” exactly does he want to kill, and how?

                      1. 3

                        I thought the author addresses this near the beginning:

                        This is the first of two articles. In part one I’m going to review the deep, unfixable problems the web platform has[…] In part 2 I’ll propose a new app platform that is buildable by a small group in a reasonable amount of time

                      1. 2

                        This is pretty nice, but I think it has a couple of flaws. My only knee-jerk reaction was his claim that “hacking is not an academic discipline per se.” It is an academic discipline nowadays, like any other CS field.

                        1. 2

                          There is a project at NYU’s Secure Systems Lab that tries to identify the programming constructs that lead to these ambiguities/misunderstandings. I may be biased, but I think it’s a really interesting project.

                          1. 18

                            The first time I released Monocypher, I was wildly over-confident:

                            Monocypher is probably already bug-free.

                            Something tells me this might be the second round of “wildly over-confident”

                            1. 3

                              It’s not obvious from the way it’s styled, but that quote is Loup quoting themselves from that first time around, not a present claim. The text of that quote in the article links to its original context.

                              1. 2

                                Sure but the line below implies he still feels that way.

                                my crypto library, is done and ready for production

                                He speaks about how auditing is important, but says nothing about it having been done for his software. I’m sorry, but if your crypto has not been audited, it is not ready for production.

                                1. 1

                                  Oh, I’m on board with your point! …

                                  we now have a crypto library that could displace Libsodium itself

                                  And, re-parsing your comment now, I think I’m reading it the way you meant, which is not as unfair as the way I first understood it. I think the quote-in-quote threw me off. Sorry!

                              2. [Comment removed by author]

                                1. 17

                                  or:

                                  • Don’t claim to be bug free
                                  • Has been audited more thoroughly
                                  1. 4

                                    It’s 1,300 lines of portable C; auditing it is far easier than auditing libsodium, OpenSSL, etc.

                                    1. 2

                                      That’s cool, but until that happens it’s pretty irresponsible to say that it’s production-ready.

                              1. 31

                                This post has everything:

                                1. Opinionated UX decisions
                                2. Publicly trashing a main project maintainer for something that happened 10+ years ago
                                3. PS we’re hiring
                                4. Yet another git wrapper that pretends to be easier to use, based on 1.
                                1. 19

                                  I don’t think he’s trashing him. He says, “I would have done the same thing”. He’s just trying to figure out what happened. More git annotate than git blame (hey, by the way, why does git not have that alias? svn also has svn praise as another alias in this family.)

                                  Furthermore, the proliferation of git wrappers says something. Mercurial has a lot of users too (Facebook), and guess what, they don’t write wrappers for it. They do write aliases and extensions, using hg’s established customisation mechanisms, but they don’t feel like the entire UI is so terrible that it has to be completely replaced by a different UI. There’s a reason for this – we spend a lot of time thinking in hg about how to make things consistent with itself (in our defense, a lot of the modifications that Facebook does is to make hg more consistent with git). Every time a new feature comes in a lot of time is spent naming that feature, seeing what options it should take, seeing what other similar or related features already exist and what options they use. It’s not a perfect process, and there are some small historical mistakes, but at least we have a process.

                                  1. 1

                                    And those of us who use Hg thank you greatly for that process.

                                  2. 3

                                    Has anyone made a Git equivalent of https://craphound.com/spamsolutions.txt ?

                                    1. -5

                                      I stopped taking the author seriously after they mentioned git’s “user experience”. Git is a tool. It is not there to be pretty or give you a good experience - it’s there to get the job done.

                                      1. 20

                                        Why does being “a tool” give it carte blanche to have bad UX? In fields outside of software, tool ergonomics is a serious topic.

                                        1. 9

                                          In the tools I maintain, at least, user experience is pretty far up there as one of the most important things to optimize for. (Among other things, like ease of maintenance.)

                                          1. 6

                                            tools are where i most want a good user experience! that extends to the physical realm too; the experience of using a tool that is well-made, sturdy and fits well into your hand is an order of magnitude better than using a shoddy one, even if the latter gets the job done too.

                                            1. 3

                                              This effect is greatly magnified if you use the tool for a long time.

                                              Using a weirdly shaped hammer for 5 minutes is annoying. Using it for 8 hours is unbearable.

                                              Same with digital tools.

                                            2. 5

                                              This is a pretty lame response. Certainly things that get jobs done can have a decent UX. Or at least not a ridiculously confusing one.

                                              1. 2

                                              Bad UX gets in the way of using the tool effectively; it is directly related to getting the job done.

                                              With that said, git gets a lot of bad rap for having a learning curve, but having a learning curve is not bad UX. Git is a damn good DVCS.

                                              1. 4

                                                I think one of the replies is also really informative:

                                                To abuse this property you need to get the state of the hash to match a state you get when running the decryption of the blockcipher underlying the compression function. Finding such a match requires a meet-in-the-middle attack with cost $2^{n/2}$ and thus isn’t cheaper than finding a collision.

                                              1. 1

                                                The “x is considered harmful” meme is considered harmful.

                                                1. 1

                                                  Hmm, I’ve done some password storage work before. This looks interesting!

                                                  Although I don’t think there’s a fundamental issue with the design of Horcrux, I’m surprised that nowhere in the code/paper do you mention host verification over TLS from the TCB, or authentication with the share servers for that matter. This feels like it could be easily spoofed by a malicious attacker.
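
                                                  For reference, the kind of host verification I mean is roughly this (Python’s ssl module; the hostname is just a placeholder):

                                                  ```python
                                                  import socket
                                                  import ssl

                                                  ctx = ssl.create_default_context()           # verifies the chain against trusted CAs
                                                  ctx.check_hostname = True                    # and that the cert matches the hostname
                                                  ctx.minimum_version = ssl.TLSVersion.TLSv1_2

                                                  host = "share-server.example"                # placeholder share-server hostname
                                                  with socket.create_connection((host, 443)) as sock:
                                                      with ctx.wrap_socket(sock, server_hostname=host) as tls:
                                                          print(tls.version(), tls.getpeercert()["subject"])
                                                  ```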

                                                  Along the same lines, consider that one possible problem with secret sharing (specifically, Shamir’s) is that a malicious attacker could craft malicious shares to infer information about the secret. You can read about it in section 2 of this paper (I won’t dive into the specifics of how this applies to Horcrux, but it’s worth diving into).
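
                                                  To make the concern concrete, here’s a bare-bones sketch of unauthenticated Shamir sharing over a prime field (Python; the prime and the secret are just illustrative): with plain reconstruction, one tampered share silently yields a wrong secret and nothing flags it.

                                                  ```python
                                                  # Bare-bones Shamir sharing; shares are NOT authenticated.
                                                  import random

                                                  P = 2**127 - 1   # prime field modulus (illustrative)

                                                  def make_shares(secret, k, n):
                                                      coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
                                                      def f(x):
                                                          return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
                                                      return [(x, f(x)) for x in range(1, n + 1)]

                                                  def reconstruct(shares):
                                                      # Lagrange interpolation at x = 0
                                                      s = 0
                                                      for i, (xi, yi) in enumerate(shares):
                                                          num = den = 1
                                                          for j, (xj, _) in enumerate(shares):
                                                              if i != j:
                                                                  num = num * (-xj) % P
                                                                  den = den * (xi - xj) % P
                                                          s = (s + yi * num * pow(den, -1, P)) % P
                                                      return s

                                                  shares = make_shares(secret=42, k=3, n=5)
                                                  print(reconstruct(shares[:3]))   # 42
                                                  x, y = shares[2]
                                                  print(reconstruct([shares[0], shares[1], (x, y + 1)]))   # wrong value, no error
                                                  ```

                                                  Verifiable secret sharing, or simply authenticating each share, is the usual way to close that gap.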

                                                  Good luck! (of course, I’m assuming the author posted this).

                                                  1. 2

                                                    I get a cert error.

                                                    1. 1

                                                      Try without HTTPS; I’m not sure what’s up with the certificate on that site.