
  2. 22

    The hubris of this is amazing. “the solution becomes trivial” it says, before describing a system which involves showing a blocking-notification during regular browsing.

    I’m morbidly curious how subresources would be handled. A blocking prompt for every new domain included in the page? That’d be fun.

    1. 1

      I guess you could use wildcard certs for subdomains.

      1. 3

        Right, but various sites load scripts and images from various other places, not just their own subdomains. A typical Verge page for example loads things from over 10 different domains.

        Would sites stop doing that? If so, maybe I could get behind this idea after all…

        1. 2

          The first thing I thought of was “omg yes, no more ads!”. I could definitely get behind that. I wouldn’t mind removal of CDN style loading of external resources either.

    2. 30

      Click. Scroll.

      If this is the first time you are visiting this website, a warning message will appear.

      Close tab.

      Leave a comment linking to Alice in Warningland.

      1. 1

        I’m guessing that, because of new free SSL services, SSL is more widespread than ever.

        There are so many websites out there that want to use the extra trust that SSL provides, but they don’t have the same seriousness that a bank or Facebook has. Even on top of the fact that warnings are terrible, most websites do not deserve or want the seriousness that comes along with warnings.

        1. -3

          Yes the alternative is much better. Let these people whom we shall call “certificate authorities” tell us what can be trusted.

          After all, if we can’t trust the NSA, who can we trust?

          1. 4

            Yes the alternative is much better.

            Exactly.

        2. 15

          This is confusing, because you can do this today. Just remove all the CAs from your trust store (your OS or browser will even have a UI to do it!) and then add exceptions as you visit sites.
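
          The empty-trust-store-plus-exceptions workflow described above is essentially trust-on-first-use. A minimal sketch of that bookkeeping, assuming a simple fingerprint store (the names `tofu_check` and `fingerprint` are illustrative, not any browser’s real API):

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    # SHA-256 over the raw certificate bytes, hex-encoded.
    return hashlib.sha256(der_cert).hexdigest()

def tofu_check(store: dict, host: str, der_cert: bytes) -> str:
    """Trust-on-first-use: pin the cert the first time, compare after."""
    fp = fingerprint(der_cert)
    if host not in store:
        store[host] = fp      # first visit: record the exception
        return "first-visit"  # a browser would show a one-time prompt here
    if store[host] == fp:
        return "known"        # same key as before: no warning
    return "MISMATCH"         # key changed: warn loudly (new key? MitM?)

store = {}
cert_a = b"fake-der-bytes-for-site-A"  # placeholder, not a real certificate
assert tofu_check(store, "example.com", cert_a) == "first-visit"
assert tofu_check(store, "example.com", cert_a) == "known"
assert tofu_check(store, "example.com", b"different-key") == "MISMATCH"
```

Note that the MISMATCH case cannot distinguish a legitimate key rotation from an attack, which is exactly the revocation problem raised further down the thread.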

          1. 2

            Nah, HSTS will spoil your fun :(

          2. 12

            Leaving aside the hand-waving reliance on a “sufficiently smart UX designer” to solve users’ uncanny ability to blindly click through any number of warnings, and the blindness to complications like subdomains, this proposal completely punts on the revocation problem.

            If a site loses control of its key, how does it go about notifying clients of the revocation? Do I just get a new warning? How could I possibly distinguish that from a MitM? The sheer flood of constant warnings would train users to ignore them.

            This is no better than blindly trusting self-signed certs.

            1. 9

              “sufficiently smart UX designer”

              A sufficiently smart UX designer is indistinguishable from magic. – Arthur C. Clarke, probably

            2. 4

              Once we realise the true nature of authenticity on the web, the solution becomes trivial.

              If it’s trivial, it should be simple and watertight.

              But what else? Some UX designers should really jump in here to create a seamless yet secure user experience.

              Handwave 1.

              Of course care has to be taken here to ensure that the process of identifying possible malicious sites is transparent and not abused for e.g. censorship to block undesirable content.

              Handwave 2.

              How do you know this new website you are visiting is genuine? How do you know you can trust it? Well… the idea to use a kind of global network view outlined above is supposed to detect the obviously malicious sites. But actually, building real trust is a process that takes time.

              Handwave 3.

              Some minor changes would be required to not exchange certificates but instead rely on public key stored by the browser (or keys sent by the webserver the first time the site is visited).

              Four handwaves. That doesn’t mean it’s a bad idea, but it’s definitely not simple, trivial, and watertight.

              1. 4

                Problem: The system that gives users some degree of assurance that they can trust a site can sometimes be subverted.

                Proposed solution: Give users absolutely no information about whether a site they haven’t been to before is trustworthy or is even the site it claims to be.

                Seems like a fitting choice for a paranoid darknet, but I don’t see Google and Mozilla jumping on it any time soon.

                1. 2

                  Any way we can tweak the ranking algorithm for posts like this? I think it’s only scoring highly because commenters keep dunking on it.

                  1. 2

                    A teeny thing of note that might give you pause before dismissing the author’s ideas out of hand: he has been thinking and publishing about these kinds of things for quite a while already. See for instance his extensive list of publications.

                    Yes, there are obvious flaws to the argument as given. That does not mean the idea as presented is without merit. And I would take the blogpost as stating an idea, and a lead to investigate. He even (helpfully) handwaves over the problems to be solved, stating that they exist.

                    1. 2

                      There have been more useful proposals to fix TLS CAs, like Convergence, that had code written, would have been more transparent to end-users, and still went nowhere.

                    2. 2

                      For everyone who seems to think this method is dumb (I am not advocating one way or another.), I have some questions for you:

                      • Do you validate SSH fingerprints prior to connecting to a new machine?
                        • What methods do you use to validate said fingerprints?
                      • Do you agree that paypa1 vs paypal - both having valid certificates for their respective domains - is an issue that will likely trip up someone who is not paying very close attention?

                      Just yesterday I overheard someone questioning whether or not a phishing email from paypal was valid. The first question asked of said person was: “do you have a paypal account?”. They did not have a paypal account.

                      To me this means the issue is much larger than how we manage our PKI. No matter what solution the industry decides on (if any).. there will still be people who simply don’t know the basics.. and will be vulnerable.

                      1. 2

                        Do you validate SSH fingerprints prior to connecting to a new machine?

                        Yes.

                        What methods do you use to validate said fingerprints?

                        I ask someone what the ssh fingerprint is supposed to be, for example over the phone. Since it is a hash, and breaking TCP is hard, I am generally satisfied after a few octets.

                        For machines managed by my team, I have the further advantage of having /etc/ssh/ssh_known_hosts managed.

                        Do you agree that paypa1 vs paypal - both having valid certificates for their respective domains - is an issue that will likely trip up someone who is not paying very close attention?

                        No.

                        • The “1” and the “l” are nowhere near each other on most keyboards
                        • paypa1.com is owned by paypal.com

                        The more general problem can be solved in better ways.
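
                        For what it’s worth, the lookalike problem is easy to demonstrate in code: the two names differ in exactly one character, which many fonts render near-identically.

```python
# Purely illustrative: find where the lookalike domain diverges from the
# real one. In many fonts "l" (letter) and "1" (digit) look almost the same.
real = "paypal.com"
fake = "paypa1.com"
diff = [(i, a, b) for i, (a, b) in enumerate(zip(real, fake)) if a != b]
assert diff == [(5, "l", "1")]  # a single character, at position 5
```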

                        1. 2

                          I should have been more clear on my l vs 1 statement. I know typing them is obviously different. I am speaking more to receiving an email with them.. People are likely not going to notice the difference.

                          Where did you determine that paypa1.com is owned by paypal (I know it can be viewed via whois or similar.. the question is rhetorical)? Are regular users going to be able to do so? How is looking up ownership any different from accepting a fingerprint at this point?

                          The more general problem can be solved in better ways.

                          I completely agree.. I just don’t think we have a proper definition of the problem.

                          1. 2

                            I am speaking more to receiving an email with them.. People are likely not going to notice the difference.

                            People should not click links in an unauthenticated email full stop.

                            How is looking up ownership any different from accepting a fingerprint at this point?

                            The difference is that we have a small number of authorities (ostensibly) maintaining a concept of ownership. Removing authorities means everyone has to verify (to their own definition of) ownership directly; distributing (and even completely delegating) trust is easier than constant vigilance, and may be more secure (measured by sum unhedged fiscal risk) in practice.

                            I completely agree.. I just don’t think we have a proper definition of the problem.

                            I was specifically referring to the problem of “trip[ping] up someone who is not paying very close attention”.

                            If people do not click links in an unauthenticated email, they will not be tripped up by the similarities between paypal and paypa1 and paypa⏐ and other such things.

                            1. 1

                              People should not click links in an unauthenticated email full stop.

                              If people do not click links in an unauthenticated email, they will not be tripped up by the similarities between paypal and paypa1 and paypa⏐ and other such things.

                              I think this is an unreasonable expectation and akin to saying:

                              “If people would just stop having the desire to compromise systems or take advantage of people, this wouldn’t be an issue!!”

                              1. 1

                                Some email clients now track high-profile senders with special icons to help verify the authenticity, and spam filters have (for a long time) used the presence or absence of certain signatures to bin risky clicks.

                                Authentication isn’t binary; you never prove that a message is authentic in the mathematical sense because of things like rubber hoses.

                                However we can taint that link with the degree of confidence we have in its authenticity, and whilst this is purely a mental exercise, I have seen too many people click on something, with no expectation other than to figure out what it is. This is what I mean when I say people should not click links in an unauthenticated email full stop.
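
                                The “taint the link” idea can be sketched with the stdlib email parser: grade a message by the Authentication-Results header a receiving server adds after its SPF/DKIM checks. The header value below is invented for the demo, and real clients do considerably more than substring matching:

```python
from email import message_from_string

# A made-up message, as a receiving server might hand it to a client after
# adding its Authentication-Results header.
raw = (
    "Authentication-Results: mx.example; spf=pass; dkim=pass\r\n"
    "From: service@example.com\r\n"
    "Subject: your receipt\r\n"
    "\r\n"
    "See https://example.com/receipt\r\n"
)
msg = message_from_string(raw)
auth = msg.get("Authentication-Results", "")

# Not proof of authenticity, just a confidence grade for the link inside.
confidence = "higher" if "spf=pass" in auth and "dkim=pass" in auth else "lower"
assert confidence == "higher"
```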