1. 22

  2. 8

    To be fair, they should also mark as “Not Secure” any page running JavaScript.

    Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
    (Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensitive content! It just means that adopting HTTPS should not be a matter of fashion: there are serious trade-offs to consider.)

    1. 11

      By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.

      1. [Comment removed by author]

        1. 5

          Technically, you authorize them (you sign actual paperwork) to get/generate a certificate on your behalf (at least this is my experience with Akamai). You don’t upload your own SSL private key to them.

          1. 3

            Why on earth would I give anyone else my private certificate?

            1. 4

              Because it’s part of The Process. (Technical Dark Patterns, Opt-In without a clear way to Opt-Out, etc.)

              Because you’ll be laughed at if you don’t. (Social expectations, “received wisdom”, etc.)

              Because Do It Now. Do It Now. Do It Now. (Nagging emails. Nagging pings on social media. Nagging.)

              Lastly, of course, are Terms Of Service, different from the above by at least being above-board.

          2. 2

            No.

            It protects against cheap man-in-the-middle attacks (such as the one an ISP could mount), but it can do nothing against CDNs that can identify you, since CDNs serve you JavaScript over HTTPS.

            1. 11

              With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.

              1. 1

                Yes, SRI mitigates some of the JavaScript attacks I describe in the article, in particular the nasty ones from CDNs exploiting your trust in a harmless-looking website.
                Unfortunately, several others remain possible (just think of JSONP, or simpler still, a website that colludes in the attack). It also needs widespread adoption to become a real security feature: it should probably be mandatory, but at the very least browsers should mark as “Not Secure” any page downloading programs from CDNs without it.

                Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if the content hash is declared in an HTTPS page!
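For reference, an SRI integrity value is just a base64-encoded cryptographic digest of the exact bytes the page author expects the CDN to serve. A minimal sketch in Python (the script URL and file contents below are made up for illustration):

```python
import base64
import hashlib

def sri_hash(resource: bytes) -> str:
    """Compute a Subresource Integrity value using sha384 (a commonly recommended choice)."""
    digest = hashlib.sha384(resource).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The page author pins the exact script the CDN is expected to serve:
script = b"console.log('hello');"
tag = ('<script src="https://cdn.example.com/app.js" '
       f'integrity="{sri_hash(script)}" crossorigin="anonymous"></script>')
print(tag)
```

If the fetched bytes differ from the pinned digest by even one bit, the browser refuses to execute the script, which is exactly what stops a CDN from silently swapping content out from beneath the page.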

              2. 3

                With SRI you can prevent the CDNs you use to load external JS from manipulating the webpage.

                I also don’t buy the claim that it reduces content accessibility; the link you provided above describes a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).

                1. 4

                  CDNs are man-in-the-middle attacks.

              3. 1

                As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies even made out-of-the-box HTTPS URL filtering their selling point. If people are ready or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, that’s their problem, not webmasters’. We should be ready to teach those in need how to set one up, of course, but that’s about it.

                1. 0

                  MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are ready or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, that’s their problem, not webmasters’.

                  Well… how can I say that… I don’t think so.

                  Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                  Beyond the obvious risk that the proxy is compromised (you should never assume that it won’t be), which is pretty high in some places (not only in Africa… don’t be naive: a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.

                  If browsers don’t mark them as “Not Secure” (how could they?), users will overlook the MitM risks, turning a security feature against users’ real security and safety.

                  Is this something webmasters should care about? I think so.

                  1. 4

                    Selling an HTTPS MitM proxy as a security solution is plain incompetence.

                    Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time, and it is basically standard operating procedure on every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA key and certificate, and then put that on an HTTPS MitM transparent proxy that inspects all traffic going in and out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this; BlueCoat and Barracuda are two popular examples.

                    1. 5

                      There is an entire industry around products that do this

                      There is an entire industry around ransomware. But that does not mean it’s a security solution.

                      1. 1

                        It is; it’s just that the word “security” is better understood as “who” is being secured (or not) from “whom”.

                        What you keep saying is that a MitM proxy does not protect the security of end users (that is, employees). What it does, however, in certain contexts like the one described above, is help protect the organisation in which those end users operate. Arguably it does, because it is certainly harder to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.

                        1. 1

                          I wonder if you did read the articles I linked…

                          The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility but doesn’t help against censorship.

                          In this context, we need to grant people both accessibility and security.

                          An obvious solution is to give them cacheable HTTP access to contents. We can fool the clients into trusting a MitM caching proxy, but since all we want is caching, this is not the best solution: it adds no security, just a false sense of security. Thus, in that context, you can improve users’ security by removing HTTPS.
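The approach suggested here, serving bulk content over cacheable plain HTTP while the integrity information travels over a secure channel, can be sketched as follows (`verify_cached_resource` is a hypothetical helper, not a real API):

```python
import hashlib

def verify_cached_resource(body: bytes, declared_sha256_hex: str) -> bytes:
    """Accept bytes fetched over plain (cacheable) HTTP only if they match
    the digest declared on the HTTPS-served page; otherwise reject them."""
    if hashlib.sha256(body).hexdigest() != declared_sha256_hex:
        raise ValueError("cached resource does not match the declared digest")
    return body

# The small HTTPS page declares the hash; the large body may come from
# any untrusted cache or proxy along the way.
declared = hashlib.sha256(b"<h1>article</h1>").hexdigest()
page = verify_cached_resource(b"<h1>article</h1>", declared)
```

The client keeps the caching benefits of HTTP for the heavy content, while tampering by the cache is still detectable, since only the short digest needs to be fetched over HTTPS.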

                          1. 1

                            I have read it, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).

                            I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface. I disagree that removing HTTPS increases security. It adds many more places and actors who can now negatively impact the user, in exchange for the user merely knowing about it without being able to do much about it.

                            And that is even without going into which content is safe to be cached in a given environment.

                            1. 1

                              And that is even without going into which content is safe to be cached in a given environment.

                              Yes, this is the best objection I’ve read so far.

                              As always, it’s a matter of trade-offs. In a previous related thread I described how I would try to fix the issue in a way that lets people easily opt out and opt in.

                              But while I think it would be weird to remove HTTPS for an e-commerce cart or for a political forum, I think that most of Wikipedia should be served over both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…), but they should not be misled into thinking that pages going through a MitM proxy are secure.

                    2. 2

                      An HTTPS proxy isn’t incompetence; it’s industry standard.

                      They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and traffic behind it stays in the local network, encrypted under a network-level CA (you can restrict CA capabilities via TLS certificate extensions; there are a fair number of useful ones that prevent compromise).

                      Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to the device, at which point there is no reason to consider what the user is doing insecure.

                      1. 2

                        Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to the device, at which point there is no reason to consider what the user is doing insecure.

                        Browsers bypass the network configuration to protect the users’ privacy.
                        (I agree this is stupid, but they are trying to push this anyway)

                        The point is: the user’s security is at risk whenever she sees something that is not secure presented as HTTPS (which stands for “HTTP Secure”). It’s a rather simple and verifiable fact.

                        It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.

                        And doing that in a school or a public library is dangerous and plain stupid.

                        1. 0

                          Nobody is posing a threat to employees’ security here; a corporation can in this case be regarded as a single entity, so terminating SSL at the borders of the entity, much as a browser terminates SSL by showing the website on a screen, is fairly valid.

                          Schools and public libraries usually have their internet filtered, yes, and that is usually made clear to the user beforehand (at least when I wanted access to either, I was in both cases informed that the network is supervised and filtered), which IMO negates the potential security compromise.

                          Browsers bypass the network configuration to protect the users’ privacy.

                          Browsers don’t bypass root CA configuration, core system configuration, network routing information, or network proxy configuration to protect a user’s privacy.

                          1. 1

                            Schools and public libraries usually have their internet filtered, yes, and that is usually made clear to the user beforehand [..] which IMO negates the potential security compromise.

                            Yes this is true.

                            If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.

                            Browsers don’t bypass root CA configuration, core system configuration, network routing information, or network proxy configuration to protect a user’s privacy.

                            Did you know about Firefox’s DoH/CloudFlare affair?

                            1. 2

                              Yes, I’m aware of the “affair”. To my knowledge, the initial DoH experiment was localized and run on users who had enabled studies (opt-in). Both during the experiment and now, Mozilla has a contract with Cloudflare to protect user privacy during queries when DoH is enabled (which it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet.

                              It’s plain old wrong in the bad kind of way; it conflates security maximalism with Mozilla’s mission to bring users the maximum amount of privacy and security.
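For background on what DoH actually does on the wire: RFC 8484 wraps an ordinary binary DNS query in an HTTPS request. A sketch in Python of building the GET-style URL (no request is actually sent; the Cloudflare endpoint is just the resolver being discussed):

```python
import base64
import struct

def doh_get_url(name: str, resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Build an RFC 8484 GET URL for an A-record query (no network I/O)."""
    # DNS header: ID=0 (RFC 8484 recommends a zero ID for cacheability),
    # flags with RD=1, QDCOUNT=1, all other counts zero.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)
    # The payload is base64url-encoded without padding, as the spec requires.
    dns_param = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={dns_param}"

print(doh_get_url("example.com"))
```

The privacy debate in this thread is not about the encoding but about who operates the resolver: every name the browser looks up reaches the chosen endpoint, contract or not.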

                              1. 1

                                TBH, I don’t know what you mean by “security maximalism”.

                                I think ungleich raises serious concerns that should be taken into account before shipping DoH to the masses.

                                Mozilla has a contract with Cloudflare to protect user privacy

                                It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                AFAIK, even Facebook had a contract with its users.

                                Yeah.. I know… they will “do no evil”…

                                1. 1

                                  Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                  It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.

                                  Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.

                                  AFAIK, even Facebook had a contract with its users

                                  Facebook used dark patterns to get users to willingly agree to terms they would otherwise never accept; I don’t think this is comparable. Facebook likely never violated its contract terms with users that way.

                                  1. 1

                                    Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.

                                    You should define “common user”.
                                    If you mean the politically inept who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
                                    The problem is for those people who are actually useful to society.

                                    Cloudflare hasn’t done much that makes me believe they will violate my privacy.

                                    The problem with Cloudflare is not what they did; it’s what they could do.
                                    There’s no reason to give such power to a single company, located near all the other companies that are already centralizing the Internet.

                                    But my concerns are with Mozilla.
                                    They are trusted by millions of people worldwide, me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                    1. 1

                                      So in your opinion, the average user does not deserve the protection of being able to browse the net as safely as we can make it for them?

                                      Just because you think they aren’t useful to society (and they are: these people have all the important jobs; someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.

                                      There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.

                                      Then don’t use it? DoH isn’t going to be enabled by default in the near future and any UI plans for now make it opt-in and configurable. The “Cloudflare is default” is strictly for tests and users that opt into this.

                                      they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                      You mean safe because everyone involved knows what’s happening?

                                      1. 1

                                        I don’t believe the concerns are really concerns for the common user.

                                        You should define “common user”.
                                        If you mean the politically inept who are happy to be easily manipulated…

                                        So in your opinion, the average user does not deserve the protection of being able to browse the net as safely as we can make it for them?

                                        I’m not sure whether you are serious or just pretending not to understand to cover for your lack of arguments.
                                        Let’s assume the first… for now.

                                        I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s obviously because anyone politically inept is unlikely to be affected by surveillance.
                                        That’s it.

                                        they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.

                                        You mean safe because everyone involved knows what’s happening?

                                        Really?
                                        Are you sure everyone understands what a MitM attack is? Are you sure every employee understands that their system administrators can read the mail they access via GMail? I don’t think you have much experience with users, and I hope you don’t design user interfaces.

                                        A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and for the company.

                                        As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
                                        I hope they do! Really! But it’s far too naive to think that a contract binds a company more than a subpoena does. And they ship WebAssembly. And you have to edit about:config to disable JavaScript.
                                        All this is very suspect for a company that claims to care about users’ privacy!

                                        1. 0

                                          I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.

                                          I’m saying the concerns raised by ungleich are too extreme and should be dismissed on the grounds of being impractical in the real world.

                                          Are you sure everyone understands what a MitM attack is?

                                          An attack requires an adversary, the evil one. An HTTPS caching proxy isn’t evil or an enemy; you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.

                                          Are you sure every employee understands that their system administrators can read the mail they access via GMail?

                                          Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.

                                          And it extends the attack surface, both for the users and the company.

                                          And it also enables caching for users with less-than-stellar bandwidth (think third-world countries where satellite internet is common: 500 ms ping, 80% packet loss, 1 Mbps… you want caching for the entire network, even with HTTPS).

                                          And they ship WebAssembly.

                                          And? I have no concerns about WebAssembly. It’s no worse than obfuscated JavaScript, and it doesn’t enable anything that wasn’t already possible via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.

                                          And you have to edit about:config to disable JavaScript…

                                          Or install a half-way competent script blocker like uMatrix.

                                          All this is very suspect for a company that claims to care about users’ privacy!

                                          I think it’s understandable for a company that both cares about users’ privacy and doesn’t want a market share of “only security maximalists”, also known as 0%.

                                          1. 1

                                            An attack requires an adversary, the evil one.

                                            According to this argument, you don’t need HTTPS unless you have an enemy.
                                            It shows very well your understanding of security.

                                            The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                            I have no concerns about WebAssembly.

                                            Not a surprise.

                                            Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                            Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                            As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.

                                            1. 1

                                              According to this argument, you don’t need HTTPS unless you have an enemy.

                                              If there is no adversary, no Mallory in the connection, there is no reason to encrypt it either, correct.

                                              It shows very well your understanding of security.

                                              My understanding of security is based on threat models. A threat model includes who you trust, who you want to talk to, and who you don’t trust. It includes how much money you want to spend, how much your attacker can spend, and the methods available to both of you.

                                              There is no binary security, a threat model is the entry point and your protection mechanisms should match your threat model as best as possible or exceed it, but there is no reason to exert effort beyond your threat model.

                                              The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.

                                              Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice, and Eve, where Bob wants to send Alice a message, Alice works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.

                                              Mallory sits between Eve and Bob, not between Bob and Alice.

                                              Evidently you have never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).

                                              I did, in which case I either filed a GitHub issue, if the project was open source, or notified the company that shipped the JavaScript or optimized binary. Usually the bug then got fixed.

                                              It’s not my duty or problem to debug web applications that I don’t develop.

                                              Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.

                                              Then don’t do it? Nobody is forcing you.

                                              As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.

                                              I don’t think you appreciate that a practical problem such as bad connections can outweigh a lot of potential security issues: you don’t have the time or user patience to do it properly, and in most cases it’ll be good enough for the average user.

                      2. 2

                        My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can easily be cached, I can’t see how setting up the latter makes their security issues any worse.

                        1. 3

                          With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.

                          The lack of awareness makes MitM caching worse.

                  2. 2

                    Also see Dave Winer’s [1] counterpoint: http://this.how/googleAndHttp

                    [1] http://scripting.com/?tab=about

                    1. 2

                      There’s a little bit of tinfoil-hattery going on in that article, but I don’t think he’s totally wrong. The Internet has matured to the point where most of the walled gardens are about as big as they’re going to get, so the only growth potential left is to destroy the community gardens. It’s not at all unlike Ford and GM’s deliberate nationwide dismantling of public transportation throughout the 20th century.

                    2. 0

                      If both http:// and https:// are available, I think Chrome should redirect to the https:// page instead of complaining about the http:// counterpart.

                      1. 10

                        There’s no guarantee that the site on port 80 is the same site as the one on port 443. That’s why HTTPS Everywhere is a whitelist of sites where this is true.

                        1. 1

                          Are there even many websites left that don’t redirect to the secure version themselves? I know there used to be a bunch, but pretty much everything I see does now.