There are arguments for and against HTTPS for static sites, but what I’ve seen is Troy (and to some extent Scott) making valid points, then talking past other people online (who in turn talk past them). Neither side budges and it descends into the usual online bickering.
There are good reasons to implement HTTPS on a static website, as illustrated by Troy. HTTPS isn’t the only way to secure a transport layer, nor does HTTPS magically stop 100% of man-in-the-middle attacks. There are plenty of attacks on TLS (which is why we use TLS 1.3 and not 1.0), plenty of problems with the openssl monoculture and a web of trust broken by companies and governments to contend with.
It makes sense to advocate for people to use HTTPS where practical to protect their site and users from casual interception. It does not make sense to resort to dark UX patterns, nor to mandate HTTPS. If browsers mandate HTTPS, or warn users that any site not using HTTPS with the full web of trust (WoT) is insecure, then they are working against decentralization. The WoT is a centralized trust layer on top of a decentralized protocol.
In the 90s, there were many who advocated for mandatory IPSEC. Indeed, IPSEC is integrated into the IPv6 spec. Better solutions came along, and now IPSEC is losing ground to TLS VPNs.
Protecting user data is a good idea. Pushing users into a fully centralized web, open to abuse by governments and corporations, is a bad idea. There are alternatives to TLS out there for people who want them, and we can even build better decentralized alternatives, but if we mandate a move to a centralized web, the incentives to ever move off it will disappear.
IPSEC was removed from the IPv6 spec a long time ago. Around 2011 it changed from a MUST to a SHOULD, and it isn’t mentioned at all in the latest RFC, which consolidated the various RFCs comprising IPv6: https://tools.ietf.org/html/rfc8200
I have been using IPv6 in some form since at least 2004 and have never once seen it coupled with IPSEC.
RFC 4294 - IPv6 Node Requirements: IPsec MUST
RFC 6434 - IPv6 Node Requirements: IPsec SHOULD
Thanks for pointing that out. I didn’t know that it’d been removed.
The main (only?) argument for putting static sites behind HTTPS is to prevent visitors from getting MITM’d. I’m a little uncomfortable about the unspoken implication that content publishers should be responsible for the security of their visitors but that’s a separate point.
What really annoys me about the push for HTTPS on static sites and other benign content comes down to two things:
HTTPS is touted as the best thing since sliced bread, but we already know the existing TLS certificate trust chain in mainstream browsers is pretty weak. Certificate authorities have suffered serious security lapses and/or incompetence (Symantec, WoSign), or delegate too freely to similarly lax entities. Pretty much all developed countries in the world either have government-run CAs or can swoop in and “borrow” the private keys of commercial CAs to sign fraudulent certs or (more likely) decrypt traffic as it goes by. There are things happening to make incremental improvements to these problems, but right now the mainstream opinion is just to keep putting band-aids on the system. Don’t get me wrong, HTTPS is better than nothing, but the whole trust chain is very half-assed and nobody seems interested in fixing it.
HTTPS is arguably not the right tool for public, non-secret content. As a static content publisher (yes, I use that term loosely), I don’t want to encrypt my content, I only want to sign it to show that it hasn’t been tampered with. But with HTTPS it’s all-or-nothing. If we had secure DNS (however implemented), this would be fairly straightforward: public key in a DNS record and a signature for the page in the HTTP headers. The browser can show the page as signed; clients who don’t have the technology to verify the signature, or who don’t care, are free to be MITMed at their leisure.
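That sign-only idea can be sketched end to end. Everything below is a made-up illustration, not a real protocol: the textbook-RSA key is absurdly small and insecure, and a real deployment would use a proper signature scheme (Ed25519, say), serve the public key from the hypothetical DNS record, and carry the signature in a hypothetical response header.

```python
import hashlib

# Toy textbook-RSA parameters (far too small for real use; purely illustrative).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent (would live in DNS)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (publisher keeps this)

def page_digest(body: bytes) -> int:
    # Hash the page, then reduce into the toy modulus range. A real scheme
    # would use a proper padding/encoding step instead of this reduction.
    return int.from_bytes(hashlib.sha256(body).digest(), "big") % n

def sign(body: bytes) -> int:
    # Publisher signs the page digest with the private exponent.
    return pow(page_digest(body), d, n)

def verify(body: bytes, signature: int, pub=(e, n)) -> bool:
    # Client would fetch (e, n) from the DNS record and the signature
    # from an HTTP response header; here they are passed in directly.
    exp, mod = pub
    return pow(signature, exp, mod) == page_digest(body)

page = b"<html>my static blog post</html>"
sig = sign(page)
print(verify(page, sig))                   # True: untampered page verifies
print(verify(page + b"<script>", sig))     # False with overwhelming probability
```

Nothing here is encrypted: an on-path observer reads the page in the clear, but any modification breaks the signature, which is exactly the split the all-or-nothing HTTPS model doesn’t offer.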
How can visitors secure themselves against MITM attackers without the cooperation of content publishers? Maybe I should be concerned that requiring free content publishers to do more work makes a less useful web.
Public, non-secret content is tricky. A blog about food isn’t a secret, but access patterns might be. If I only visit the pages about sugary foods, my ISP might sell this data to an advertiser or a health insurance company. This is prevented by TLS encryption. What is the downside of encrypting as well as signing?
Except it isn’t prevented by TLS. The sugary foods site, using Google Analytics (or even Google-hosted jQuery or webfonts), will still sell the fact that you were there. If it doesn’t use Google, then any externally hosted resource could be used to track you. The blog itself would know which pages you visited and could resell the data. Your ISP can integrate technologies that use techniques TLS does not defend against. Here’s a video of Vincent Berg’s work on deanonymizing Google Maps over TLS from 2012.
At the very least your ISP will have the metadata about the fact you visited a site with sugary pages, how much data was transferred and when.
The problem here is that the HTTPS infrastructure does not provide sufficiently reliable confidentiality, and offers only partial (occasionally broken) integrity guarantees, compared with other, more difficult to manage methods.
We’re getting closer and closer to a world where all certificates are in Certificate Transparency logs, which addresses the security concerns around your first point (whether that’s desirable from a data hoarding / secrecy perspective is a totally different aspect).
Regarding your second point, I honestly think that it shouldn’t be you deciding whether you want to encrypt your content. I understand you don’t think it’s necessary, but the goal for all of this is to change the web to provide encryption by default in the long run. Because it makes sense for users.