Regarding the gripe about Gemini - retrocomputing was never its goal. It was about reforming the browsing experience of the modern user, so that code execution or unexpected downloads cannot happen behind your back. Guaranteed TLS was deemed table stakes - for each person who complains about it, there is another who would never touch Gemini if all or much of their browsing were trivially observable by third parties. Gemini was never intended to supplant Gopher. The protocol author mentioned continues to maintain both Gopher and Gemini sites, and Gopher would be the right choice where encryption is inappropriate, such as for retrocomputing or amateur radio.
From the Gemini FAQ:
So it does seem that there’s some tension there…
I don’t quite know what to think of the TLS requirement in Gemini either, but low-power computing and/or low-speed networks don’t necessarily mean old computers and networks. Modern low-power machines on low-speed connections can handle TLS just fine. See e.g. this thread for an older example of someone running a Gemini client on an ESP32: https://lists.orbitalfox.eu/archives/gemini/2020/002466.html
(Full disclosure: not under this alias – which, for better or for worse, I ended up using in some professional settings – but I am running a Gemini-related project. I have zero investment in it; it’s just for fun, and I was one coin toss away from using Gopher instead. I’m just sort of familiar with the protocol.)
Yes, I think it’s a relative statement as well. Low-power systems today are orders of magnitude more performant. The little 68030 I did some testing on takes over 20 seconds to complete a TLS 1.2 transaction, but even embedded systems from a few years ago will run rings around that.
For retro systems, I still say Gopher is the best fit.
Yes, contrasting this with @jcs’s post, it does look like a dichotomy.
But then, why reinvent the wheel? Instead of implementing a whole new protocol, a more sensible decision would have been to simply develop a modern HTML 3.2 browser without the JS crap. Just freeze the pinnacle of HyperText before the web became the edge of Hell it is today.
See the Gemini FAQ section 2.5
My memories of those days weren’t so halcyon, just table soup.
It’s because the point of Gemini is to be intentionally exclusionary.
I agree about Gemini. The one thing I wish they had done differently is to use much, much simpler crypto for integrity and not bother about confidentiality. Pulling in TLS was a shame, as it missed out on a great opportunity.
So what crypto, and what libraries exist for it in which languages? I ask because the conventional wisdom is not to invent crypto, nor to implement it yourself.
This is like asking highway engineers to maintain a horse-and-buggy lane on the freeway. Extra work, inefficient, and potentially dangerous.
If you want some decrepit machine to hit the internet over HTTP, put it behind your own sslstrip proxy or something.
Highly agree with all points. Also, I had no idea that you could auto-upgrade modern browsers to HTTPS while keeping support for old browsers. That’s really cool!
Another problem of HTTPS is from the site maintainer’s perspective. I don’t primarily serve my personal web site over HTTPS because I don’t want to deal with authority-signed certificates. I would find it really worrying if the availability of my site depended on negotiating with Let’s Encrypt every 90 days. I’d much rather just use HTTP and offer HTTPS over a self-signed certificate, because there’s really very little difference in terms of “security”.
There is a fairly big difference, unless your users are doing certificate pinning. With HTTP, any hop on the network (your ISP, whoever is running the WiFi AP that you’ve connected to, and so on) can passively see what you’re reading and can tamper with it (e.g. inject ads / malware).
With a self-signed certificate, they can’t passively snoop it, but they can trivially MITM the connection by running a proxy that negotiates a TLS session with the server and the client, with their own self-signed cert. There are off-the-shelf devices that do this automatically at line rate. Someone running a malicious AP may use your site to inject malware into connections from your users. Without certificate pinning (which causes problems when you do cert rollover) your users have no way of knowing whether this has happened.
With a Let’s Encrypt cert, only someone who can tamper with Let’s Encrypt’s DNS and/or spoof their route to your server could issue a cert for your server. This isn’t completely infeasible, but it’s definitely not something that a random person with a $200 computer and a wireless access point can do. If you use DNSSEC, you can also publish a CAA record so that no other CAs can issue certs for your domain, which means that someone would need to actively compromise Let’s Encrypt, and intercept and modify packets from your users’ computers to your server, to be able to tamper with your content on the way to your users.
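Concretely, a CAA record is just one more DNS entry. A minimal sketch in zone-file syntax, assuming example.com as the domain and Let’s Encrypt as the only permitted issuer (the iodef line is optional and just names a reporting address):

    example.com.  IN  CAA  0 issue "letsencrypt.org"
    example.com.  IN  CAA  0 iodef "mailto:hostmaster@example.com"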
In terms of admin load, it takes half an hour to set up something like acme.sh and test it. It then runs in a nightly cron job and renews certs a month before they expire. It has to fail 30 days in a row before there are any problems, and the cron emails will scream at you for a while before that happens.
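To give a sense of how little that involves, here is a sketch of the initial setup with acme.sh, assuming the webroot method and nginx (domain and paths are placeholders; acme.sh installs its own daily cron entry when you install it):

    # one-time issuance over HTTP using the webroot method
    acme.sh --issue -d example.com -w /var/www/example.com

    # copy the cert where the web server expects it, and reload nginx on every renewal
    acme.sh --install-cert -d example.com \
        --key-file       /etc/ssl/private/example.com.key \
        --fullchain-file /etc/ssl/certs/example.com.pem \
        --reloadcmd      "systemctl reload nginx"

After that, renewal is entirely hands-off unless the cron job starts failing, which is exactly the 30-days-of-warning-emails scenario described above.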
Doesn’t every browser do this automatically? If you choose to trust a self-signed certificate, then the browser will warn you if it changes.
Edit: Apparently, Firefox trusts addresses instead of certificates. It just ignores certificate errors on trusted addresses, stupidly enough. Couldn’t it simply trust the self-signed certificate itself? Maybe I’m thinking incorrectly here.
Sounds lovely… in all seriousness, though, while it is not a lot of literal work, it is something that I have to worry about that I otherwise wouldn’t need to. It just feels like the survival of my web site is on the line every 90 days or so. Can I trust the automation I set up? Probably. Can I trust it to work unattended for a year? Maybe. Two years?
Nothing else in (my) web server administration is like this. Apache just keeps running, without depending on a third-party company to continually grace me with some certificate.
And then what? It’s good security practice to roll over certificates periodically, and most CAs enforce this with relatively short validity periods (one year is considered a long time for a TLS certificate). If I go to your site and get a new certificate, what should I do?
Yes, quite easily. I moved from ACMEv1 with HTTP challenges to ACMEv2 with DNS challenges a couple of years ago for the extra functionality, but the setup that I created four years ago (which took about half an hour) would still be running today if I hadn’t.
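For reference, the DNS-01 flow is just as hands-off once the provider credentials are in place. A sketch assuming acme.sh and its Cloudflare plugin (other DNS providers have equivalent plugins; the variable names below are specific to that plugin):

    # API credentials for the DNS provider (Cloudflare shown as an example)
    export CF_Token="..."
    export CF_Account_ID="..."

    # issue via the DNS-01 challenge; this is also what makes wildcard certs possible
    acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'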
I don’t really buy this argument. You depend on third-party companies for DNS (if you’re running your own DNS server, at least for the SOA record but also for maintaining the registration), for network connectivity, and so on. What makes attestation of identity any different?
Well, first of all, if I use (authority-certified) HTTPS in addition to DNS, I need to rely on two companies instead of one. One of these is absolutely necessary, so I can’t get rid of it. The other one is not, so I prefer not to burden myself with it.
Second of all, there are fairly big differences. I can pay for DNS and not have to worry about it. It doesn’t require anything on my or my server’s part except money. This is the most important aspect for me, but furthermore, if I’m unsatisfied with one such provider, I can easily switch to another. There are a myriad of DNS providers, while there is only one Let’s Encrypt.
(Also, I want to clarify that I’m arguing from the perspective of the maintainer of a personal web site – not a site on which security is of utmost importance. Such sites should clearly use (authority-certified) HTTPS.)
Could this work as well? Maybe easier: add your domain to the HSTS preload list used by “modern” browsers. Modern browsers will always use HTTPS, while “legacy” browsers will always use HTTP and never go to HTTPS… This of course assumes that browsers that support the preload list are (auto-)updated regularly to support new TLS versions/ciphers as needed.
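In concrete terms, preloading keys off the Strict-Transport-Security header. A minimal nginx sketch of the HTTPS side, with placeholder names and paths (the max-age/includeSubDomains/preload values are what hstspreload.org currently asks for, so verify against their submission requirements, which also include an HTTP-to-HTTPS redirect):

    server {
        listen 443 ssl;
        server_name example.com;
        root /var/www/example.com;

        ssl_certificate     /etc/ssl/certs/example.com.pem;
        ssl_certificate_key /etc/ssl/private/example.com.key;

        # HSTS with the "preload" token is what the preload list looks for;
        # max-age must be at least a year for submission (two years shown here).
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    }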
I recently removed auto-redirection of HTTP to HTTPS from my site, mainly because of the horrible experience of the letsencrypt client breaking and wanting me to install a custom version via snap (on a Debian server). The result was that, until I had the time to fix the issue, my website was totally broken if the browser had remembered the redirect. I nearly wanted to give up HTTPS altogether, but luckily I managed to fix it using a different client (acme.sh).
I don’t like the idea. Malicious ISPs can inject malware into insecure webpages.
That’s not an issue here. The modern browser will use HTTPS because it observed an HSTS header after a redirect, or because the domain is preloaded, so no injection of malware there. The old browser will not support the HTTPS page, but can still attempt HTTP. For your malicious ISP it doesn’t matter whether your webserver answers with a redirect to HTTPS or with content over HTTP; after a successful MITM it will be whatever they want it to be.
It’s true that any ISP that is willing to just serve you a completely different site can do so if you connect over unencrypted HTTP. But for ISPs that aren’t willing to go that far—who are only willing to inject ads or Bitcoin-mining JavaScript into pages that are otherwise the ones you were asking for—that activity will be prevented by upgrading all HTTP connections to HTTPS.
Put another way, there are different levels of malice that are possible from an ISP, and upgrading HTTP to HTTPS won’t defend against all of them but it can defend against some of them.
Maybe it’s time to bring (back?) proxies that accept unencrypted HTTP/1.0 requests, negotiate a modern version of TLS with the destination, and rewrite the HTML to allow for seamless navigation on older browsers.
For occasional web browsing from OS 9, I have Squid running on a local server, acting as an HTTPS proxy. The client still connects over HTTPS, but the Squid server accepts older protocol versions that the destination usually doesn’t.
How do you have Squid configured? Is this using bumping?
Yes, here’s the configuration that I got working. A lot of it is likely redundant.
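(The configuration itself seems to have gone missing above. Purely as an illustration, and not the poster’s actual config, a minimal Squid 4+ ssl_bump setup along those lines might look roughly like this, with placeholder paths and a locally generated CA that the legacy clients are told to trust; it requires a Squid build with OpenSSL support:)

    # Local network that is allowed to use the proxy (adjust to taste)
    acl localnet src 192.168.0.0/16

    # Client-facing proxy port; Squid mints per-host certificates signed by a local CA
    http_port 3128 ssl-bump \
        tls-cert=/etc/squid/local-ca.pem \
        tls-key=/etc/squid/local-ca.key \
        generate-host-certificates=on \
        dynamic_cert_mem_cache_size=4MB

    # Helper that generates those per-host certificates
    sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

    # Bump (decrypt and re-encrypt) everything, so old clients never have to
    # negotiate modern TLS with the destination themselves
    acl step1 at_step SslBump1
    ssl_bump peek step1
    ssl_bump bump all

    # Speak modern TLS to the real destination servers
    tls_outgoing_options min-version=1.2

    http_access allow localnet
    http_access allow localhost
    http_access deny all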
Since legacy software shouldn’t be exposed to the open Internet without at least some protective layer, I think HTTPS-to-HTTP proxies are the preferable option. There are some projects, though they aren’t as easy to use as I had hoped.
A proxy server can also perform some other adjustments to make pages more accessible to legacy browsers, e.g. inject polyfills as needed.
Or, use a period browser that can be taught to forward HTTPS on (disclaimer: my project, previously posted): https://oldvcr.blogspot.com/2020/11/fun-with-crypto-ancienne-tls-for.html
Are there any stats on how many devices used for browsing have hardware that can’t support TLS 1.2? That’s the oldest version of TLS that’s still considered secure.
Can I Use shows good support in software since 2014. https://caniuse.com/tls1-2
Is there a full guide on how to configure an Nginx server like this? I agree with the reasoning but can’t imagine how I would get started. Maybe some sort of Nginx template would help?
The Vary header should always be sent; otherwise, the non-301 (plain HTTP) response could be cached and served to clients that should have been redirected.
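For what it’s worth, here is a minimal nginx sketch of one way to do the conditional upgrade, assuming the approach being discussed is to redirect only browsers that send the Upgrade-Insecure-Requests header (which is what the Vary advice above refers to); domain, paths and cert locations are placeholders:

    server {
        listen 80;
        server_name example.com;
        root /var/www/example.com;

        # Tell caches that the response depends on this request header
        add_header Vary Upgrade-Insecure-Requests;

        # Modern browsers send "Upgrade-Insecure-Requests: 1"; send them to HTTPS.
        # Old browsers never send it and keep getting plain HTTP from this block.
        if ($http_upgrade_insecure_requests = "1") {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name example.com;
        root /var/www/example.com;

        ssl_certificate     /etc/ssl/certs/example.com.pem;
        ssl_certificate_key /etc/ssl/private/example.com.key;
    }

The if/return combination is one of the few uses of nginx’s if that is generally considered safe.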