I don’t know where the author was during the late twenty-teens, but I recall the HTTPS revolution being quite noisy, with “ENCRYPT ALL THE THINGS!” being screamed from the rooftops. Also, does this mean I need HTTPS for connecting to hosts on 127/8 and 192.168/16 networks? I’d either need to expose the internal network or run my own CA (which, if I’m listening to the “security experts” correctly, is something I should not do).
I know, but this month I got both a new job - with crappy WiFi in the office - and a Mac to go with it, so I’m affected. I don’t usually use these networks at home :)
Even if it’s the only thing, it’s still open, whereas now nothing is open. It may be low risk, but compared to zero, it’s a lot.
Another thing though is that someone needs to set that up. I did configure my own bind instance 20 years ago, personally and professionally, but most people don’t have a clue how to set up WiFi semi-securely.
A third thing here is that someone now needs to teach the end users. I’m a professional software engineer and I’ve seen many developers that have no clue how any of this works. They couldn’t configure an nginx without training and time.
Note that I’m not judging the idea of forced HTTPS either way, I’m just pointing out some of the obvious costs.
The DNS server used by the challenge doesn’t have to be on your LAN.
Basically, to host a LAN-accessible service on thing.home.example.com, I do:
1. Get an ACME cert by proving I control the domain with my usual hosted DNS provider
2. Have my LAN DNS return the LAN IP hosting thing.home.example.com
This is of course all automated in my setup. And I actually use a wildcard cert to not leak the list of services I host in the cert transparency logs. The security downside of a wildcard doesn’t matter to me in this case.
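For the curious, here is a sketch of what that automation can look like. It assumes the lego ACME client and DNS hosted at Cloudflare - both are just example choices; any DNS-01-capable client and provider would do, and the token value is elided.

```shell
# Hypothetical sketch: issue a wildcard cert for *.home.example.com via the
# ACME DNS-01 challenge. Nothing on the LAN is exposed; lego publishes a TXT
# record at the hosted DNS provider to prove domain control, and the wildcard
# keeps individual service hostnames out of Certificate Transparency logs.
export CLOUDFLARE_DNS_API_TOKEN='<token>'   # elided credential
lego --email admin@example.com \
     --dns cloudflare \
     --domains '*.home.example.com' \
     run
```

A cron job or systemd timer re-running this (with `renew` instead of `run`) completes the automation.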
There’s a place for https and a place for non-encrypted http. The author is being very simplistic.
I really don’t care if someone’s blog is being MITM’d - it’s a blog - and I’d rather avoid the delays that come from setting up and verifying the encrypted session. If the blog author cares, they can choose to use https.
Everything on my side of my firewall is under my control. https, no thanks - way too much hassle for no benefit.
But something from a commercial website “out there” - of course.
Use the right tool for the job. Which means you’ve got to have multiple tools/options and understand the difference.
I do care! I don’t want scummy ISPs to inject ads into the blog I’m viewing. Which is 100% a thing that happens to non-HTTPS traffic: https://www.privateinternetaccess.com/blog/comcast-still-uses-mitm-javascript-injection-serve-unwanted-ads-messages/
This one is about Comcast, but I’m in Belgium and Proximus (the ISP monopolist) is pulling the exact same shit.
HTTPS doesn’t just encrypt, more importantly it authenticates. And there is basically nothing on the web where I’m fine with MITMs messing with the content.
Does the original author care, if they’re posting on Medium? Is the content free from popups/up-sells/surveillance? What did TLS accomplish for their content?
Anything that transits through an untrusted network should be encrypted. Including random blogs. Plain and simple - or rather, ciphered and simple.
There’s no reason to expose data that doesn’t need to be, it’s basic privacy.
Security is also impossible without encrypted connections.
On unprotected WiFi, anyone in close range can hijack your connections and potentially compromise you (RCE via Javascript for example).
This is a real risk, especially in places like hotels and airports.
There’s no reason to expose data that doesn’t need to be, it’s basic privacy.
the reason given was set-up delays. other reasons would be potential outages in the certificate infrastructure, the labor required to set up and maintain certificates, and support for devices that can’t run modern ciphers.
doesn’t it depend on the content? are you wary of reading a book with the cover in view of a security camera?
your statements seem to be in line with the idea that encryption should be available and people should usually choose to use it. but I don’t see them as reasons for removing support for unencrypted HTTP.
That’s a false equivalency, IRL things are not trivially used to build a profile.
The only context that I think matters is if the data goes through untrusted networks, which is always the case for a website.
I do see those as reasons to remove HTTP: see paragraph 2 in this comment
That’s a false equivalency, IRL things are not trivially used to build a profile.
I’m surprised that you are so confident in this, but also confident that unencrypted HTTP traffic is used for such purposes without any user agreement. can you elaborate?
I do see those as reasons to remove HTTP: see paragraph 2 in this comment
so you are not advocating for HTTP to be disabled at the browser or OS level. are you just saying that websites should stop serving it?
Digital surveillance is so much more common because it’s easier, I’m not going to elaborate more, this seems pretty well documented online.
so you are not advocating for HTTP to be disabled at the browser or OS level.
No, I’m not advocating for what the article does, though I hope we can get there someday and would advocate for everyone to configure their own devices as such if they can.
are you just saying that websites should stop serving it?
Yes, I’m not saying anything more than what was in my original comment: “anything that transits through an untrusted network should be encrypted.”
Digital surveillance is so much more common because it’s easier, I’m not going to elaborate more, this seems pretty well documented online.
thanks for the clarification. I wasn’t sure if “not trivially used to build a profile” meant you thought it flat out didn’t happen. the fact that it’s less common makes a lot of sense, but it’s an interesting feature of (what seems to be) your threat model that the uncommonness means it can be ignored.
are you just saying that websites should stop serving it?
Yes, I’m not saying anything more than what was in my original comment: “anything that transits through an untrusted network should be encrypted.”
that technically is saying more to be fair. the original comment is compatible with a site serving both HTTP and HTTPS, and users only accessing the HTTP site if they trust the network; advocating for sites not to serve HTTP is a bit stronger.
your threat model that the uncommonness means it can be ignored
I don’t believe I said that.
you said “the only context that I think matters is if the data goes through untrusted networks,” implying that reading a book in front of a security camera is a context that doesn’t matter. is there some other reason besides the supposed uncommonness of organizations building profiles based on security footage?
the original comment is compatible with a site serving both HTTP and HTTPS
No, the internet is a bunch of untrusted networks in a trenchcoat.
I don’t follow. since we’re talking about what other people should do, how can we presume their trust model?
You can’t trust a black box that can change at any moment. Via BGP hijacks for instance.
I stopped replying last time because it honestly didn’t feel like you were engaging in good faith. I came back to the thread due to someone else replying to me lower.
I won’t reply further.
you definitely can’t derive your position from your original comment. the idea that traffic “should” be encrypted doesn’t imply that it should be enforced by servers but not (yet) by client software, or that networks that are on the Internet can’t be trusted or that HTTPS is the only reasonable way to encrypt traffic.
Anything that transits through an untrusted network should be encrypted
If you want to argue for universal security (and there is a case for it) then http/s is not the place to do the encrypting. There is plenty of network traffic more valuable than most websites, e.g. remote login and DNS. If you’re arguing for universal security, it should be much lower in the network, maybe at the TCP/UDP level, or below that.
But I was also noting that not all traffic goes through untrusted networks, and turning on https-everywhere will be a problem for systems running on trusted networks.
I still think “right tool for the job”. It should be up to the owner and the user of a service when to enable that kind of security.
I’m not against having secure transports, but application level is the practical solution we have at the moment so we should use it.
Also, nowadays, HTTPS is basically that secure transport.
I’m explicitly not arguing the point the article makes. Just for untrusted networks, which is still the vast majority of most people’s connections given they are predominantly carried over the internet. I don’t care if you don’t want to encrypt your LAN, but do if you are one of the random HTTP blog owners.
Correct me if I’m wrong, but isn’t the reason for doing encryption at the application layer due to it being a lot easier to make changes at the application level than at transport or lower?
Anything that transits through an untrusted network should be encrypted.
Really, what one definitely requires is authentication, not necessarily confidentiality. I want to know that the resource I received is the one that the server sent.
Regardless, HTTPS is not the only way to achieve authentication or confidentiality. I could, for example, access a remote resource using Wireguard or IPsec.
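For instance, a minimal WireGuard peer configuration (all keys and addresses below are placeholders) gives authenticated, encrypted transport below the application layer, so plain HTTP inside the tunnel needs no TLS of its own:

```ini
# Client-side sketch; <...> values are placeholders, not real keys.
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.1/32        ; route only the server's tunnel IP
```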
I could’ve written “should be authenticated and encrypted” but I don’t think it’s useful to argue just for authentication.
Why would you risk sending sensitive data through an untrusted network by using only authentication? Especially because once you have cryptographic authentication, encryption is a relatively small extra step.
Sensitive data is also not uniform and the owner of the random blog can’t decide on their own if the content is sensitive. You never know what a government or abusive person will consider punishable, now or in the future. Abortion in the US is the obvious example.
IMO you owe it to your users to protect them to a reasonable degree, and encryption on a public network is a reasonable baseline.
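To illustrate the distinction being argued here, a stdlib sketch of authentication without confidentiality, using an HMAC with an assumed pre-shared key (key distribution, conveniently ignored here, is the hard part that TLS’s handshake actually solves):

```python
import hashlib
import hmac

KEY = b"shared-secret"  # assumption: both ends already hold this key

def tag(message: bytes) -> bytes:
    """Authentication only: the receiver can detect tampering, but the
    message itself still crosses the wire in the clear."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    # constant-time comparison avoids leaking the tag byte by byte
    return hmac.compare_digest(tag(message), mac)

page = b"<p>a random blog post</p>"
mac = tag(page)
assert verify(page, mac)                  # untouched content verifies
assert not verify(page + b"<ad/>", mac)   # an injected ad is caught
```

Adding confidentiality on top of this really is a small step - an AEAD cipher does both in one pass - which is why arguing for authentication alone buys little.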
An HTTPS-only world is a world ruled by a handful of completely centralized certificate authorities, where the only usable protocols are tied to a neverending treadmill of new cipher suites that consume ever more compute resources and rest on ever more gigantic codebases that are never properly audited; where older or low-power devices cannot communicate at all by design.
Vendors for commercial operating systems, ad-driven web browser ecosystems, mobile devices with intended lifespans measured in quarters, and security consulting services collectively salivate at this opportunity for inexorable planned obsolescence and complexity growth, and someday they will doubtless get their way. Don’t kid yourself for a moment that this will mark a joyous new era of consumer empowerment and peace; it will be another ratchet toward making the entire web a walled garden.
For apps, there is no requirement that TLS use the current centralized PKI. It’s fine if your server has a self-signed key and your app uses cert pinning, for example. I’ve used TLS to implement P2P apps.
Newer asymmetrical ciphers tend to be more efficient, viz. Curve25519 vs RSA. Symmetric ciphers getting more expensive is mostly a result of needing a higher level of security as CPU speeds increase. (And hardware acceleration of AES is pretty ubiquitous today.)
There are non-gigantic TLS implementations. BearSSL’s README says “a minimal server implementation may fit in about 20 kilobytes of compiled code and 25 kilobytes of RAM.”
Every low-power embedded CPU with IP support that I know of supports TLS and has it available in its standard library, even the ESP8266s in the light bulbs in my house.
Given that I see posts here about people building networked apps on MacOS 9 and OS/2, I’m not sure which “older devices” are locked out of TLS.
I see posts here about people building networked apps on MacOS 9 and OS/2, I’m not sure which “older devices” are locked out of TLS.
“Locked out” might be a strong term, but “very difficult” isn’t. These devices have the RAM and CPU to perform modern TLS, but they need modern software to implement it, which is not going to come from vendors. People - typically those on lobste.rs - end up with impressive workarounds like Crypto Ancienne, which works by having the browser talk plain HTTP to a local proxy that performs TLS upstream (meaning the browser can’t do any integrity checks). That in turn hits a bootstrapping problem, because users need to download the latest security suite somehow. Unencrypted connections work out of the box, but encrypted connections require users to constantly be on the hunt for this year’s best hack.
At some point the choice is between ensuring that network operators don’t know which specific blog pages my readers read, or allowing them to be read on any old system out of the box. Networks are going to know the user accessed my blog via DNS and IP.
Ok, but I am mystified by the importance applied here to retro systems people run as a hobby. It’s not like the people running them don’t have access to, say, a cheap PC.
That’s fine, but I think it’s Internet_Janitor’s point in a nutshell. If you’re okay buying a relatively new device that is required to access the Internet, fine; just note that it’s giving the vendors of that device a lot of leverage, because you won’t be able to participate in the global network without accepting the terms they impose on you.
The question is when it’s okay to impose such a requirement on others, even if those are terms you would willingly accept.
I have no idea what this means. Are the terms “you must use TLS”? How is that different than “you must use TCP”?
I mean things like “the latest version of our product requires you to sign-in with an email address which we will use to track your local computing activity in a personally identifiable way”, something which all big tech vendors are currently pushing but didn’t exist ~10 years back. What they push will change year to year, but they’ll always have something to push. The ability of users to resist these pushes is in direct proportion to how many alternative options they have.
not much “importance” is required to outweigh the supposed benefit of forcing encryption rather than just supporting it and using it by default.
most sites that people visit are for leisure anyway, so I guess the importance of hobbyist enjoyment would be ranked similar to the ability to access a site like lobsters.
there’s also the other side of the equation, where potentially important materials may be only accessible on the web via plain HTTP or FTP.
Asking users to decide whether to “trust” a site is both unrealistic and unfair, as most lack the technical background to make such judgments. And we can all fall for a scam.
I don’t get this, scammers can get certs just as easily as anyone else, or host on suborned https systems. There’s all sorts of mechanisms for flagging dangerous sites and it’s almost entirely orthogonal to how you connect to them.
I object to the use of the word “secure” in a lot of contexts. I know I’m kinda wrong about that, but I hold to it anyway.
I see “secure” as a description, but it is mostly used as an aspiration. For example, my kid’s school has a sign on the door saying “this is a secure building, call the office to be let in”. I know what they mean when they say that, but does having to call the office to be let in make the building secure? I don’t really think so. (And of course, getting in without calling the office isn’t exactly difficult anyway.)
Similarly, browsers giving you a little padlock icon to contrast with “not secure” kinda implies “this is a secure website”. Sure, if you click the lock, it’ll say “connection secure” rather than “website secure” but that’s a fairly subtle distinction that I know that a lot of users don’t realize at all. “Yes, I gave them my personal info, but its ok because it had that lock icon” is something I heard from someone irl not long ago at all. Just because the https cert verifies you’re talking to thieves dot com doesn’t mean it is a good idea to volunteer information to thieves dot com.
Even this blog says things like “Most relied on SSL solely to secure usernames and passwords during login”, which makes enough sense when you’re the website author, but if you are the user, is your password secure due to SSL? Probably not by any meaningful definition, you might be on the wrong website and giving it away to a phisher, you might be on the right website but they store it incorrectly on their end, etc, etc, etc.
And yeah, I know, my definition of “secure” as a description is pretty much impossible to verify in real life. But I don’t think https even comes close.
Does that mean you just give up? Eh, I think https is overrated and am not a fan of “https everywhere” in the slightest - I’d be fairly cross if browsers decided to just ban regular http. But do I think it is useless? Of course not. But what’s next for the lock icon? Is it possible for it to ever really be something reliable to the untrained user? I’m kinda skeptical. I kinda feel like the browser randomly asking “what website do you think you’re on?” would work better: when the user says “paypal”, the browser can say “actually you’re talking to liars dot com”. But without some kind of verification of user intent, all these certificate ownership signatures I fear are giving a false sense of security which can easily be more harmful than nothing.
Sounds great, except I don’t want my browser to stop me from connecting to devices inside my threat perimeter. It’s a major fucking pain in the ass to connect to older hardware on my own fucking network. No one seems to get that.
I think the idea is that there will be fewer glitches, since the very short time period will force automation (and you’ll learn very quickly if your automation breaks). If a site lasts more than a week, then, it’s likely to last indefinitely.
I did this for a short period of time. And then I set up automation, but at some point something changed that broke my automation, and since it only ran once every 90 days it took me more than a year to actually sit down and fix it properly.
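A cheap guard against that failure mode is monitoring cert expiry independently of the renewal automation. A sketch with hypothetical names, parsing the notAfter format that Python’s ssl.getpeercert() returns:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    # not_after is in the format ssl.getpeercert() uses, e.g.
    # 'Jun  1 12:00:00 2025 GMT'; cert_time_to_seconds handles parsing it.
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                     tz=timezone.utc)
    return (expires - now).days

# With 90-day certs, alerting at 30 days remaining leaves multiple missed
# renewal windows' worth of slack before anything actually breaks.
now = datetime(2025, 5, 20, tzinfo=timezone.utc)
remaining = days_until_expiry("Jun  1 12:00:00 2025 GMT", now)
needs_attention = remaining < 30
```

Run from cron against each live endpoint, this surfaces broken automation in days rather than a year later.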
I don’t see what OSes have anything to do with this.
Forcing HTTPS == allowing a few large corporations to decide who is and isn’t blessed to communicate over encrypted lines.
This article feels like the author just had an IRL fight with someone on the topic and needed to vent. I don’t buy nor even really follow the arguments.
Well, just no - this won’t fly, and the list of reasons is too long to write here. That said, there is a use case for serving authenticated content over plain HTTP - where the signature and metadata can be conveyed through a standardized HTTP header. This way, a lot of the content standardization bodies like W3C etc. won’t break.
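A toy version of that idea, with a made-up header name and an HMAC standing in for a real signature - an actual proposal would use asymmetric signatures along the lines of RFC 9421 HTTP Message Signatures, since readers cannot share the publisher’s secret key:

```python
import base64
import hashlib
import hmac

PUBLISHER_KEY = b"demo-key"  # stand-in; a real scheme needs a public-key signature

def signature_header(body: bytes) -> str:
    """Build a hypothetical 'X-Content-Signature' header so integrity
    metadata can ride along with content served over plain HTTP."""
    mac = hmac.new(PUBLISHER_KEY, body, hashlib.sha256).digest()
    return "X-Content-Signature: hmac-sha256=" + base64.b64encode(mac).decode()

body = b"<html>mirrorable, cacheable content</html>"
header = signature_header(body)
assert header.startswith("X-Content-Signature: hmac-sha256=")
```

The appeal is that intermediaries and mirrors can cache or relay the bytes freely while clients still verify them end to end.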
No. Hard no. HTTPS requires CAs. I do not and should not require a third party’s assent in order to access one of my machines from another of my machines.
20% of page loads is a lot. The US numbers are meaningless in this discussion IMO, unless your worldview ignores the majority of the planet.
I’m a big fan of using HTTPS for everything, but just like IPv6, it’s a fool’s errand to think we can or should mandate it.
For context, browsers usually don’t remove/deprecate features as long as they are above the 0.1% threshold.
Boblord used to work at CISA (I don’t know the current situation). He is paid not to care about anything outside the US.
I just hope that Apple doesn’t remove http support from Safari until they fix guest network sign-in. Thanks to whoever runs neverssl.
That’s an issue on every OS. I used NeverSSL several times this week.
You can use ACME DNS validation (or expose only what’s needed for HTTP validation, only when needed, but that’s less easy).
That is still exposing my internal network.
How so? With ACME DNS-01 the only thing you need to expose is a DNS server. Which you can easily put outside your internal network.
Let me rephrase that: from a privacy standpoint, you should always encrypt everything.
I don’t deny there’s cost associated with encryption but I believe the privacy and security costs are orders of magnitude more important.
That’s not an argument to not encrypt, and I didn’t say that.
https://lobste.rs/s/oxkl4d/open_letter_browser_os_makers#c_ik2bnp
Bullshit.
How “relatively new” does a device have to be to make a TLS connection? 20 years old?
I have no idea what this means. Are the terms “you must use TLS”? How is that different than “you must use TCP”?
my Symbian phones from ~2009 and BB10 phones from ~2013 have TLS but can’t connect to most HTTPS-only sites.
I mean things like “the latest version of our product requires you to sign-in with an email address which we will use to track your local computing activity in a personally identifiable way”, something which all big tech vendors are currently pushing but didn’t exist ~10 years back. What they push will change year to year, but they’ll always have something to push. The ability of users to resist these pushes is in direct proportion to how many alternative options they have.
not much “importance” is required to outweigh the supposed benefit of forcing encryption rather than just supporting it and using it by default.
most sites that people visit are for leisure anyway, so I guess the importance of hobbyist enjoyment would be ranked similar to the ability to access a site like lobsters.
there’s also the other side of the equation, where potentially important materials may be only accessible on the web via plain HTTP or FTP.
THIS! TY VERY MUCH. I wish I could give awards or upvote 100 times.
This would be great if the entire free-as-in-beer internet didn’t rely on a single company (Let’s Encrypt) for its entire cryptography ecosystem.
Also don’t do such events on holidays, only Perl/Raku developers should be so masochistic.
I don’t get this, scammers can get certs just as easily as anyone else, or host on suborned https systems. There’s all sorts of mechanisms for flagging dangerous sites and it’s almost entirely orthogonal to how you connect to them.
I object to the use of the word “secure” in a lot of contexts. I know I’m kinda wrong about that, but I hold to it anyway.
I see “secure” as a description, but it is mostly used as an aspiration. For example, my kid’s school has a sign on the door saying “this is a secure building, call the office to be let in”. I know what they mean when they say that, but does having to call the office to be let in make the building secure? I don’t really think so. (And of course, getting in without calling the office isn’t exactly difficult anyway.)
Similarly, browsers giving you a little padlock icon to contrast with “not secure” kinda implies “this is a secure website”. Sure, if you click the lock, it’ll say “connection secure” rather than “website secure”, but that’s a fairly subtle distinction that a lot of users don’t register at all. “Yes, I gave them my personal info, but it’s ok because it had that lock icon” is something I heard from someone irl not long ago at all. Just because the https cert verifies you’re talking to thieves dot com doesn’t mean it is a good idea to volunteer information to thieves dot com.
Even this blog says things like “Most relied on SSL solely to secure usernames and passwords during login”, which makes enough sense when you’re the website author - but if you are the user, is your password secure due to SSL? Probably not by any meaningful definition: you might be on the wrong website and giving it away to a phisher, or you might be on the right website but they store it insecurely on their end, etc, etc.
And yeah, I know, my definition of “secure” as a description is pretty much impossible to verify in real life. But I don’t think https even comes close.
Does that mean you just give up? Eh, I think https is overrated and am not a fan of “https everywhere” in the slightest - I’d be fairly cross if browsers decided to just ban regular http. But do I think it is useless? Of course not. But what’s next for the lock icon? Is it possible for it to ever really be something reliable to the untrained user? I’m kinda skeptical. I kinda like the idea of the browser randomly asking “what website do you think you’re on?” so that, when the user says “paypal”, it can say “actually you’re talking to liars dot com”. But without some kind of verification of user intent, I fear all these certificate ownership signatures are giving a false sense of security which can easily be more harmful than nothing.
Sounds great, except I don’t want my browser to stop me from connecting to devices inside my threat perimeter. It’s a major fucking pain in the ass to connect to older hardware on my own fucking network. No one seems to get that.
A good argument in favor of supporting https is NOT an argument for banning http.
For instance, I use plain http all the time as a web developer who tests out servers before putting them behind SSL.
I don’t think anyone wants to ban localhost :)
When I read that LE was going to six-day certs, my first thought was that sites would go back to http://.
(you can argue that they shouldn’t, I’m predicting that at the first or second glitch they will)
I think the idea is that there will be fewer glitches, since the very short period will force automation (and you’ll learn very quickly if your automation breaks). If a site survives more than a week, then it’s likely to survive indefinitely.
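The kind of automation this forces can be as small as a monitored expiry check. A sketch, assuming a hypothetical renewal threshold and using certbot purely as an example renewal hook:

```python
import datetime
import socket
import ssl
import subprocess

RENEW_BELOW_DAYS = 3  # with six-day certs, renew well before expiry

def days_until_expiry(not_after: str) -> float:
    """Parse the notAfter string from ssl.getpeercert(),
    e.g. 'Jun  1 12:00:00 2025 GMT', and return days remaining."""
    expires = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires - datetime.datetime.utcnow()).total_seconds() / 86400

def check_and_renew(host: str) -> None:
    # Fetch the live certificate the way any client would see it.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_name=host) if False else ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    if days_until_expiry(not_after) < RENEW_BELOW_DAYS:
        # Hypothetical renewal hook; substitute your ACME client of choice.
        subprocess.run(["certbot", "renew"], check=True)
```

Run from cron or a systemd timer, a failing check surfaces broken automation within days instead of the 90-day cycle that lets it rot unnoticed.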
You mean there are people who are updating manually every 90 days?
I did this for a short period of time. Then I set up automation, but at some point something changed that broke it - and since renewal was only once every 90 days, it took me more than a year to actually sit down and fix it properly.
Yes
This article feels like the author just had an IRL fight with someone on the topic and needed to vent. I don’t buy nor even really follow the arguments.
Well, just no - this won’t fly, and the list of reasons is too long to write here. That said, there is a use case for serving authenticated content over plain HTTP - where the signature and metadata can be conveyed through a standardized HTTP header. This way, a lot of what content standardization bodies like the W3C have built won’t break.
Remember https://www.w3.org/Provider/Style/URI - it includes the scheme component.
Update: forgot that https://datatracker.ietf.org/doc/rfc9421/ is now a thing.
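To illustrate the shape of the idea - not RFC 9421’s actual wire format, which uses structured `Signature`/`Signature-Input` fields and typically asymmetric keys - a toy sketch with an HMAC over the body, carried in a hypothetical header name:

```python
import hashlib
import hmac

# Hypothetical shared key; a real RFC 9421 deployment would normally use
# an asymmetric signature so readers can verify without holding the key.
KEY = b"demo-key"

def sign_body(body: bytes, key: bytes = KEY) -> dict:
    """Return headers conveying a signature over the response body."""
    digest = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"X-Content-Signature": f"hmac-sha256={digest}"}  # hypothetical header

def verify_body(body: bytes, headers: dict, key: bytes = KEY) -> bool:
    """Check the body against the signature header, in constant time."""
    value = headers.get("X-Content-Signature", "")
    prefix = "hmac-sha256="
    if not value.startswith(prefix):
        return False
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(value[len(prefix):], expected)
```

The point is that integrity (did the network tamper with this page?) can be decoupled from confidentiality (can the network read it?), which is all some plain-HTTP use cases actually need.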
No. Hard no. HTTPS requires CAs. I do not and should not require a third party’s assent in order to access one of my machines from another of my machines.
“Open letter”. Just a blog post from one person.
That’s what open letters are nowadays.
I suppose a “blog post” is for your readers. An open letter is directed to specific targets.