Pet peeve - this person/utility saying that GMT and UTC are the same. They are not. Edit: the author seems to know the difference; OpenSSL X509 does not.
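For what it’s worth, Python’s ssl module mirrors the OpenSSL habit; a minimal sketch showing that certificate validity strings carry a literal “GMT” suffix yet are interpreted as UTC (the timestamp here is the DST Root CA X3 expiry this thread is about):

    import ssl
    from datetime import datetime, timezone

    # ssl.cert_time_to_seconds requires the literal "GMT" suffix that
    # OpenSSL prints, and interprets the value as UTC.
    ts = ssl.cert_time_to_seconds("Sep 30 14:01:15 2021 GMT")
    print(datetime.fromtimestamp(ts, tz=timezone.utc))
    # -> 2021-09-30 14:01:15+00:00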
Heh, I guess that’s why it’s a peeve; in practical use cases, GMT and UTC tend to be the same. But careful users of time zones lean toward UTC when they want that “U” part.
I do not recommend using GMT in communication. Some people will assume it means “time in London”, which causes problems around summer time, etc. Saying “UTC” avoids this problem.
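A minimal sketch of the failure mode, assuming Python 3.9+ (zoneinfo may also need the tzdata package on some platforms): a Londoner’s “2pm GMT” in summer usually means 2pm BST, an hour away from actual UTC:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    # 14:00 London time on a summer date is BST (UTC+1), not GMT.
    london = datetime(2021, 9, 1, 14, 0, tzinfo=ZoneInfo("Europe/London"))
    print(london.astimezone(timezone.utc))
    # -> 2021-09-01 13:00:00+00:00  (one hour off from "14:00 GMT")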
Top tip: pretend you’re a pilot/sailor and use “Zulu”
My experience is that people in Britain say “GMT” to mean “whatever time it is in Britain right now” - never mind if it’s GMT (UTC+0) or BST (UTC+1).
In my experience that is true everywhere.
When someone says a time in GMT, EST, CST, PST, MST, etc., they always mean whatever that time happens to be where “I” am, and the person on the other side has to figure out what time it actually is.
This is coming from someone who works in support globally (mostly the European and American continents, though) and has to deal with this on a daily basis; I basically need to track standard/daylight time almost everywhere just to figure out when my meetings are.
I started using timeanddate event announcer links annoyingly frequently in hopes that I can get other people to use them instead of just the timezone letters.
This is my experience as well, and it is frustrating. I always write ET, CT, MT, and PT for the main US time zones. It’s fewer letters. Why do folks stick the S in there? Nobody ever seems to stick the D in there!
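The S/D distinction is real, which is why writing plain “ET” and letting the date decide is safer; a quick sketch (Python 3.9+) of the abbreviation flipping across the year:

    from datetime import datetime
    from zoneinfo import ZoneInfo  # Python 3.9+

    ny = ZoneInfo("America/New_York")
    print(datetime(2021, 1, 15, 12, 0, tzinfo=ny).tzname())  # -> EST
    print(datetime(2021, 7, 15, 12, 0, tzinfo=ny).tzname())  # -> EDT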
tl;dr: There’s a list at the end of the article saying which clients will not support the new CA. If those matter to you, you’re in trouble.
Keep everything up to date and renew certificates.
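One way to keep an eye on that, sketched in Python with a placeholder hostname: pull the leaf certificate off a live connection and look at its expiry date:

    import socket
    import ssl

    hostname = "example.com"  # placeholder
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    print(cert["notAfter"])  # e.g. 'Sep 30 14:01:15 2021 GMT'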
4 years… really doesn’t seem like a long time. It’s a bit worrying to me that the group which controls whether people can access the web or not thinks that “people whose devices haven’t been updated in 4 years can’t access the web” is acceptable.
This has been the path all along. It’s the same people who deprecate TLSv1.0 because it’s “not secure”, and who earlier deprecated plain HTTP, also because it’s “not secure”.
It’s especially hypocritical given how many perfectly capable hardware devices (with gigabytes of memory) are being deprecated by these policies. Note how Amazon.com, Google Search, and Mozilla.org themselves all still work in old browsers, yet your blog with pictures of your cat is not allowed to be served over HTTP or even TLSv1.0 because “not secure”.
There’s only one way here: don’t participate in these planned obsolescence experiments. If you own any personal websites, make sure they don’t support HTTPS. If they do have to support HTTPS, make sure TLSv1.0 is still enabled and HTTP doesn’t redirect to HTTPS. The best way is to simply not support HTTPS at all, because otherwise anyone who clicks on any links with an “unsupported” browser will simply get an error message, and likely won’t know that HTTP is still available.
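Whatever you make of that advice, it’s easy to see what a given site actually does; a minimal sketch (placeholder host), using http.client precisely because it doesn’t follow redirects on its own:

    import http.client

    conn = http.client.HTTPConnection("example.com", 80, timeout=10)  # placeholder
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.getheader("Location"))
    # 200 with no Location header: plain HTTP is served as-is.
    # 301/302 with an https:// Location: the redirect is in place.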
Note that the way we got here is the Snowden leaks, which effectively showed that if an adversary can control content that your browser receives, the adversary owns your device. The lesson the industry took from this is that encryption of every cat picture is needed to ensure adversaries can’t tamper with it. The lesson I took from it is that browsers today are too complex to be a robust sandbox. Somehow the industry view is that clients are so unfixable that the only pragmatic solution is to encrypt in transit, which neatly ignores that if an adversary controls any endpoint, the problem is still there. Encryption can only secure your client if you trust the server.
It also seems odd that the whole reason for a browser to exist is to implement a sandbox. If it can’t do that, why not download binary code and execute it from every site that you visit?
So yeah, my site is http-only, even though I know that makes it easier for adversaries to tamper with content and take over clients. But was adding encryption to my site really going to prevent that?
https://grugq.github.io/presentations/COMSEC%20beyond%20encryption.pdf might help you mentally model things. Regardless, encrypting HTTP helps both ends reduce the set of adversaries, even if the certificate is self-signed. TLS man-in-the-middle attacks are not something the kids are taught in school, and mitmproxy is quirky.
Not encrypting HTTP opens the door to the uglier tiers of ad-tech, and everyone is a target for those. You need to be somewhat more “interesting” for national interests to spend analyst/tailored-access budget on you or your site visitors.
This is a very wrong threat analysis, and suggesting the use of self-signed certificates is naive at best. What we were supposed to have is opportunistic encryption, but because of politics, the whole thing was shelved.
HTTPS and SSL also open up a whole extra can of worms beyond mere compatibility issues. They add a huge extra attack vector: without HTTPS, the only way to intercept the traffic is to control the connection between the server and the client; with HTTPS, traffic has been exposed without that requirement (e.g., Heartbleed). How is that more secure for your cat blog if anyone across the world can see what anyone else is reading on it?
Extra info for Erlang / Elixir folks: https://elixirforum.com/t/psa-preventing-outages-due-to-dst-root-ca-expiry-on-sep-30th/42247
Congratulations Let’s Encrypt. I remember when this was first starting out, good times.
Wait, Windows has had automatic root updates since XP? And nobody else does? Why is everything worse than Windows XP?!
I’m not sure what you mean by automatic updates, but ~every Linux distro has a ca-certificates package which gets updated all the time. (And even when a release is EOL, you can use the package from newer versions, since it doesn’t have dependencies.)
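You can see from Python where that bundle ends up; a quick sketch (exact paths vary by distro):

    import ssl

    print(ssl.get_default_verify_paths())
    # e.g. cafile pointing at /etc/ssl/certs/ca-certificates.crt
    # on Debian-derived systems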
Except for Android, I guess, which doesn’t get them updated until you get a whole new image from the manufacturer.
But as stated in the official LE post, you can (and probably should) add the root cert to your app-local trust store (and probably even use cert pinning). That won’t help with browsers, though; I’d guess you just install Firefox for Android (or something like that), since they ship their own complete TLS stack (and certs). Because otherwise you can’t run any modern TLS on Android 4.4, which is a version you may still want to support for many apps…
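The trust-store part of that looks roughly like this in Python (isrg-root-x1.pem is a hypothetical local filename for a root you bundle yourself):

    import ssl

    # Trust only the bundled root instead of the system store.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # hostname checking on by default
    ctx.load_verify_locations(cafile="isrg-root-x1.pem")  # hypothetical path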
FYI: you can still order stuff from Amazon and use Google Search on Android 4.4. Mozilla’s own website works, too, even though they suggest yours shouldn’t.
If your own website and your employer’s website don’t work on Android 4.4 because TLSv1.0 is iNsEcUrE, it sounds like you’ve been sold snake oil!
[Comment removed by author]
Are you saying Amazon, Apple, Google Search and Mozilla are iNsEcUrE? What’s better than the empirical proof?
Windows has automatic root updates?