Look, I’m not trying to be that person, but I’m really not seeing why this is worth the effort. In a world where certificates were valid for 5 years and CRLs/OCSP didn’t really work that well, sure, 90 days made sense. It encouraged everyone to switch from manually purchasing and installing certificates to getting them automatically generated, which is great. The 90 days felt a little like security theater, but ok, fine, I get it.
This doesn’t seem like it provides any security value, but it will become an expectation enforced by mindless security audits. If my process for generating and installing the certificate is flawed (compromised CI/CD workflow, compromised server, etc.), this doesn’t do much. But it does mean that if something goes wrong with the automated process that generates the new certificate, I don’t have a lot of time to fix it.
All this does is create another constantly running process where I need to hit an external API maintained mostly by volunteers and charitable donations. If that goes wrong, or if they have a serious problem, the amount of time I have to either wait or find another solution drops from maybe a month to “oh well, Sarah does the SSL stuff and she’s on vacation”. It’s like a Debian package mirror becoming business critical.
I was never really convinced every single website on the internet needs SSL to begin with, much less an SSL certificate that needs to rotate on a regular basis, but fine, that’s where we ended up and it mostly works. This just seems to further raise the bar on everyone to be constantly swapping out certificates (or providing some justification for why they aren’t), even though the supposed security threat of 90-day certificates doesn’t seem to have materialized at all.
At work we have tasks with 24-hour response times, one-week response times, and 90-day closure times; this essentially upgrades an SSL issue from “we’ll get to that some day” to a page. I’m not sure I like it from a process point of view.
A 1-week response time with a 30-day closure SLO sounds good to me. That way the weekly system changes get caught right away and prioritized, but I don’t have to wake anyone up in the middle of the night to roll back and make the next rollout twice as big.
Sounds like “works on my machine”, except here it’s “works for my work requirements/environment”. Good for you, but try to think outside of your box.
I think you misunderstood the comment you replied to? It’s arguing against short lifetimes. And I can’t think of an “outside the box” case where short certificate lifetimes are really important and the current state of 90 days is not tolerable. (EDIT: now seeing your other comments in this discussion, it’s clear that’s not what you meant, so yes, clearly a misunderstanding.)
You are right, I should not reply before having a coffee…
I, too, felt that SSL was overkill for the type of site I run (basically my own blog). Then one of my entries hit the Orange Site, and I swear over half of the comments were about the lack of SSL on my site (and basically how that’s stupid and I was a bad person for not using SSL, sigh). What possible rationalization is there for a 6-day certificate, much less the 45 days being pushed on us now?
I was in the same boat. But then I started to distribute source code, binaries and Greasemonkey/Tampermonkey plugins and it became clear that SSL is no longer just a superfluous luxury. I have to say, Let’s Encrypt works very well. I haven’t even touched it for years and it just seems to work.
HTTPS-everywhere is about removing MITM entry points, not about protecting contents of individual websites.
People with an insecure “just a blog” don’t understand that their URLs are an attack vector merely by existing, because they force browsers to relax network security; whether the site itself matters is beside the point.
I’m now using Let’s Encrypt via Apache’s mod_md and yes, it works flawlessly (when I don’t make typos). I just question why rotate certificates so quickly? Why not daily? Hourly?

I would imagine that this makes you susceptible to outages when a downstream service is down. But a downstream service being down for 6 days is really unlikely. Maybe?
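For reference, a minimal sketch of what that mod_md setup can look like (directive names are from Apache’s mod_md module; the domain and contact email are placeholders):

```
# Hedged sketch of a minimal mod_md setup; assumes mod_md, mod_ssl and
# mod_watchdog are loaded. Domain and email are placeholders.
MDomain example.com
# Accept the CA's terms of service (required before issuance)
MDCertificateAgreement accepted
# ACME account contact
MDContactEmail admin@example.com

<VirtualHost *:443>
    ServerName example.com
    # mod_md supplies the certificate; no SSLCertificateFile needed
    SSLEngine on
</VirtualHost>
```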
Every website benefits from the enhanced privacy and security of HTTPS. Malicious ISPs/governments can’t modify or filter the content of your blog on the fly, inject malicious tracking or crypto-mining scripts, etc. Same goes for some black hat in a coffee shop on a shared WiFi network. I totally understand (and even admire) the gut impulse to forgo complexity when it doesn’t provide value, but I think HTTPS is always worth the complexity.
I can buy the ISP angle, but not the government. Can’t a government just force their CA onto browsers (“If you want to distribute your browser in our country, Mr. Google, you must support our CA”) and MITM anyway?
Sure, but an enterprising user can still acquire a browser that hasn’t been backdoored by their government, and access HTTPS sites safely. There’s no such recourse if the site doesn’t support HTTPS to begin with.
In practice, not really. It can be detected and shut down (basically preventing access altogether), and it can be made illegal (similar to how VPNs are already illegal in China) and come with high risk.
The only thing this does is ensure that if you get a connection, you can be sure that only those in control of the certificates in your browser (version) can tamper with it; everyone else can “just” shut it down. Which is a significant advantage indeed.
I think for the sake of net neutrality it’s really important that ISPs and governments not be CAs, but obviously, yeah, I’m not in charge and it’s not a perfect system. Still, doing what you can to ensure that your independent corner of the web reaches browsers without tampering is a good thing imo.
One more reason why I have started to push back against these types of “security measures”. People forget that “availability” is part of security, and it’s just a matter of time until we see this being abused.
E.g., as you say, someone manages to sabotage the process by which new certificates are generated: what happens then? And imagine that happens on Let’s Encrypt’s side. Now think of browsers that don’t even allow you to bypass that check anymore. Scary.
FWIW, I have concerns about this development, and I have explained them here: https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/_335unOyteQ
tl;dr: by massively increasing the number of certs, we make it harder for people to check certs for irregularities via Certificate Transparency, and there’s a point where we should ask whether even shorter cert lifetimes are worth the downsides.
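To make the Certificate Transparency point concrete, here’s a rough sketch of the kind of per-domain lookup people run today, using crt.sh’s unofficial JSON endpoint; shorter lifetimes multiply the number of entries such a query returns:

```python
# Rough sketch: list recent CT entries for a domain via crt.sh's
# unofficial JSON endpoint (best-effort service, may be slow or
# rate-limited; field names are as returned at the time of writing).
import json
import urllib.request

domain = "example.com"  # placeholder
url = f"https://crt.sh/?q={domain}&output=json"

with urllib.request.urlopen(url, timeout=30) as resp:
    entries = json.load(resp)

# Each issuance shows up as one (pre)certificate entry; shorter
# lifetimes mean many more rows to sift through for irregularities.
for e in entries[:10]:
    print(e["not_before"], e["not_after"], e["common_name"])
```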
The obvious end stage of this trend is to have a centralized authority that verifies every transaction independently, which will:

- create a SPOF for the entire Net
- enable per-interaction tracking
- enable per-site censorship at a scale never before seen
- increase the cost of every interaction
I imagine CloudFlare, Google and Amazon are all salivating. I’m not.
(And it was predicted by Daniel Keys Moran in his 1989 novel The Long Run – which ended up being about enabling a revolution by attacking the global key verification infrastructure.)
Markets with fewer than seven suppliers are unstable and prone to collusion and corruption. Let’s Encrypt is now too big. It would serve the public interest more to split it into a dozen competing/cooperating structures.
I’m literally having trouble thinking of a market in tech that has even seven meaningful competitors. Ugh.
Exactly. Websites without encryption must keep working for that reason.
The ACME protocol allows specifying a desired (server-clamped) expiry date. I’ve used this with non-LE ACME servers to get shorter expiry. I’m hoping this news means LE will embrace this feature, not that they will just make a new endpoint that does 6-day certificates.
https://community.letsencrypt.org/t/notbefore-and-notafter-are-not-supported/54712/3
Though I can understand if supporting a custom not-after means more parsing on the server side, which could prove risky.
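For reference, the mechanism in question is the optional notBefore/notAfter fields of an RFC 8555 new-order request. A sketch of just the JSON body; a real ACME client wraps this in a signed JWS, and the CA is free to clamp or reject the values (the linked thread shows Let’s Encrypt rejecting them):

```python
# Sketch of an RFC 8555 new-order body asking for a shorter lifetime.
# A real ACME client wraps this payload in a signed JWS and POSTs it to
# the CA's newOrder URL; honoring notBefore/notAfter is CA policy.
import json
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
payload = {
    "identifiers": [{"type": "dns", "value": "example.com"}],
    "notBefore": now.isoformat(),  # RFC 3339 timestamps
    "notAfter": (now + timedelta(days=6)).isoformat(),
}
print(json.dumps(payload, indent=2))
```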
I kinda want really short certs because it’ll cause me to find out if I accidentally messed up renewal quickly instead of in 90 days.
I find out I messed up renewal on Monday morning when I read the email from cron telling me how it went. Super excited to upgrade that chore from once every few weeks to a few times every week.
Put it in the subject line. Previous cert validity X, new cert validity Y. Or just X -> Y.
You won’t have to open the email anymore. You’ll know if it succeeded or failed. And you’ll know if the expiry is coming up close enough for you to care.
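As a rough sketch of that idea, using only Python’s standard library (cron mails whatever the job prints, so this one line can serve as the subject; the host is a placeholder and assumed to be directly reachable over TLS):

```python
# Rough sketch: print a one-line certificate status for a cron mail,
# e.g. "example.com: 28d left (expires May 30 12:00:00 2025 GMT)".
import socket
import ssl
import time

HOST, PORT = "example.com", 443  # placeholder host

ctx = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()  # peer certificate as a dict

# getpeercert() dates look like "May 30 12:00:00 2025 GMT"
expires = ssl.cert_time_to_seconds(cert["notAfter"])
days_left = int((expires - time.time()) // 86400)
print(f"{HOST}: {days_left}d left (expires {cert['notAfter']})")
```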
This can be done by renewing sooner rather than making the full lifecycle shorter.
Basically right now I have a 90d cert which effectively has two phases:

- 60d: up to date and in use
- 30d: imminent renewal, or something is wrong
I don’t really care about that 60d period, but I do care about that 30d period as that allows me to deal with the problem when I get back from vacation rather than now.
So 31d certificates would be fine by me if they were renewed daily. That way I will find out within a day if I messed up renewal.
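That policy is easy to mechanize. A sketch, assuming the third-party cryptography package and a placeholder certificate path; the 30d threshold is the “imminent renewal” phase described above:

```python
# Sketch of the "renew well before expiry" policy: treat the last N days
# of validity as the alarm window and renew on entering it.
from datetime import datetime, timezone

from cryptography import x509  # pip install cryptography

CERT_PATH = "/etc/ssl/example.com.pem"  # placeholder path
ALARM_WINDOW_DAYS = 30  # the "imminent renewal" phase

with open(CERT_PATH, "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# not_valid_after_utc needs cryptography >= 42; older versions expose
# the naive not_valid_after instead.
days_left = (cert.not_valid_after_utc - datetime.now(timezone.utc)).days
if days_left <= ALARM_WINDOW_DAYS:
    print(f"{days_left}d left: renew now; investigate if this repeats")
else:
    print(f"{days_left}d left: nothing to do")
```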
In my dev environment, I renew my certificates once a week. If they expired after 30 days, I would start getting errors after 30 days. This lets my production lifecycle be longer, allowing me to fix the problem in dev before it impacts prod.
I expect my certificates to be renewed every week, but maybe I’m busy and I don’t check that week.
I’m not sure I understand: are they saying that by swapping to shorter-lived certs, the window of exposure is reduced? Or are 6-day certs somehow helpful in a reactive fashion?
I believe that’s correct; the same reason why they offer 90 day certs now, when pre-LE it was usually a year or two.
It is still a year for all other commercial CAs (yes, they are still in business). Google is trying to lower it to 90 days via the CA/B Forum, but so far it hasn’t succeeded.
If the private key is stolen it is far less useful when it is only valid for a couple of days than if it is valid for almost a year.
There’s a desire to reduce all certificate durations to 45 days, which is receiving quite some pushback over on the GitHub discussion (a lot of people work in environments where renewing certificates that often is just not realistic).
Let’s Encrypt is great but I have never been convinced that a regular person’s private webpage needs all the complexity and overhead of SSL.
HTTPS works both ways: what you send is protected, but also what you receive. Malicious actors therefore cannot replace the web page content with something else (anything from ad injection to malicious scripts).
I wouldn’t want anyone’s ISP to inject ads into my website. Aside from the annoyance of seeing ads, they’d ruin my layout!
This makes sense but it’s also insane that we have to guard against this.
It’s not just ISPs. Any public access point should be assumed to be malicious. Without HTTPS, I can set up an AP called _Free_WiFi and people will connect. I can rewrite any HTTP response to inject a payload that contains an exploit for the browser version the user-agent string advertises. If a visitor to your site is running a browser with a known vulnerability (take a look at the list of Chrome CVEs: they average more than one every two days, so the odds are that anyone who isn’t autoupdating is vulnerable), then by not deploying HTTPS you’ve given anyone who sets up a free hotspot the ability to compromise visitors to your site.
OK. So HTTPS is kinda like this giant condom we need to put over the entire internet.
Not the entire Internet, just all of the bits between you and the client. Without authenticated encryption, you have to assume that every hop between the client device and your server is trustworthy. In any case where that is not true, at least one end is likely to be vulnerable to things that violate confidentiality or integrity. Closer to the client is usually easier.
When you put it like that, you make it sound eminently reasonable. You’re relying on completely unknown, ever-changing third parties in large quantity, whose interests are opaque and their own security unauditable by you. Yes, defaulting to safer sex is probably the go.
One thing may not be obvious from your argument, so I want to point it out explicitly: even if your website has HTTPS, a man in the middle can easily downgrade it to HTTP by terminating the SSL themselves. So browsers have no option other than mistrusting every HTTP site, since they can’t know whether you actually intended your site to be HTTP. That is why we need HTTPS everywhere.
It used to be a short-lived business model in the days before widely used SSL: a “free” ISP that messed with your data stream to insert its own ads.
SSL didn’t break that… NetZero and other similar providers used to display ads on your screen, independent of the datastream of things you were browsing. They required a custom dial-up client to connect, and that client showed the ads. They even partnered with hardware makers for a while to make nearly-$0 PCs.
AIUI, the payoff just wasn’t enough to sustain the business, and there wasn’t a price between $0 and what most people were paying for dial-up that would both sustain the business and convince enough people to tolerate the ads. But the “free” ISPs I saw in the 1990s and early 2000s were not injecting ads onto pages; the ads were displayed on the PC around the pages.
The one time I’ve seen something like this happen was back when I had home internet with a monthly cap: the ISP would inject a banner at the top of whatever site you were browsing once you had used up 75% of the bandwidth.
In the grand scheme of things, I really don’t think SSL is that “complex”. You can make it way more complex than it needs to be, sure. But installing an ACME client, agreeing to the terms of service of the certificate authority you’re using, and putting it on a cronjob is probably the least complicated part of setting up a web service.
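For concreteness, the whole cronjob step is typically a single line; a hedged example assuming certbot (other ACME clients are analogous), in /etc/cron.d format:

```
# Hedged example of an /etc/cron.d entry; "certbot renew" only replaces
# certificates that are close to expiry, so running it daily is safe.
# Most distro packages install an equivalent timer for you.
0 3 * * * root certbot renew --quiet
```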
If your service is small enough, Caddy will do all that for you.
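As a sketch of how small that gets: a Caddyfile like the following is enough for Caddy to obtain and renew certificates automatically (the domain and backend are placeholders):

```
# example.com and the backend are placeholders; Caddy obtains and renews
# the certificate automatically for any domain-shaped site address.
example.com {
    reverse_proxy localhost:8080
}
```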
I found it even easier with Apache’s mod_md. The webserver is always running, so why not have it do the ACME dance?

I mean no, it isn’t. I couldn’t be bothered with it so I let CloudFlare terminate SSL for my side project.
Each time I’ve done it, it takes around 15-30 minutes to set up and automate, such that I’ve basically never needed to “fix” anything in these setups. If you would like help setting it up correctly with an existing server, I’d be happy to help out. If you just find it too difficult to add to your existing HTTP servers, you could always put your application behind a proxy with renewal built in, such as Caddy or Traefik.
My side project is on a Hetzner box that I manually set up, so every additional step is… ugh.
What difference would it make where the server is hosted? Are you like tearing down the server every week and reinstalling everything manually over and over or something?
Fwiw, my current iteration of my Hetzner box has been there for some five years; I don’t think I’ve fiddled with the certbot thing more than once or twice after the initial setup in cron. You install nginx, you add the cert client, done.
I thought Axum could do TLS internally…
I am really surprised by this change, given the costs that are perfectly explained in this note. Is there, somewhere, some data about LE key compromises, and how they were exploited?
If an attacker gets a private key on day 1, I think he has good opportunities to do the same on day 5, either via an existing foothold or by using the same exploit.
No, that part was clear, I think the question was about cases where this actually happened with Let’s Encrypt. I know there are situations where the keys are compromised from the classic, commercial CAs, but I can’t remember reading about LE being compromised.
The question would be, why are LE people pushing for this? Did a problem happen and we didn’t know?
Yeah that’s exactly what I wanted to ask. It seems to me that LE is going to a lot of trouble to get from 90 to 6 days, and the only justification is that it’s “more secure”, but I wonder why 90 days wouldn’t be considered “secure enough”…
CA compromise is not the only way to acquire TLS private keys. You also have to worry about the web server itself being compromised.
And, if your web server is compromised and your TLS private keys leak, what do you do? Revocation is completely broken in practice (skip the entire first section, and note that since this was written, very recently, Let’s Encrypt dropped OCSP [stapling] support due to abysmal usage numbers). Mass revocation is even more broken, when sysadmins even bother with it.
So this would be a cloak-and-dagger approach to mitigate instead of revoking the certificates?