It’s kind of fun trolling around on Google’s certificate transparency search engine to see what software random companies run internally. Put in any company and you’ll probably find their Jira and whatnot. Weird form of customer/prospect research.
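This kind of search is easy to script, too. Here is a minimal sketch against crt.sh’s JSON output; the `name_value` field and the example domain are assumptions based on that site’s current interface:

```python
import json

def subdomains_from_crtsh(json_text: str) -> set[str]:
    """Collect unique hostnames from a crt.sh JSON response."""
    names: set[str] = set()
    for entry in json.loads(json_text):
        # name_value may hold several newline-separated SAN entries
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lower())
    return names

# To fetch live data (needs network access):
#   import urllib.request
#   url = "https://crt.sh/?q=%25.example.com&output=json"
#   with urllib.request.urlopen(url) as resp:
#       print(sorted(subdomains_from_crtsh(resp.read().decode())))
```

Pointing this at any company’s domain tends to surface the internal service names the rest of this thread is worried about.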
Both at home and at work, I use an invalid TLD (.lan). Let’s Encrypt, then, is naturally not an option. Instead, I create my own internal CA and create per-device certs only for those devices/services that need them.
I do the same at home. The requisite knowledge was entirely contained in ikectl(8), iked(8), and related man pages. As an amateur, I also had to look up a few terms on Wikipedia.
How do you handle deploying the CA root cert? Last time I tried this, Android whines at you if you have any trusted CA certs that are not part of the default bundle and iOS made it annoying to use. There’s also the problem (which doesn’t occur so much in the pandemic) that anyone visiting your house gets certificate warnings and probably doesn’t want to add your CA to their trusted list (if they did then you could MITM every connection that they make with TLS to any remote that isn’t publishing CAA records).
CAA records don’t prevent your custom CA from issuing certificates, and don’t prevent browsers from trusting your certificates if your CA is trusted.
What kind of problems do you have with using a custom CA in iOS? You send the certificate to the device (e.g. you serve it on a web server), you install it with your passcode and then enable it in Settings -> General -> About.
CAA records don’t prevent your custom CA from issuing certificates, and don’t prevent browsers from trusting your certificates if your CA is trusted.
CAA prevents my CA from issuing a cert for your domain and intercepting client traffic. A public CA that tries this and gets caught will suffer huge penalties (some have gone out of business as a result).
If I visit your house and add your CA cert to the trust anchors in my device to connect to your printer, then you can have your router MITM all TLS connections to domains that don’t have CAA records and the client will happily trust the cert and report a secure connection. Worse, if your signing certificate is compromised (I’m assuming that you’re not using an HSM or doing any of the other things that a CA needs to be certified?) then whoever compromises it can mount similar attacks on any network that I connect to. You can probe this in JavaScript (try doing an HTTP request, catch the exception if the connection is not allowed), so if you’ve got a nice selection of compromised signing certs that people trust then you can do this easily on a public AP.
Oh, and the CAA record only helps if the domain is also using DNSSEC; otherwise the router can just intercept the DNS query and substitute NXDOMAIN for the CAA lookup.
What kind of problems do you have with using a custom CA in iOS? You send the certificate to the device (e.g. you serve it on a web server), you install it with your passcode and then enable it in Settings -> General -> About.
I haven’t tried this for a while, but I think some corporate anti-malware things spotted the cert and marked the device as insecure.
If your security depends, in any form, on the secrecy of your host names, unplug all your systems from the network. Now. You have lost. It’s over. Have a nice day. (I feel like I should add something about stealing fizzy lifting drink…)
Should you use Let’s Encrypt for your internal hostnames? I have no idea. But don’t let the concerns in the linked post drive it. Decide based on whether it makes it operationally easy to have verifiable, TLS-secured communications between your internal hosts. Because that might make a difference that an attacker knowing the DNS name of your server will never make.
And if there are secrets on your network, whether they’re things like what your upcoming products will be called or what version of logging software your servers use, don’t encode them in A records or CNAMEs. Regardless of whether those hit cert transparency logs, DNS records that are necessary for operations are terrible places for secrets.
If your security depends, in any form, on the secrecy of your host names, unplug all your systems from the network.
It’s a weak mitigation but defence in depth often includes the composition of some fairly weak mitigations. For example, if I have a browser exploit then I may be able to get local account access on a computer on your corporate network when you visit my web site. I probably don’t care about that machine, I want to use it as a foothold to start attacking your more critical infrastructure. If I start sending a lot of traffic to probe your network topology then I may hit your network IDS and be kicked off the VLAN (and Internet) until someone’s done an offline malware scan of the machine. If I already know the DNS name of a high-value target then I can query your split-horizon DNS server and find its IP address (if it’s IPv6 then random probing will never find it, if it’s IPv4 then I can do exhaustive search but that should trigger the IDS) and can then launch the next step in the attack.
If you’re publishing the names of all of the high-value machines on your network then you’re decreasing the work factor for any attacker that gets past your perimeter defences (and with BYOD, the rule of thumb should always be to assume that your perimeter defences are compromised).
There’s no such thing as perfect network security (except, possibly, an air gap in a Faraday cage). All major operating systems have had remotely exploitable vulnerabilities in their network stacks on a fairly regular basis for the last couple of decades. The best you can do is increase the work factor for attackers and increase the probability of detecting attacks.
I would argue that this mitigation and the defense-in-depth argument for it go a bit far these days. It always was a bad idea to rely on hostnames being secret, and in times where it’s easy, cheap, and quick to scan the whole IP(v4) space, it’s an even worse idea.
Relying on a host not being known really is a bad idea, also because it can be hard to even tell whether it’s compromised or you’re just seeing background noise. It is easy for this to give you a false sense of security, which is a lot worse than the most likely very rare circumstance of an attacker actually having to struggle to find that host.
Please don’t think that running a vulnerable service and having it be public-facing is a good idea. Again, we live in a time where anyone with minor technical skills can scan the whole Internet for a certain system/service without many resources. Details may vary, but overall you’ll get a false sense of security if you rely on secrecy.
That’s aside from the fact that former employees might know about its existence, and that many other things can (and for various reasons, like standards, etc., even should) leak the host.
Relying on a host not being known really is a bad idea, also because it can be hard to even tell whether it’s compromised or you’re just seeing background noise.
It is not about relying on the host being unknown. There are a lot of potential channels that would allow someone to learn about your private network’s topology. If it includes talking to former employees or even scanning their blogs then the attacker needs to be mounting a targeted attack. This happens sometimes but it’s far more likely that one of your employees brought a machine into your network that was compromised by some untargeted malware. If that malware can just report the result of the DHCP query (which includes search domains) to the C&C server and the C&C server can scan the CT logs to get the names of all of the high-value targets on the network then that’s trivial to automate and impossible for you to detect unless you know the C&C server address (it is likely to be a web server running on an ephemeral public cloud VM, so good luck). If it has to probe the address space, that’s also easy to automate but also easy for an off-the-shelf IDS to spot. An attacker may sufficiently rate limit the probes to avoid detection or may use some other side channel (for example sniffing addresses linked to from requests to corporate intranet page), but most of the other approaches are either hard to automate, easy to spot with anomaly detection, or slow.
Please don’t think that running a vulnerable service and having it be public-facing is a good idea
This isn’t about Internet-facing services; this is about private services under attack from some malware on a machine that someone has connected to your private network.
If a place can run a CA securely and manage every end point’s trust store securely, I think I agree with you. They’re better off not publishing the names.
But if they’re going to do a bad job running a CA or do a bad job managing trust stores (such that an attacker can inject their own CA into endpoint trust stores or such that users are trained to ignore unknown CA warnings) or if they’re not going to use TLS on internal services because they don’t want to do these things, I think the better defense in depth posture is to leak the hostnames to the transparency log.
The majority of places I’ve seen are better off with the leak in the log, IMO.
While you are completely correct that secrets do not belong in DNS, I think it is important to give as little information as possible to an attacker. It’s part of defense in depth: make an attacker work to get information about your internal operations and organisation, and many attackers will skip you and move on to softer targets.
I don’t disagree about defense in depth; while my network topology is not secret, I don’t hand out diagrams at the door either. This piece is encouraging a bad defense in depth choice, though, in my opinion. Using TLS between your internal hosts is a good defense in depth choice. Skipping it because it’s too hard to issue certificates or too hard to add a custom CA to everything on the LAN’s trust store weakens your posture more, IMO, than handing out a list of server names. Doing a bad job running an internal CA and letting that get compromised first is even worse.
For most shops, I think not having to manage trust stores and not having to run a CA is a beneficial trade off for handing out a bunch of host names that shouldn’t be secret anyway.
The title is a bit misleading, as “Let’s Encrypt” is not synonymous with “ssl cert providers” or “public CAs” or whatever. Using, say, COMODO or ZeroSSL, instead isn’t any better than Let’s Encrypt, for the same “public logs” reasons. Just note that using public CAs can expose internal hostnames.
Every CA is required to publish signed certificates to Certificate Transparency; if your public CA does not do so, they are noncompliant with the rules from CAB Forum and will probably soon be removed as a public CA.
Couldn’t there be (is there) a packaged-up internal CA and DNS with ACME protocol so you’d basically have an internal Let’s Encrypt? Still a pain to configure the local root cert, but the automatic issuance would make up for it.
I use step-ca for my home lab and it’s been great for the last year or so. I don’t run k8s or similar (just a mix of FreeBSD and Debian) so there’s a small setup for each new service (I’ve only partially automated it with ansible) but it’s been really great this far.
So on my internal network most of the things speak TLS. I have yet to start using ‘mTLS’ (that is, client certificates) in relevant places, and I’ve been too lazy to even look into TLS for my IoT stuff (like Tasmota switches); I just threw them into their own locked-down subnet. Smallstep has a nice guide on how to set up mTLS between services, which is nice.
I use the ACME provisioning interface on step-ca, and the normal certbot client to fetch the certificates (remember to change the renewal check to 8 hours!). I’d like to use some other ACME client, as certbot is a lot of dependencies per client, but I haven’t found one that is readily available and handles internal providers like this.
I know for a fact that there’s a version of step-ca that can handle ‘real’ certificates as well. I’ve looked into it for $DAYJOB to handle both company internal (private CA) and external certificates (using a ‘real’ certificate) but went on paternity leave before the project really started. It’s not cheap but it seems to be nice.
For another ACME client, I used acme.sh (entirely in shell!) for a bit when certbot was giving me headaches. It doesn’t look like they allow expiration times in hours yet, though. (I left a wishlist item for this.)
Thinking about it, while I think the 24-hour certificate lifespan is nifty, I’d probably be happy with longer ones - say, 3 days, or 7, and renewing when there’s 1 or 2 days left.
Monitoring internal cert expiration could be interesting.
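For the monitoring idea above, a minimal sketch in Python using only the standard library’s `ssl` module; the CA file path in the comment and the warning threshold are assumptions, and `check_host` obviously needs a reachable internal service:

```python
import socket
import ssl
import time

def days_left(not_after: str) -> float:
    """Days until a cert's notAfter timestamp, e.g. 'Jun 27 20:00:00 2031 GMT'
    (the string format getpeercert() returns)."""
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400.0

def check_host(host: str, port: int = 443, warn_days: float = 2.0) -> bool:
    """Connect to an internal service and warn when renewal is due."""
    ctx = ssl.create_default_context()
    # For a private CA, trust its root instead of the system bundle:
    # ctx.load_verify_locations("/etc/ssl/internal-ca.crt")  # example path
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            remaining = days_left(tls.getpeercert()["notAfter"])
    if remaining < warn_days:
        print(f"{host}: renew now, {remaining:.1f} days left")
        return False
    return True
```

With short-lived internal certs, running something like this from cron and alerting on a `False` result catches a stuck renewal before anything breaks.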
I work on SPIFFE which is a fancy way to assign custom certificates for internal services (intended for service-service communication, not for browsers). It has many advantages over ACME for this situation. Check it out!
Having read through this post, this entire comment section, and the one on Orange Site, I guess I have to admit that there really isn’t a perfect solution for automated internal TLS.
Let’s Encrypt or any other ACME-compatible CA requires public HTTP access or public TXT records. Public HTTP is a non-starter, and BIND doesn’t have a concept of “private A record, but public TXT record” AFAIK. So you have to either expose your RFC1918 IPs in DNS records (bad practice) or not have TLS.
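For context on what the DNS side of a dns-01 challenge actually requires, here is a sketch of the record that gets published, per RFC 8555 §8.4; the domain, token, and thumbprint below are placeholder values (the real ones come from the CA and your account key):

```python
import base64
import hashlib

def dns01_record(domain: str, token: str, account_thumbprint: str) -> tuple[str, str]:
    """Return the (name, value) of the TXT record a dns-01 challenge needs.
    Per RFC 8555 section 8.4 the value is the base64url-encoded SHA-256 of
    '<token>.<account key thumbprint>', with padding stripped."""
    key_authorization = f"{token}.{account_thumbprint}".encode()
    digest = hashlib.sha256(key_authorization).digest()
    txt_value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return f"_acme-challenge.{domain}", txt_value
```

Only that `_acme-challenge` name has to resolve publicly; a common trick is to CNAME just that name into a zone you’re willing to expose while the A records stay private — though the issued certificate still lands in the CT logs either way.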
A traditional CA like DigiCert will let us generate valid certs without any kind of DNS/HTTP challenge, for a price. Which is so frustrating because 1) why don’t they need all the extra validation that ACME does, and 2) it feels like the price increases and yearly fees are approaching pure rent-seeking, profiting off the billions of EoL devices that will simply never get more CAs in their trust store.
And an internal CA has challenges too. Sure you can push the CA cert to all employee laptops pretty easily, but how about every VM? Every container? Every VM/container on every laptop? Every IoT device in all conference rooms? Every virtual appliance where you can’t even SSH in without typing some secret code into an interactive session? If you can’t do all that, can you afford to have every dev independently set that up in every container/VM they run? They’d likely prefer to just set skipTLSVerify at that point.
I saw mentions of “name constrained” CAs which may help the problem, but please, someone save me from the TLS prison.
I give all my machines valid DNS names and get certificates for one that is “internal”. It is only reachable over WireGuard and hosts my password manager (Vaultwarden).
My work IT is sort of sucky and disorganized, so sure. Ok you can see I use svn in 2021. Oh well. Also, really? Super secret partnership or merger domain names? I mean I guess that could happen, but it’s not like transparency logs are the only way this would leak.
I can’t imagine why a host name being public would be so bad. I still find it annoying having to do DNS-based discovery queries or just scanning a whole subnet. I helped my mom find a work service because their new DNS setup didn’t (at the time) block zone transfers, so it seems pretty useful to have DNS be for, well, looking up domains.
I don’t see acme-dns mentioned here; it’s a small tool for proxying cert requests for internal hostnames and validating them against Let’s Encrypt. I use it for a handful of domains and it made my internal hostnames pretty painless.
That doesn’t address the problem. You publish the TXT records for the ACME challenge with acme-dns, Let’s Encrypt probes these and issues a certificate. This certificate is published in the public certificate transparency log. Everyone on the Internet can then see that this subdomain exists.
Yes, you are absolutely right. I posted this based on the assumption, discussed here, that if one wants to prevent internal hostnames from getting logged publicly, they might have bigger problems than that.
It’s kind of fun trolling around on Google’s certificate transparency search engine to see what software random companies run internally.
https://transparencyreport.google.com/https/certificates?hl=en
I saw crt.sh first, and the interface is more concise.
Both at home and at work, I use an invalid TLD (.lan). Let’s Encrypt, then, is naturally not an option.

Look into step-ca; you can set up an internal ACME instance and use Let’s Encrypt clients (at least certbot) to fetch and renew certificates.
How do you handle deploying the CA root cert? Last time I tried this, Android whines at you if you have any trusted CA certs that are not part of the default bundle and iOS made it annoying to use.
I don’t use any of the custom CA’d services on my network with my Android phone. I only use my laptops for accessing those things.
Just note that using public CAs can expose internal hostnames.
s/can/will/
cough wildcards cough
Couldn’t there be (is there) a packaged-up internal CA and DNS with ACME protocol so you’d basically have an internal Let’s Encrypt? Still a pain to configure the local root cert, but the automatic issuance would make up for it.
Looks like step-ca does what you want. Reading some directions, it looks like they default to 24-hour certificate lifetimes, which is honestly nifty!
TL;DR step-ca is great.
Also, really? Super secret partnership or merger domain names? I mean I guess that could happen, but it’s not like transparency logs are the only way this would leak.
Finding new product names perhaps? https://caiustheory.com/lets-peek/ 😂
Wildcard certificates would reduce the leak though.