Do we really want an internet where the use of encryption requires authorization?
Having opted out of having the authorities say I’m me, can I opt out of having them say anybody else is me? Alas, no. There is a secret browser handshake to partially opt out, but the wrinkle is that it first requires opting in. No way to actually decline the whole mess.
Ah, but there is!
cjdns (https://github.com/cjdelisle/cjdns/blob/master/doc/Whitepaper.md) was designed to provide all these benefits in a completely decentralized manner. Without having to manually verify and pin third-party certificates or CAs, the cjdns system allows anyone to create a secure address by generating a key: your resulting IPv6 address is simply a hash of the public key, trimmed to the first 32 characters. All connections and all traffic are encrypted end-to-end with perfect forward secrecy, and there are multiple options available to interact with the legacy clear Internet as well.
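As a rough sketch of that address derivation: per my reading of the cjdns whitepaper, the address is specifically the first 16 bytes (32 hex characters) of a double SHA-512 of the public key, and keys are regenerated until the result begins with fc (the prefix cjdns claims). The key below is a random stand-in, not a real Curve25519 key:

```shell
# Stand-in 32-byte "public key"; real cjdns uses a Curve25519 key here.
head -c 32 /dev/urandom > pubkey.bin

# Double SHA-512, keep the first 16 bytes, print as 32 hex characters.
openssl dgst -sha512 -binary pubkey.bin \
    | openssl dgst -sha512 -binary \
    | od -An -tx1 -N16 | tr -d ' \n'
echo
```

A real implementation would then insert colons to format this as an IPv6 address and reject any result not starting with fc.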
Especially when you realize that most of these decentralized, private, or whatever schemes depend on backbones run by greedy, scheming companies in countries whose governments don’t like the products in question. Quite a dependency for something that wants to leave the legacy Internet behind. ;)
cjdns doesn’t necessarily depend on the backbones. You can use peer-to-peer links that avoid The Internet. Some people are already trying to build mesh networks in various cities: https://docs.meshwith.me/meshlocals/existing/
There is no need for any existing Internet infrastructure with cjdns, but it does include support for an overlay network that tunnels cjdns traffic over existing legacy networks like the Internet. The grand plan is that the overlay network will be used only to link distant meshes together, until the mesh itself is complete and there is no need for the overlay.
cjdns is currently in active use via Ethernet, various radio technologies, and RONJA optical connections without utilizing existing Internet infrastructure.
That’s good. Now I ask you two: what is the current majority of cjdns use in terms of underlying transport? It would be exciting if most of its users aren’t on the Internet or going through central providers.
It wouldn’t really be possible to tell, and it depends on how you define the “majority of cjdns use”. For example, all meshlocals interconnect via the overlay so all communications between disparate meshes, for now, usually takes place tunneled over existing infrastructure.
Gotcha. I’ll retract and hold off with my counterpoint on cjdns for now, since it might be a real counterexample to the status quo. I’ll look into it more at some point.
Ha! This is the first time I’ve ever seen anyone (other than enterprise IT departments) serve an out-of-band root CA trust certificate, in case someone needs it for their blog.
But how do I know that I’m downloading the bona fide tedu certificate? I’ve clicked through the HTTPS warnings, so all bets are off as to whether the content I’m being served has been manipulated by a third party. In theory, an attacker could MITM the connection, spoof the certificate with his own, and then change the content at https://www.tedunangst.com/ca-tedunangst-com.crt to serve his own cert. How can I verify that this hasn’t happened? If the cert has been issued by a root CA and is valid for the page, I know that the content hasn’t been tampered with, assuming I trust the certificate issuer / the root CA selection process.
I agree that implicitly trusting a group of organisations to not fuck up and/or act maliciously is not ideal, but I don’t think everybody using self-signed certs solves the problem either.
So this gets back to the reasons why everybody needs https. We might consider the case of the naughty ISP that injects ads into http pages. This is fairly easy to do in a generic way for plaintext pages.
To intercept traffic now, with a custom cert, they need to replace your download. Then they need to use that cert to sign the cert actually used for the web server. And they probably need to replace the sha256 on the home page or somebody might notice what’s up. So this gets pretty complicated and needs to be customized for every site.
And then there’s the possibility that you’ve already downloaded the cert at some time in the past. Intercepting traffic now is a fairly risky gambit. The naughty ISP needs to commit to intercepting before they even know what file you’re downloading.
As an exercise, consider the threat model to the interceptor. What conditions need to be met for them to avoid detection?
To intercept traffic now, with a custom cert, they need to replace your download. Then they need to use that cert to sign the cert actually used for the web server. And they probably need to replace the sha256 on the home page or somebody might notice what’s up. So this gets pretty complicated and needs to be customized for every site.
It’s as custom for the ISP as it is for the host and for the user. With any standardized way of doing this, the ISP can get a turnkey solution that will MITM for them. Any nonstandardized way of doing this is too tedious for the user to work with.
And then there’s the possibility that you’ve already downloaded the cert at some time in the past. Intercepting traffic now is a fairly risky gambit. The naughty ISP needs to commit to intercepting before they even know what file you’re downloading.
How is it a risk? The user sees a certificate with no connection to any known root, which could have been created by anyone. Someone between me and tedunangst.com is malicious, sure, there are a lot of malicious people on the internet, that’s not the ISP’s fault. There’s no accountability.
After thinking of this a few days - I have some comments I’d like to air out.
I would make the argument that this is not necessarily such a good idea. No matter how absolutely correct tedu’s CA cert and implementation are, how locked down, etc., it is extremely bad precedent from a usability perspective to expect most users to start installing new root CA certificates on a regular basis, at least with current OS and browser interfaces. Sometimes the cert will need to be installed in multiple places to take effect globally on a system as well! If this became standard practice, we might have hundreds of thousands of new ‘personal’ CA certs to manage, trust, and ensure are correctly and securely configured and managed by their owners. It seems like mistakes waiting to happen on all sides.
I would even say that if things keep going this way, somebody “enterprising” will eventually come along and offer some new ‘trusted’ service that vets all these new personal CAs and manages their certs. Then aren’t we essentially back to square one, with a CA for personal CAs and browsers not trusting certs outside the trusted bundles? Yes: back to square one.
A better solution to me would be the SSH model (when not using its optional PKI/CA mode), where a previously untrusted self-signed certificate is presented to the user by the browser for verification, and the key is then pinned for future comparisons.
On a related tangent: it would probably make sense for major browsers to stop, immediately (yesterday!), giving users TLS/SSL errors when accessing servers that use self-signed certificates, as long as those servers are running on local loopback interfaces. This alone would eliminate a lot of issues.
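The SSH-style pin-on-first-use idea above can be sketched in a few lines of shell. Everything here is hypothetical: the self-signed test cert stands in for whatever certificate a server just presented, and pins.txt plays the role of known_hosts:

```shell
# Make a self-signed cert to stand in for the one a server just presented.
openssl req -x509 -newkey rsa:2048 -nodes -keyout site.key -out site.crt \
    -days 30 -subj "/CN=www.example.com" 2>/dev/null

host=www.example.com
pins=pins.txt   # plays the role of ssh's known_hosts
fp=$(openssl x509 -in site.crt -noout -fingerprint -sha256)

if grep -q "^$host " "$pins" 2>/dev/null; then
    # Seen before: the pinned fingerprint must match exactly.
    if grep -qF "$host $fp" "$pins"; then
        echo "pin matches"
    else
        echo "WARNING: certificate changed since first visit"
    fi
else
    # First visit: this is where a browser would ask the user, then pin.
    echo "$host $fp" >> "$pins"
    echo "pinned on first use"
fi
```

The hard cases the thread discusses (legitimate rotation, trust on the very first visit, homoglyph hostnames) are exactly the points where this sketch falls short, same as with known_hosts.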
Thank you for using (and explaining) the name constraint extension. It’s a really useful feature for cases like this one.
You point it out as odd that the certificate itself mandates the constraints (instead of the user) and I agree that user control would be an interesting (advanced!) feature here.
But with a parent node in a chain of trust, I’d say it makes sense again. Especially given that every certificate you can get these days is not generated by you but for you, with all data and attributes created and modified on your behalf.
Having said that, and understanding your reluctance to rely on a third party (i.e., a real CA) for your availability, I personally can’t find a way to accept this as a valid concern. Isn’t it very hypothetical?
I like to compare to the ssh model, which isn’t perfect either, but is often simpler and fails in more predictable ways. When I add a key to known_hosts, I specify the hostname and the key. But doing so doesn’t automatically mean I trust that key for all the other hostnames embedded within it (of which there aren’t any, but you see the point). In my opinion, asking a user to inspect a cert and make sure it only does what it says it does is high risk and prone to failure. If you inspect the cert I’ve provided with the right tools, you can assess what it does, though of course I could also misspell (perhaps with Unicode) some fields, or toss an inconspicuous but quite powerful dot in somewhere. Making the user enter the name of the site they trust it for would be much safer. We’ve tried to make things “easy”, but the end result is a system that’s actually incredibly difficult to use safely.
We can point our fingers at OCSP in this case, but I think that’s in sufficiently close proximity to justify concerns about systemic fragility.
I did try LE back when they started, but was rejected because my email was “malformed” which isn’t the problem you think. It wasn’t because I had a plus in it. It’s because I don’t have an A record for my domain. I only need an MX record. So that’s two problems. One observed by lots of people, and a second (quite minor) one that I personally experienced. When people tell me to try it because “it just works” I’m skeptical because I’ve seen it not work. I’m picking on LE, but I have little reason to believe they are outliers in this regard.
I like to compare to the ssh model, which isn’t perfect either, but is often simpler and fails in more predictable ways. When I add a key to known_hosts
I think the problem starts here. Most people seem to be interested in transport encryption and not authenticity, i.e. they care more about not being spied upon than about whether Bob really is Bob. I’d argue that’s why everybody just says yes to “add key to known_hosts” or “do you want to trust this cert”. But this is my opinion as a layman.
What I like about the SSH model is that it comes with cert pinning built in. But then again, I normally have control over the boxes I ssh into, so I know when host keys change; I will never be in a position to know whether it’s ok that the cert for e.g. Amazon changed. So what we would need is maybe something like OpenBSD key rotation, where my current cert also knows about the next cert, and my browser can check if a new cert is actually ok. The question remains how to build trust on your first visit, and what about homoglyphs…
I sometimes feel like checking the authenticity of a given 3rd party entity on the internet is a lost cause.
I’d love it if the general public could be relied on to know the difference between transport encryption and authenticity.
For use-cases like “is this the real amazon.com, is this really my credit union”, authenticity continues to be important. I agree that it’s far less important for blogs, or for my own favorite transport-encryption example - not leaking your webmd history to MITMs.
Absolutely, but I think most people are more concerned about somebody spying on them than about running into an imposter / MITM, thus they click away the error so they can get to the content. Funnily enough, I think this is a statement about how nice humanity actually is, because we are not expecting that a stranger is going to rip us off at first sight.
I don’t want to leave the impression that I think authenticity is unimportant. But I have grown the impression that our subconsciousness wants to believe imposters are nothing but a product of our fantasies, and for good reason: imagine a world in which we constantly questioned the authenticity of the information provided. I doubt it would be a nice place to live.
Thus I think a solution that involves user interaction is destined to fail. But I am starting to be way off-topic.
Making the user enter the name of the site they trust it for would be much safer. We’ve tried to make things “easy”, but the end result is a system that’s actually incredibly difficult to use safely.
I’m also not sure why user-specified and cert-specified would be mutually exclusive. Using the intersection of them would make perfect sense. This way, a root cert can claim it’s valid for anything, but I might want to trust it only for *.blah.com.
It is somewhat annoying that this sort of scoping is available in my adblocker, but not in my TLS trust model.
From what I understand, most CAs today won’t just cross-sign a customer CA though. Doing so would in fact likely get them marked as untrusted in most browsers, I imagine. Combined with (based on my reading) somewhat spotty support for name constraints, the best you can hope for today seems to be either flashing lights and klaxons (self-signed cert warnings), or hoping for the best and installing/trusting the signing private CA (many corporations do this for internal use).
One problem with this move to a self-run CA now is that my RSS feeder is unable to fetch the feed, as it correctly does proper cert checks. I would need to hack around to get it working again on a not-run-under-my-control instance on Heroku.
This is all the point of the article and the broken CA system.
in both Firefox and Chrome this produces an “unknown issuer” error
Unknown to who? If it didn’t say the issuer was unknown, who would the issuer be? Would you know who they are?
appears to be issued by Ted’s own CA
So you do know who the issuer is; just your web browser doesn’t. If you know who Ted is and trust him, wouldn’t you trust his certificate? And if you don’t know who he is or don’t trust him, then if he had gotten a certificate from a CA included in Firefox, would it suddenly be ok to trust this Ted guy?
On principle, I don’t click through cert errors
They have you trained well. ;-)
Web browsers have helped complicate the problems of the CA trust model under the guise of usability (and people just don’t want to have to care) by taking control of the user’s trust away from them, even from those who know better and want to handle it themselves.
I appreciate Ted, who has enough traffic to be noticed, going against the established model and demonstrating another method.
What’s missing here is that @tedu has not shared the certificate fingerprints out of band.
trust the Firefox organization and processes, far more than any lone individual.
Sounds like a good idea, except for WoSign, StartCom, government certs, etc. I had to keep removing these (through a many-step process), and they kept coming back with updates, long before Firefox removed them itself. Given the state of the CA system now, I’m all for Firefox and others helping to curate the list of CAs, but I don’t like the thoughtless implicit trust that the UX creates, with little to no attention paid to the usability of managing that trust as a user.
To put it another way: Firefox is not perfect, Ted is not perfect, no one is perfect. But the design takes away my control to manage the risks, and tries to force (or imply that you can have) 100% trust in Firefox (or browser company X).
That’s posted on the home page. It’s not included in the flak post, following the principle that important information should only be maintained in one place. I’ve added a note and a link. That was an oversight.
It’s the hash of the file, not some internal fingerprint, because I find that easier to verify with simpler tools. You don’t even need to decode it to at least verify it’s the same file I say it is.
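Checking a file hash really is simple-tool territory. In practice it’s `sha256sum ca-tedunangst-com.crt` compared by eye against the published value, or letting sha256sum do the comparison. Demonstrated here with a stand-in file whose hash is known, since the real published hash belongs on the home page, not in a comment:

```shell
# Stand-in for the downloaded cert file; "hello\n" has a well-known SHA-256.
echo hello > ca.crt

# Feed sha256sum a "<published hash>  <file>" line and let it check:
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  ca.crt" \
    | sha256sum -c -
# prints "ca.crt: OK" when the file matches the published hash
```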
I’m all for browsers making it easier to manage your own trust roots. But realistically the end user is a lot more imperfect than Firefox most of the time; good defaults are by far the most important part of what the likes of Firefox need to be doing, and I’d actually consider WoSign/StartCom/… a success story - Firefox et al did the right thing, and sent a much stronger message than individuals acting alone ever would. Government surveillance is the kind of thing that requires collective action to counter - uncoordinated individual opposition doesn’t cut it.
Heck, I’m probably one of the most paranoid 0.1% of users, but I never curated my root certificate list. There are only so many hours in the day, I have things to be doing, I’m not going to evaluate an individual CA for every website I go to. At best I’d use a list run by the EFF or someone, but really that someone might as well be Firefox.
I don’t know what he’s proposing, but it’s hard to imagine what advantage it offers that he can’t get by using a CA-signed certificate. Installing the Ted CA doesn’t stop another CA from signing tedunangst.com. It does give Ted the authority to sign certificates for other websites, which I don’t want - 150 root CAs is pretty bad but 151 is still worse. If he has mechanisms for publishing fingerprints out of band, why not just do that with his site’s certificate? If he doesn’t trust the CAs, there are any number of mechanisms - HPKP, DANE,… - for increasing authentication while remaining compatible with the existing CA system, which, for all its flaws, is pretty effective. If he’s not willing to cooperate with the most effective, widely deployed security mechanism then screw him; I can live without his blog, it’s not worth the amount of time it would take me to figure out whether he’s just being awkward or actually wants to compromise my security. If he really wants to run a CA, he can go through the process to get it approved by Firefox; they’re far more capable of doing audits than I am.
I agree with a lot of that, but still disagree on some things but I think I have made those points and don’t want to keep ranting on. :)
I will however agree with this:
If he has mechanisms for publishing fingerprints out of band, why not just do that with his site’s certificate?
I did not add his CA to my list of Authorities for reasons you state (and Ted talks about this in the article). I only accepted the server certificate. Why is that OK and not the CA? Because the day before, I was accessing his site in plain text. Have I been MITMed? Who cares? I could have been getting MITMed for years while going there.
Yesterday there was a hole in the redirect to give some notice of the coming service disruption. Not sure how long I should leave it there however. Or what all should be excluded from https (this page, the cert, the home page?). I opted in favor of strictness for now.
“The legacy clear Internet” is a phrase with far-reaching implications. :)
I doubt I’ll ever use cjdns, but this is very neat, thank you for pointing it out.
https://www.tedunangst.com/ca-tedunangst-com.crt
Pretty awesome!
How does one become their own Certificate Authority?
To start, I followed the instructions in the OpenBSD ssl man page. https://man.openbsd.org/ssl.8

Then there are some extra options you need in an extensions file. https://www.openssl.org/docs/man1.0.2/apps/x509v3_config.html

Then, to sign the leaf, instead of the x509 command used to self-sign, there are the -CAkey and -CAserial options. OpenSSL also provides a “ca” command, because why not, but I used x509 with extra arguments.
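For concreteness, the flow described above (a self-signed root with extensions, then x509-based leaf signing) might look roughly like this. File names, subjects, and the example domain are all made up for illustration, this is not tedu’s actual setup, and `-addext` needs OpenSSL 1.1.1 or newer:

```shell
# Root CA: key plus self-signed cert, name-constrained to one domain
# (the kind of option you'd otherwise put in an extensions file).
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -out ca.crt -days 3650 \
    -subj "/CN=Example Personal CA" \
    -addext "nameConstraints=critical,permitted;DNS:example.com"

# Leaf: key plus certificate signing request.
openssl genrsa -out leaf.key 2048
openssl req -new -key leaf.key -out leaf.csr -subj "/CN=www.example.com"

# Sign the leaf with the x509 command rather than "ca"; -CAcreateserial
# creates the serial file that -CAserial would reuse on later signings.
printf 'subjectAltName=DNS:www.example.com\n' > leaf.ext
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out leaf.crt -days 365 -extfile leaf.ext

# The leaf should chain back to the root and satisfy the name constraint.
openssl verify -CAfile ca.crt leaf.crt
```

The name constraint is the part discussed elsewhere in the thread: it scopes the root so that trusting it does not mean trusting it for every domain on the internet.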
It’s rather easy to start: https://wiki.archlinux.org/index.php/Easy-RSA. The certificate signing protocol seems complicated at first, but it is logical.
On the other hand, I don’t have any further experience and I’d like to learn what it’s like to run a CA for more audience than myself, what are best practices for this, etc.
Isn’t it very hypothetical?

I don’t think my concerns are hypothetical. Not too long ago: https://blog.hboeck.de/archives/886-The-Problem-with-OCSP-Stapling-and-Must-Staple-and-why-Certificate-Revocation-is-still-broken.html
Talking to a fake webmd seems like it could be pretty bad tbh. It might tell you your cancer symptoms are nothing to worry about, or something.
Or give your insurance company evidence to deny a claim for preexisting conditions (fairly or not).
most people are more concerned about somebody spying on them than about running into an imposter

I think that’s the hacker bubble; most non-techies I know are much more frightened of having their credit cards stolen.
I don’t want to leave the impression that I think authenticity is unimportant. But I have grown the impression that our subconsciousness wants to believe imposters are nothing but a product of our fantasies and for good reasons, imagine a world in which we would constantly question the authenticity of the information provided. I doubt it would be a nice place to live in.
Thus I think a solution that involves user interaction is destined to fail. But I am starting to be way off-topic.
I’m also not sure why user-specified and cert-specified would be mutually exclusive. Using the intersection of them would make perfect sense. That way, a root cert can claim it’s valid for anything, but I might want to trust it only for *.blah.com. It is somewhat annoying that this sort of scoping is available in my adblocker, but not in my TLS trust model.
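The adblocker-style scoping could look something like this. A minimal sketch, assuming a hypothetical user-maintained scope table; the root name and patterns are made up for illustration:

```python
import fnmatch

# Hypothetical per-root scope list: a root CA the user has scoped is
# only trusted for hostnames matching its assigned patterns; any
# unscoped root keeps its normal (global) trust.
USER_SCOPES = {
    "SomeRootCA": ["*.blah.com", "blah.com"],
}

def root_trusted_for(root_name: str, hostname: str) -> bool:
    """Intersection of cert-specified and user-specified trust:
    the chain must already verify AND the user scope must match."""
    patterns = USER_SCOPES.get(root_name)
    if patterns is None:
        return True  # user placed no restriction on this root
    return any(fnmatch.fnmatch(hostname, p) for p in patterns)
```

The real mechanism for this inside X.509 would be the name constraints extension, but as noted further down the thread, support for it is spotty.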
From what I understand, most CAs today won’t just cross-sign a customer CA, though. Doing so would in fact likely get them marked as untrusted in most browsers, I imagine. Combined with (based on my reading) somewhat spotty support for name constraints, the best you can hope for today seems to be either flashing lights and klaxons (self-signed cert warnings), or hoping for the best and installing/trusting the signing private CA (many corporations do this for internal use).
One problem with this move to a self-run CA is that my RSS reader is now unable to fetch the feed, as it correctly does proper cert checks. I would need to hack around it to get it working again on a not-run-under-my-control instance on Heroku.
What are proper cert checks?
Signed by a trusted CA in your chain, and the hostname matching the cert itself.
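Those two checks are what Python’s stdlib does by default, for what it’s worth. A minimal sketch of a strict client context, the kind an RSS fetcher should be using (function name is mine, not from any particular feed reader):

```python
import ssl

def make_strict_context(cafile=None) -> ssl.SSLContext:
    """A context doing both 'proper cert checks': the chain must
    verify against trusted roots, and the hostname must match the
    cert's SAN. Both are on by default in create_default_context."""
    ctx = ssl.create_default_context(cafile=cafile)
    assert ctx.verify_mode == ssl.CERT_REQUIRED  # chain verification
    assert ctx.check_hostname                    # hostname matching
    return ctx
```

Passing tedu’s root via `cafile` is exactly the “hack around” the comment above mentions: the fetcher would have to be taught to trust that one extra root for that one feed.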
Yeah, the RSS situation is regrettably more complicated than I’d thought. I’m going to punch a hole for it.
I’d say something about depending on services you don’t control, but I think one ideological battle will suffice. :)
@tedu - I may be misunderstanding what’s going on but it seems the root cert doesn’t work with libressl.
Possibly this openssl bug, patched here but it doesn’t look like libressl has that patch.
Yeah, we should probably fix that bug. :( Thanks a lot for tracking down a patch.
Sadly enough, the website raises a TLS error in Brave for Android.
That’s because Brave behaves just like any other browser when it comes to https certificates.
Or are you saying that you’re still getting an error even after installing tedu’s root CA?
Why would I install some random root CA?
[Comment removed by author]
The article addresses this.
I’m not sure if this is the punchline to the article, or merely an accident, but in both Firefox and Chrome this produces an “unknown issuer” error.
The server appears to be serving a single leaf certificate, which appears to be issued by Ted’s own CA.
On principle, I don’t click through cert errors, so I guess I’ll never know if it’s the punchline or an accident.
This is the whole point of the article and the broken CA system.
Unknown to who? If it didn’t say the issuer was unknown, who would the issuer be? Would you know who they are?
So you do know who the issuer is. Just your web browser doesn’t. If you know who Ted is and trust him, wouldn’t you trust his certificate? If you don’t know who he is or don’t trust him, but he had gotten a certificate from a CA included in Firefox, would it now be OK to trust this Ted guy?
They have you trained well. ;-)
Web browsers have helped complicate the problems of the CA trust model under the guise of usability (and people just don’t want to have to care) by taking control of trust away from the user, even from those who know better and want to handle it themselves.
I appreciate Ted, who has enough traffic to be noticed, going against the established model and demonstrating another method.
Nope. It was issued by someone claiming to be “Ted”. But anyone can do that.
Yes, because I trust the Firefox organization and processes, far more than any lone individual.
What’s missing here, is that @tedu has not shared, out of band, the certificate fingerprints.
Sounds like a good idea, except for WoSign, StartCom, government certs, etc. I had to keep removing these (through a many-step process), and they kept coming back with updates, long before Firefox removed them itself. Given the state of the CA system now, I’m all for Firefox and others helping to curate the list of CAs, but I don’t like the implicit trust-without-thought that the UX creates, with little to no attention to the usability of managing that trust as a user.
To put it another way. Firefox is not perfect. Ted is not perfect, no one is perfect. But the design takes away control from me to manage the risks and tries to force (or imply that you can have) 100% trust in Firefox (or browser company X).
SHA256 (ca-tedunangst-com.crt) = 049673630a4a8d801a6c17ac727e015fbf951686cdd253d986e9e4d1a8375cba
That’s posted on the home page. It’s not included in the flak post, following the principle that important information should only be maintained in one place. I’ve added a note and a link; that was an oversight.
It’s the hash of the file, not some internal fingerprint, because I find that easier to verify with simpler tools. You don’t even need to decode it to at least verify it’s the same file I say it is.
I’m all for browsers making it easier to manage your own trust roots. But realistically the end user is a lot more imperfect than Firefox most of the time; good defaults are by far the most important part of what the likes of Firefox need to be doing, and I’d actually consider WoSign/StartCom/… a success story - Firefox et al did the right thing, and sent a much stronger message than individuals acting alone ever would. Government surveillance is the kind of thing that requires collective action to counter - uncoordinated individual opposition doesn’t cut it.
Heck, I’m probably one of the most paranoid 0.1% of users, but I never curated my root certificate list. There are only so many hours in the day, I have things to be doing, I’m not going to evaluate an individual CA for every website I go to. At best I’d use a list run by the EFF or someone, but really that someone might as well be Firefox.
I don’t know what he’s proposing, but it’s hard to imagine what advantage it offers that he can’t get by using a CA-signed certificate. Installing the Ted CA doesn’t stop another CA from signing tedunangst.com. It does give Ted the authority to sign certificates for other websites, which I don’t want - 150 root CAs is pretty bad but 151 is still worse. If he has mechanisms for publishing fingerprints out of band, why not just do that with his site’s certificate? If he doesn’t trust the CAs, there are any number of mechanisms - HPKP, DANE,… - for increasing authentication while remaining compatible with the existing CA system, which, for all its flaws, is pretty effective. If he’s not willing to cooperate with the most effective, widely deployed security mechanism then screw him; I can live without his blog, it’s not worth the amount of time it would take me to figure out whether he’s just being awkward or actually wants to compromise my security. If he really wants to run a CA, he can go through the process to get it approved by Firefox; they’re far more capable of doing audits than I am.
I agree with a lot of that, but still disagree on some things but I think I have made those points and don’t want to keep ranting on. :)
I will however agree with this:
I did not add his CA to my list of authorities, for the reasons you state (and Ted talks about this in the article). I only accepted the server certificate. Why is that OK when the CA is not? Because the day before, I was accessing his site in plain text. Have I been MITMed? Who cares? I could have been MITMed for years while going there.
Yesterday there was a hole in the redirect to give some notice of the coming service disruption. Not sure how long I should leave it there, however, or what all should be excluded from https (this page, the cert, the home page?). I opted in favor of strictness for now.
HTTPS is also broken in mobile Chrome (iOS 10).