The title is misleading. Android didn’t block anything, and you probably can still modify the certs. They made some internal changes, and now instead of loading the certs from the real filesystem, they’re loaded from a virtual one that’s mounted separately in each mount namespace. The author didn’t figure out a way around that, but that doesn’t mean it’s now impossible to modify the system certs.
edit: Relevant discussion on HN, including a reply from the author of the article.
I guess I’ll be staying on 13 for the time being. I’ve used this to reverse and monitor a couple of apps.
Was https-everywhere a tool for the big boys to gain more economical leverage and control?
I can’t help but feel we were all played from the start. “It’s open, come here!” and once it all takes hold, the rug is pulled. This pattern is repeated everywhere though - using “openness” to lure people into working and promoting for free on their behalf.
Sad.
I feel it was double-edged. We saw Firesheep showing your messages & passwords on the network, along with ISPs injecting scripts/ads, so obviously it was a good idea to add TLS, but maybe we needed to be okay with self-signed certificates & a more decentralized trust model than the certificate authorities.
maybe we needed to be okay with self-signed certificates

It’s not clear to me how something like that would’ve worked with ISPs, especially when switching between them. There needed to be a clear deterrent against ISPs re-signing traffic.
I think everyone excited by the emergence of Google as a “tech power”, myself included, feels this way now :/
Unless things have changed a lot since Android 11-ish when I last looked at apex, they are unmodifiable in the same way as an apk is unmodifiable: if you mess with its contents, the signature validation will fail. So you’ll need to re-sign it with your own key, and in this case make your own key acceptable to the apex loader. (Disclaimer: in the end I didn’t need to do anything with them, so can’t speak with full certainty here.)
This certainly means more hoops to jump through, but especially in an emulator image or a device with a fully custom Android build, it’s not quite “blocks all modification”. Trickery on a production device with transient or SELinux-constrained root access may get harder, though.
addendum: I think this is a security win for the average user, to the degree that “system certificate stores” can be assumed to provide meaningful security. IMO the real problem is Google being an asshat about actually using the user CA store (and about implementing DANE, but that’s a far wider issue of Google asshattery).
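For context on what DANE actually pins: a TLSA record carries a hash of a server’s key, published in (DNSSEC-signed) DNS, so no CA needs to vouch for the key. Here’s a minimal sketch of computing a “3 1 1” (DANE-EE, SubjectPublicKeyInfo, SHA-256) record value; the certificate is a throwaway self-signed one and the domain name is made up:

```shell
# Generate a throwaway self-signed certificate (stand-in for a real server cert).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -subj "/CN=example.internal" -days 1 2>/dev/null

# TLSA usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo), matching 1 (SHA-256):
# extract the public key, DER-encode it, and hash it.
openssl x509 -in cert.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -hex

# The digest would be published in DNS as something like:
#   _443._tcp.example.internal. IN TLSA 3 1 1 <digest>
# which a DNSSEC-validating client could use to pin the key, CA-free.
```

The trade-off discussed below follows directly from this: whoever controls the (signed) DNS subtree controls which keys validate under it.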
DANE has serious structural problems and isn’t actually that much better than the CA system. Google is for sure evil, but their reasons for not implementing DANE are pretty sound.
Yes, DNSSEC/DANE is not without its problems as Moxie says in the video. But there are 137 CA certs in the Mozilla root store currently installed on my Ubuntu box, and every one of them could issue a certificate for any host at any time, and it won’t be easy to find out nor to revoke (and that 137 isn’t even counting the sub-CA certs these CAs may have issued). Compromised DNS can add certs only for names under the compromised subtree, and re-taking control of that subtree may well be easier than dealing with a recalcitrant CA.
So DNSSEC/DANE may be poison, but I see it as a lesser poison than the current set of “trusted” roots. How many not-yet-revealed e-Tugras and DigiNotars are lurking within those 137+ certs?
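As a sanity check on the number quoted above (it will vary by distro and release), the trust bundle can be counted directly on a Debian/Ubuntu system; the path is the Debian default and other distros keep the bundle elsewhere:

```shell
# Each trusted root in the bundle is one PEM block; count the headers.
grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt
```

Every one of those entries is an unconstrained trust anchor for the whole web.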
The Google post is from 2014 and, sorry to say, it reads like an uninformed and outdated opinion piece. DANE without DNSSEC is obviously not a thing. “DNSSEC is undeployable” based on… what? The poster keeps dumping on IPv6 on the side, but Google’s own statistics show it hitting 45% of their traffic a few days ago. You won’t get anywhere with anything if you start off with the attitude of “meh, it’s not gonna work anyway”.
That issuance will be publicly logged, for anyone to discover, in a Certificate Transparency log. If the CA doesn’t log it, modern browsers won’t accept it as trusted.
and re-taking control of that subtree may well be easier than dealing with a recalcitrant CA.

Citation needed? How?
Say Verisign purposefully issues a couple of malicious certificates tomorrow. Under the CA system, at least you can revoke their root certificate. How exactly are you planning on re-taking control of the com. namespace from Verisign? Are you going to get the root involved? What if it’s actually the root that’s compromised? You can’t seriously tell me it’s logistically easier to set up and globally deploy a brand-new DNS root than it is to revoke a root certificate, even one as widely used as Verisign’s. Who would you even trust to do this? It can’t be ICANN, because by the definition of the problem, ICANN is compromised. By the way, according to Wikipedia, Verisign is involved in distributing the root too.

I’m not really trying to defend the CA model here - it’s incredibly flawed, and you’re dead on that at least DANE limits the blast radius. CT as a solution also assumes that someone is actually watching the CT logs for false issuances, which is flawed as well. But from where I’m sitting, the scales have tipped and the CA system is now the lesser of two evils.
There’s a whole list in the middle of the post? The bit about home routers, for example, doesn’t surprise me at all when the primary reason QUIC is encrypted at the transport level is that middleboxes (like e.g. home routers!) keep breaking when TCP extensions are introduced, meaning that TCP is almost impossible to change.
The poster keeps dumping on IPv6 on the side, but Google’s own statistics show it hitting 45% of their traffic a few days ago.

45% is not that high for a protocol that is 27 years old and has 100% adoption as an expected goal. It also does not bode well for a protocol’s deployability - whether in terms of technical merits or in terms of economics - when traffic carriers deploy horrific hacks like NAT (and in some even worse cases, carrier-grade NAT) rather than actually roll out the new shiny, and we’re still only at 45% despite the fact that every continent on the planet has been under some form of emergency IPv4 allocation rules for 6+ years.
I’m not trying to dunk on IPv6, to be clear; I really would like to see it easily available everywhere. The requirement for NAT, in my opinion, is a criminally underappreciated contributor to the current dynamics of web infrastructure (where it’s extremely difficult to run things out of a home, which naturally incentivizes people to use “cloud” services more). I’m just trying to be realistic. And it sounds like the Google folks are too.
One viable solution nowadays is to decompile, patch, and recompile APKs. This is easily automated with apk-mitm.
I’m not saying this isn’t super annoying, but this does not block security researchers or reverse engineers in the slightest.
If you’re trying to dump network traffic for a particular app, “all” you need to do is pull the APK, unpack it, change the manifest so it respects the user-controlled certificate store, and repack and resign it. Then load it onto your device.
Again, this isn’t convenient, but neither is rooting your phone, and it’s definitely doable if you know what you’re doing. The downside though is that now you can’t take updates to the app without repeating the process.
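Concretely, the manifest change described above amounts to pointing the app at a network security config that trusts the user certificate store. A sketch assuming apktool and apksigner (those invocations are shown as comments since they need an actual APK and keystore; the XML the script writes is the standard Android format, and all file names are illustrative):

```shell
# 1) Decode the APK (requires apktool):
#      apktool d target.apk -o target_src

# 2) Add a network security config that trusts user-installed CAs
#    alongside the system ones:
mkdir -p target_src/res/xml
cat > target_src/res/xml/network_security_config.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="system" />
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>
EOF

# 3) Reference it from the <application> element in AndroidManifest.xml:
#      android:networkSecurityConfig="@xml/network_security_config"

# 4) Rebuild and re-sign with your own key:
#      apktool b target_src -o target_patched.apk
#      apksigner sign --ks my.keystore target_patched.apk
```

After installing the re-signed APK, a user-installed proxy CA (e.g. from mitmproxy or Burp) is accepted by that app without touching the system store at all.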
for the many forks of Android like GrapheneOS & LineageOS, and for advanced device configuration tools like Magisk and its many modules, it probably spells trouble.

I can’t find any justification for the claim that this spells trouble for GrapheneOS (or Lineage): these forks have full access to the AOSP source code and can make whatever changes they need to Android 14. Since I’m running Graphene, I doubt I’m affected by this change.